Speaker 1 0:02
Hi, I'm Dave McMorran, Director of Sales Engineering here at Litmus. Joining me is Hossein Galamorad; he's on my team, and he's going to be driving for most of the time. One of the questions I had for you, since Walker brought up scale, and we're here to talk about scale: who here has multiple plants that they're in charge of or has to implement for? Raise your hands. Great. Has anybody here struggled with scale at this point? And when I talk about scale, I'm talking about multiple plants. Okay, great. Because if you haven't, you will. All right, so that's what we're going to talk about today. I'll do an intro, if we can just go through the slides. There we go. So, who's here today? Our booth is right outside. We've got our CEO, Vatsal Shah, here today; he'll be speaking in a bit with a new announcement. John Younes, our COO; Adam Kennedy, our VP of Global Sales; myself; Hossein; and also Will Knight, who many of you know if you're an SI. So feel free to stop by our booth and introduce yourselves. A little bit about who we are: we're a global company with offices all over the world. We were established in 2015, we are backed by Google, Belden and Mitsubishi, and we're headquartered in Silicon Valley. When we're going after large customers, they usually have a global presence as well, so that helps us cover the globe. A lot of companies say they're about customer success; we really are. We're working with customers, gaining their trust, and acting as consultants with them. We're deployed in thousands of locations all around the world, thousands of edge deployments, and we've done a lot of discrete as well as process industry, so we're touching a lot of different industries. You can see some of the names there. One of our taglines, our latest tagline, is Unlock, Activate, Scale.
What does that mean? Unlock: okay, I've got silos of data. We've heard this before. How do we get that data, and not just from PLCs, but from other devices, DCS systems, databases, SCADA systems, anything you have on that shop floor? How are we getting that data in? It's basically unlocking access to that data and normalizing it. Then we talk about Activate. Okay, we've got some raw data; now what do we do with it? Let's do some analytics on it, let's actually do some math on that data, let's start working with that data at the edge before it goes to the next step. So we're really trying to contextualize, normalize, and create a scalable solution no matter what the environment is. And then Scale, like we're talking about. Scale means a lot of things to different people. We can handle millions of tags; that's not the issue. We can handle the deployment of data going through a UNS, et cetera. But when we talk about scale, we're really focused on: I did it in one plant, now let's replicate it in others. That's what we're going to focus on today during the demo. All right, so we have basically three products. First is Litmus Edge: that's the product that has the drivers built in, hundreds of drivers. We can standardize and normalize the data, contextualize it, do some analytics on it, run containers and other stuff, which we'll get into, and then integrate it with other systems. The second product is really the key to the scale part, which is Litmus Edge Manager. This is where you can connect all the Litmus Edge instances you have deployed globally and pull data from them, pull templates from them, push templates, containers, et cetera. We'll get into that as well. And the third product, which we released not too long ago, is our Litmus UNS.
That's an MQTT broker with auto-scaling, auto-healing, high availability, et cetera, and a data governance layer on top. We'll touch on that as well. We're going to get right into the demo; we don't have a lot of time and I've got a lot to get through. But you can see that scale-by-design part; actually, can we back up? Sorry, everybody. This is talking about how we designed this product to scale from the beginning. Go ahead to the next slide. What I wanted to mention here is that it's not just connecting to the OT layer, to a PLC; it's all the way through the site. You'll see architectures where we're collecting data, maybe from layer 2, normalizing that data, getting it to layer 3, maybe connecting to another system in layer 3 to pull data, then working with the line data, working with other databases, all the way up that stack. As the data goes up that chain, up that pyramid, to your ERP or your cloud or whatever you're doing, you're adding value to that data. That's the way we look at it: it's not just the OT space, it goes all the way up the stack. Great, so now let's jump into the demo. The first thing we're going to do is role-play a little bit. Hossein and I were basically hired by a printing company as the new digital transformation team. What the CTO told us was that they've tried other solutions before; they worked fine, but they had problems with scaling. So whatever you guys do, make sure you can scale the solution. That's what we did, and we chose Litmus. The first thing we're going to do is install Litmus Edge in Dallas and connect to the data sources. The three use cases she wanted us to build were OEE monitoring, machine health monitoring, and a downtime reporting application.
And then she also wanted to publish data up to the enterprise UNS that we had installed in the cloud; that would be the enterprise Litmus UNS. So that's the first step. Hossein and I got Dallas installed and logged in. You'll notice here that we're using a web browser. That's not because we're in the cloud; it's because we're on-prem. We use a modern tech stack, so we log in through a browser. You can see here's a dashboard. We can install on bare metal, as a VM, or as a container in a Kubernetes cluster, so there's a lot of flexibility. The first thing we're going to do is go to Devices and connect that data. So Hossein, I'm going to hand it over to you to talk about how we did the connection through our Device Hub
Speaker 2 6:19
product. Yeah, absolutely. Thanks, Dave. As consultants, we wanted to make sure we have a platform that gives us access to hundreds of drivers, and Litmus has that. You have the ability to connect to the device, add some metadata here, and pull some tags. You'll notice that we have tags that carry the full name; we'll refer to that in a second. And then we have the ability to browse tags here as well. So you can actually choose the tags that you want, add them to your shopping cart, and then basically add them.
Speaker 1 6:55
So one of the things we were able to do is actually browse the MQTT UNS broker that was already there, see what's going on from within the product, and go select the data we wanted. Right away we were able to easily get to that data. The other thing we did is we left the tag name as the MQTT topic, because we want to use that as metadata in our next step. So what we then decided to do is: we've got the raw data, we're ingesting it; now we want to create a data model that allows us to work with that data to build those use cases. We used our Digital Twin feature, which is a data modeling feature within the product, and we decided to go with an equipment model. The first step is to actually build the model. Right here you'll see the static data; this is what's going to be used for the governance layer in the Litmus UNS. We'll get into that; it'll make a little more sense in a bit. But the idea is that it's going to match the UNS we created and match that topic structure, and if it doesn't, we're going to put it into a different bucket than the conforming data. And then the transformation; this is actually pretty neat. What we're doing is taking the tag name, which is that MQTT topic, parsing it out, and adding it to our digital twin model as metadata, because that's going to make it really easy for us to search and query the data after we land it in the database. We're going to land it in a database that's built into the product, and we'll show you some other ways to store data too. The dynamic attributes are where you map the actual raw data; so we're going to map the OEE, the A, the P and the Q, and other data in there. Then we have the schema, so you can start building whatever schema you want.
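The topic-to-metadata transform described here can be sketched in a few lines of Python. The level names and topic layout below are my assumptions for illustration; the actual transform is configured inside Litmus Edge's digital twin feature rather than written as code:

```python
def parse_uns_topic(topic: str) -> dict:
    """Split a UNS-style MQTT topic into named metadata fields.

    Assumes a fixed enterprise/site/area/line/machine hierarchy;
    the real topic structure is whatever your UNS design uses.
    """
    levels = ["enterprise", "site", "area", "line", "machine"]
    parts = topic.split("/")
    # Pair each topic segment with its hierarchy level name
    return dict(zip(levels, parts))

# Hypothetical topic in the style of the demo's Dallas plant
meta = parse_uns_topic("Enterprise/Dallas/Printing/Line1/Laminator")
```

Once the topic is decomposed like this, each payload can be stored with `site`, `line`, and `machine` columns, which is what makes the later queries by line, machine, or shift cheap.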
So you can customize a schema to build your twin depending on what you're trying to do with the use case, and add whatever you want here. Once we have the model built, we can deploy instances of it. What we did here is, we'll look at the laminator: we've added some of that UNS data to make sure we conform to the UNS structure. Then we go to the dynamic attributes; that's where we map the data we want to get into the model. And then we can actually look at the model that we built. We'll subscribe to it, and you can see here we're live-updating the data model we just built, which has all the data we want. This is the model within the system that we are now going to start working with. To do the use cases, we're going to jump to the Analytics feature in the product. We did very simple use cases, because today it's not about the use cases, it's about scale, so we're going to keep this pretty simple. First, the CTO wanted a moving maximum of the availability as a metric. Within Analytics, it's a low-code/no-code type of product: configure it, get the data out, pass the model through. We calculate it, that metadata is passed through, and we store it in the built-in database on Litmus Edge itself. Okay, downtime reporting: this is basically event-driven. When the machine starts and stops, we trigger a message to another data model that does the calculations for the downtime reporting model. And the third use case is basically just filtering the data. If we're getting data in from the UNS with, for instance, "no data" as a string where it's supposed to be a number, we convert that to a null so we can store it in our built-in time-series database as well.
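The two simple transforms just described, a moving maximum of availability and coercing a "no data" string to null, might look like this in plain Python. The window size and the sentinel handling are my assumptions; in the product these are configured as low-code analytics blocks:

```python
from collections import deque

def moving_max(values, window=5):
    """Moving maximum over a sliding window of recent samples."""
    buf = deque(maxlen=window)  # drops the oldest sample automatically
    out = []
    for v in values:
        buf.append(v)
        out.append(max(buf))
    return out

def to_numeric_or_null(value):
    """Coerce non-numeric sentinels like 'no data' to None so the
    value can be stored in a numeric time-series column."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

# Example availability stream from the demo's laminator
avail = moving_max([0.91, 0.88, 0.95, 0.90], window=3)
```

The `to_numeric_or_null` step is what keeps a stray string in the UNS feed from breaking inserts into the time-series store.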
So again, basic use cases. If anybody's interested, our booth is right out there; we'd be happy to drill into any of this in more detail, so please stop by. But for time's sake, we're going to rush through this. The next thing is: we've got our use cases built, we're storing the data locally, and we now need to visualize it. Also in Litmus Edge is a Docker engine, so we can run containers right in Litmus Edge. We chose Grafana as our visualization tool; it works nicely with us, it's free, and we use it all the time, but you can use any visualization you want. You can also run databases; for this use case we're going to use MS SQL as well. You can run custom Python code, databases, visualization tools, ML, whatever you want, and you can write in any language. Walker talked about how low code/no code is not always the solution, and I would agree. It's great to have, but you need the ability to do some pretty intense stuff, and having a Docker engine, and us showing you how to integrate it in, allows you to program in any language, use any kind of application, and really extend the capabilities of Litmus Edge. So I think that was a great point. So what containers do we run? Here we're showing Grafana and MS SQL. The MS SQL database is storing the downtime report data, and Grafana is going to show the dashboard. So we'll just jump into Grafana. This is the Grafana dashboard running in Litmus Edge, tied to our database. We'll look at the OEE at the machine level. The factory is not running that great, but again, it wasn't about the data, it's about scale, so we'll get to that. You can see we have that moving maximum availability number at the top, with the rest of the data just passing through, and then there's the downtime reporting.
What you're going to see here is basically a Pareto chart with a table showing the downtime data. Again, not difficult use cases, but when we store the data with that metadata about where it's located, it makes it really easy for us to query on line, machine, shift, job jacket and all those kinds of things. When you have that context, it just makes these use cases so much easier to do. Okay, the third task was to publish those data models up to the Litmus UNS. We have our Integrations section here: integrations to message brokers, databases, all the cloud vendors, and systems like Databricks and Cognite as well. We're always coming up with new integrations and new drivers. You'll see here, we'll go into Litmus UNS; we make it really easy to integrate to our Litmus UNS. You'll see the six data models we built before, publishing out. I had Hossein show MQTT Explorer, just because I know everybody here is familiar and comfortable with it. So here's the data, and you can see we're sending it up from Dallas, and we sent those data models up to our enterprise UNS. Okay, so those are the use cases. Let's talk about the UNS, since we're on it. Here's our UNS; again, browser-based. Everything we do is browser-based, so there's no need for development tools to configure anything. Here's a dashboard showing connected clients, data throughput, those kinds of things. The governance layer is where you set up your UNS. When you configure it, you can set up your UNS; it actually goes six levels deep. We're only showing four here,
Speaker 1 13:52
based on what our UNS structure is. You'll see that we're now putting rules in it, so if somebody comes in and fat-fingers Dallas or some other word, it's going to go into a non-conforming bucket. You can set up rules to make sure you're matching that structure. If we go to Explorer, you'll actually see the data coming out from Dallas for ProveIt; we're following that structure, and you see all those data models. If we want to go look at the data itself, I don't need MQTT Explorer; we've got an MQTT client built in. I just pop over here and see the data, to make sure I'm getting it in the right format. So really quickly, we can create a UNS from the driver side all the way up. In this case it happened to be an MQTT broker, but we'll also show you how we take PLCs directly to a UNS. Okay, so now we've got those use cases done; now we're going to talk about scale, the fun part. I talked about the three products we have; let's talk about the second one, Litmus Edge Manager. We went to the CTO and showed her all the use cases. She was really happy about it, but said: okay, that's fine, but I've seen that before. I want to do that in two more plants now, in Houston and Austin. They have the same setup there, so we're going to take those use cases in Dallas and replicate them in Austin and Houston using Litmus Edge Manager, and that data is also going to go to the UNS. We're going to add that right in front of you here. So we're going to jump into Litmus Edge Manager. The first section is to organize your data. We called it ProveIt, but you can call it whatever you want; that's the first layer of hierarchy. When you drill into that, we have what we call projects.
That could be a line, a plant name, a state, whatever you want. We're going to talk about print factories here. All right, so you can see we've got three devices connected. This is a dashboard telling us how our edge devices are doing; we can look at CPU, memory, RAM, messages going through, all that kind of stuff. We'll jump down to Edge Devices. You'll now see that we've got those three edge devices already deployed. So the first question is: how do I deploy those? Well, you can go buy a Dell 5200 gateway, install it, and ship it out. Or you can use something like Dell NativeEdge, which is new; they're doing the next presentation, so I encourage you to stick around for that. It's really cool tech where you can just buy a Dell, plug it in, and it'll install everything for you. But you can also run as a VM, and a lot of companies have automated some of those processes, so there are different ways of getting it deployed. You'll notice here that we've got information in Dallas, with devices connected, but there's nothing in Houston and Austin. And just to prove it, we'll go to Houston and Austin: you'll see there are no devices, no tags flowing through, nothing going on in those two. Okay, so we're going to go back, and what we're going to do is deploy a template. We pulled the template up from Dallas, and now we're going to deploy it in Austin and Houston. So what did we do? And I know a bunch of you are going to say: hey, no two factories are the same, they're always different. That's fine. We don't have a lot of time today, so I'm going to cut some corners here, but we have best practices on how to deploy at scale like that across different factories. Stop by my booth; I'd be happy to talk to you about it.
Okay, so what we're going to do is deploy those standard templates. They're JSON files, so Hossein basically opened the file, did a find-and-replace of Dallas with Houston, and another one with Austin, and that's all we did for those templates. Now we're going to deploy them down. It only takes about a minute or two to actually deploy, but what it's going to do is deploy those connections to the UNS, the OPC UA server and the database; it's going to add the tags, the digital twin models that we created, the analytics, the integrations, everything we talked about, including the Grafana and MS SQL containers. So here we are in Austin. You can see that we're connected, the digital twins are deployed, the analytics are deployed, the containers are deployed as well, and the integration is sending data up to the UNS. If we go back to MQTT Explorer real quick, we have Austin and Houston. So just that quickly, we were able to deploy those templates and get the data going out to the UNS. And if we go back to the UNS and explore, you'll now see, under ProveIt: Dallas, Austin, Houston. We just basically scaled to those two plants. So we went back to the CTO. She was super excited that we showed scale, very happy, and said: I've got another challenge for you. We've got another plant we just acquired. It's a pulp and paper plant, so it's different. But the problem is they don't have a UNS, and they don't have an OPC UA server. What do we do? We said: well, we'll build a UNS for you, and we'll do it remotely. So we used Dell NativeEdge: we got a Litmus UNS and a Litmus Edge spun up, and we connected that to Litmus Edge Manager.
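The find-and-replace retargeting of an exported JSON template, as described above, is simple enough to sketch. The template fields here are invented for illustration; a real Litmus Edge template export has its own schema, and the talk notes there are better practices for plants that genuinely differ:

```python
import json

def retarget_template(template_text: str, old_site: str, new_site: str) -> dict:
    """Retarget an exported edge template from one site to another
    with a plain find-and-replace over the raw JSON text.

    Assumes the site name only appears where it should be replaced,
    which is why this is a shortcut rather than a best practice.
    """
    return json.loads(template_text.replace(old_site, new_site))

# Hypothetical minimal template in the spirit of the Dallas export
dallas_template = json.dumps({
    "site": "Dallas",
    "integrations": [{"topic": "Enterprise/Dallas/Printing/Line1"}],
})
houston = retarget_template(dallas_template, "Dallas", "Houston")
```

Doing the replace on the serialized text, rather than walking the parsed structure, is exactly the "open the file, find-replace Dallas" shortcut from the demo; it touches every occurrence, including topic paths nested inside integrations.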
So here's the architecture. You can see we've got the pulp and paper plant feeding the print plants, and we're going to deploy the template down there. We're running out of time a little bit, but we'll first deploy the template, which we'll do really quickly. What this template includes is actually 14 containers acting as PLC simulators; we're going to push those down. It also includes Grafana and the SQL database, and all the connectivity: Siemens, OPC UA and Modbus PLCs that we're going to deploy. All of those get sent down; we do the connections, the templating, the analytics, the integrations, all deployed here at once. So we're in Litmus Edge Manager, we're deploying it, and it's just going to take a little time. Meanwhile, we can go into the Waco instance. It won't be running yet; it'll take a couple of minutes, but if we refresh the screen you'll see we've actually got 14 devices already added and data starting to flow through, though not all the simulators are spun up yet. The idea is that we can deploy all of that very quickly and easily. And if we go back to Litmus Edge Manager, another thing we see is that it's not just templates that we can maintain. We can go to Applications: you can build your own marketplace with use cases and push those down to the Litmus Edges as you want. This could be a machine learning model, a new visualization, a new Grafana dashboard you want to push down; you can do that. So it's not just templating configuration; you can actually manage it. And we can also do updates over the air.
For one of our big customers, we did an upgrade of 80 Litmus Edges around the world in two hours. So we're doing a lot of management as well. There's also alerting functionality: maybe the UNS was disconnected at a site, so let's send a Slack message to somebody to go check it out; the CPU is running too high, we can make alerts around that. So there are a lot of ways we can alert. It's not just about scaling templates; it's also about monitoring and managing your software on day two, after you've installed it all, through its life cycle. So just to recap before I bring up our CEO, who has some exciting news on new features: we created three use cases in Dallas, we replicated those use cases in two more plants, and then we actually deployed a UNS in Waco. Did we show the UNS in Waco? Just to show you that we did it: you can actually start to see the data being sent up to the UNS. And if we go back to Litmus Edge real quick, just to show the PLC connections, this is the Waco system. You'll see we're not connected yet because some of those containers are still spinning up, but for lack of time, we're going to end it here. Happy to show it to you fully running. With that being said, I'm going to invite up Vatsal Shah, our CEO and founder, and he's going to talk about some exciting new features.
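A day-two alert rule like the ones Dave mentions (UNS disconnected, CPU too high, send a Slack message) can be sketched as follows. The threshold, message format, and use of a Slack incoming webhook are my assumptions; Litmus Edge Manager's alerting is configured in the product, not hand-coded:

```python
import json
import urllib.request

CPU_THRESHOLD = 90.0  # percent; illustrative threshold

def check_cpu(site, cpu_percent, webhook_url=None):
    """Return an alert message when a site's edge device CPU crosses
    the threshold, optionally posting it to a Slack incoming webhook."""
    if cpu_percent <= CPU_THRESHOLD:
        return None
    message = f"ALERT: {site} edge device CPU at {cpu_percent:.0f}%"
    if webhook_url:  # only post when a webhook is configured
        body = json.dumps({"text": message}).encode()
        req = urllib.request.Request(
            webhook_url, data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    return message

msg = check_cpu("Dallas", 97.0)
```

Keeping the webhook optional lets the same rule run in dry-run mode during testing and only notify once a real channel is wired in.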
Speaker 3 22:27
Okay, hopefully everybody can hear me. So yes, one of the mandatory requirements was: how much does it cost? To set up the whole thing that you just saw, it was 18,000 US dollars per year as a license. Litmus is a subscription-based license on a yearly basis, and we have three tiers of licensing. There is no cost for Litmus Edge Manager; it's just our way to help customers scale. Of course, if you want to talk more, come talk with us as well. I have six minutes, and I'm going to go into something amazing that we have been working on. Over the last almost eight, nine years, we have been working across the industry: food and beverage, pulp and paper, automotive tier 1, 2, 3, pharma and chemical. The whole idea was that we wanted to create the foundational data platform that scales across the industry. Currently, if you go to any of the thousands of sites that use Litmus, you're going to find that customers trust Litmus across these five pieces of their data journey. We are one of the best in the market for data collection. We have all the drivers; we don't sell drivers. We can connect to modern systems, legacy systems, it doesn't matter; we can connect to historian systems. As soon as we get the data, we have best-in-class data modeling capabilities, so we can model the data in a highly dynamic way: bring the models from OSIsoft, bring the models from Asset Framework, and everything else that is already possible. Then, as you saw, we can analyze the data right at the edge without going to the cloud; a low-code/no-code environment allows you to bring your own apps or run the analytics. That was all edge. Then we introduced the Litmus UNS component; that was one and a half years ago. And Litmus Edge Manager works very well to scale from one site to thousands of sites.
One of our largest customers has 12,000 remote energy sites where they are utilizing Litmus Edge Manager. On an average manufacturing site, they start with 50 and go all the way to 200; we have seen that successfully done. So customers trust us for this, and it was our responsibility to understand what we can do with AI that brings more and more value on top of the data that we have, and how we help our customers scale their initiatives faster. All of you might remember the day ChatGPT dropped. It was just right after Thanksgiving, and it was like: forget about the holidays, I want to start playing with ChatGPT right now. That's what happened. And exactly at that time, what we saw was: how does this thing actually know all of that? The first thing (it was Sunday, I still remember) was: how can we bring this to the industrial world? Give me a hundred ideas right now for what we can do to bring all these advancements in AI into the data set that we have. Of course, we pioneered AI-ready industrial data; we have been using that term for the last five years, MLOps and everything else. But it was never enough, because it was too clunky; it was built for consumer applications. If you keep thinking about it: right now, if you talk to any CIOs or CTOs, or to end users like yourselves, when AI comes along it's always: okay, we're going to do a copilot, we're going to do chatbots. You're not going to ask a frontline worker to send their question to a chatbot; that's just too lazy. Then the next thing consultants started talking about is: let's do a graph database, let's put GraphQL on it, and it might just give us some insight. That doesn't work.
Then we jumped on knowledge graphs; again, not just us, but the whole industry. And then we started talking about LLMs and SLMs: how do we do fine-tuning, causal AI, and everything else? So there is a lot of technical deficit that already exists. Even if Litmus gives you the cleanest, most contextualized, accessible data to utilize for industrial AI, there are like five more steps required, which is not even remotely feasible to do. So what we thought is: if we ever want to bring this to the customer base that we have, we have to simplify. So we announced four products yesterday, and these products are exclusively announced at ProveIt; we are not even putting them on our website yet. We have been showing them at our booth since yesterday. What we decided is: we don't want chatbots; we don't want somebody to just type something and get a copilot out of it. 95% of our customers already use our analytics service. So why can't we put in one block, one processor, that has AI as part of it? This is mind-blowing, this is crazy. We are going to show the live demonstration, so there is no BS on this one. The idea was, we went back to the plant floor, and we started understanding that persona-based AI is probably one of the best ways to introduce AI in the manufacturing environment. How much time does it take to learn a CNC machine? You go to trade school, you train on site, you don't touch the CNC machine for five years, then you become a CNC expert. Can I actually bring that CNC expert's knowledge from some LLM or fine-tuned SLM, host it inside that node itself, and just give a plain-text instruction that when an alert comes from FANUC, I want to analyze that alert? Do it right now.
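The persona-based idea described here, wrapping a machine alert in an expert-persona instruction before it reaches a language model, can be illustrated with a small sketch. This is purely illustrative of the concept; the actual Litmus AI processor block, its prompt format, and the models it uses are not shown in this talk's transcript:

```python
def build_persona_prompt(persona: str, alert: dict) -> str:
    """Wrap a machine alert in a persona-framed instruction for an LLM.

    The persona text and alert fields are hypothetical; a real
    deployment would use the vendor's configured processor block.
    """
    return (
        f"You are a {persona} with years of shop-floor experience. "
        f"Analyze this alert and suggest a likely cause and next step.\n"
        f"Machine: {alert['machine']}\n"
        f"Code: {alert['code']}\n"
        f"Message: {alert['message']}"
    )

# Hypothetical FANUC alert in the spirit of the CNC example
prompt = build_persona_prompt(
    "CNC machine expert",
    {"machine": "FANUC-01", "code": "SV0401", "message": "Servo alarm"},
)
```

The point of the persona framing is that the frontline worker never types anything; the alert itself triggers the analysis with the expert context already attached.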
So let's open the demonstration. No, not that one, the other one. Okay, we are running out of time, so let me just quickly finish with the second announcement that we are making. It's really about a multi-agent setup. We took the scale aspect of the Litmus platform and asked: how do we allow a multi-agent setup? We opened up our API services from Litmus Edge, Edge Manager, UNS and everything else, and we created agents that scale. This is the IT admin of the year 2026, and we expect them to be very lazy. So what do we do? I'm going to open up the video, please. This is the multi-agent interface that we have. It starts with a Litmus Edge agent and a Litmus UNS agent. The global IT admin opens it up and says: can you deploy Litmus Edge on the 30 sites that we have? Literally, we are deploying 30 sites in 11 seconds. The Kubernetes cluster already exists; the Dell NativeEdge cluster already exists. So the supervisor went to the Kubernetes agent, scanned how many Kubernetes instances are available, dynamically created the Kubernetes manifest files, and deployed Litmus Edge on 30 sites, live in front of you, in real time. Then what we ask is: go and configure it with the starter template. Again, you don't require the Litmus Edge Manager interface that you just saw before. Can you deploy the starter template of the unified namespace? Click submit: 38 seconds it took to deploy the Litmus Edge starter template to 30 sites, and it's doing it right now. Now check the status. Now deploy Allen-Bradley. And because we are running out of time, I'm just going to skip through 20 seconds of video.
What we are going to do is tell it: go ahead and connect to the Allen-Bradley system on the plant floor, connect it to the unified namespace layout, and configure the connection between Litmus Edge and Litmus UNS. So now it is going to go to 30 instances and dynamically put Dallas, Waco and all the other sites into the unified namespace layout, just through the agent. What this represents is that you can actually utilize the advancements in AI itself, bring them into an agentic AI structure, and scale it up with just plain text. No coding required, not a single click required on the website itself, nothing. And this is the brand-new thing that we are working on. The other one, we are running out of time, so please stop by and I'll show you the CNC one as well.
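The unified-namespace mapping described above boils down to giving every plant-floor tag a site-aware topic path. A minimal sketch, assuming an enterprise/site/area/asset/tag convention (this convention is my assumption, not Litmus's published schema):

```python
# Sketch of the UNS mapping step: a plant-floor tag (e.g. from an
# Allen-Bradley PLC) is re-published under a unified-namespace topic.
# The topic convention below is an illustrative assumption.

def uns_topic(enterprise: str, site: str, area: str, asset: str, tag: str) -> str:
    """Compose a unified-namespace topic path from the asset hierarchy."""
    return "/".join([enterprise, site, area, asset, tag])

# e.g. a feed-rate tag from a Dallas CNC landing in the namespace
topic = uns_topic("acme", "dallas", "machining", "cnc-01", "feed_rate")
print(topic)  # acme/dallas/machining/cnc-01/feed_rate
```

Because the path is purely a function of the hierarchy, an agent can place every new site (Dallas, Waco, and so on) into the layout without any per-site hand configuration.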
Speaker 4 31:22
see spent a million dollars. Hey, don't turn the mic off. Thank you. There you go.
Walker Reynolds 31:35
So Vatsal, number one, do you guys want to show the demo? We can ask a question while he loads the demo, if you want to do that. Yeah, okay, that'd be awesome. What do you call that? So
Speaker 3 31:47
the first one was the AI workflow itself, which is: utilizing Litmus Edge, all the data that we already collect, already contextualized, now just gets put into the AI service itself, which allows you to analyze the data.
Walker Reynolds 32:02
And number two, I wanted to say this: what I observe with Litmus all the time is that most people don't know that Litmus is kind of everywhere, and a lot of people are using it without even knowing they're using it. Yesterday, half of what Google showed in their presentation is Litmus Edge and the MDE hooks that go up into Google Cloud. I mean, Litmus Edge is really everywhere. And I strongly encourage you to watch the Dell demo, where they're going to deploy NativeEdge, Litmus and a whole stack: provision a server, an operating system and an application in minutes and deploy a complete solution. Not just abstracted like we're doing here, right? I mean, it's software abstraction that we're viewing, but it's going to be physical hardware, operating system software and application software next. But for the average client who engages with Litmus for the very first time, what is the application? What is most common in a 16-week engagement? I'm landing Litmus Edge, what am I doing? What does that journey look like? Because when I talk to people and they ask me about Litmus, hey, you talk about Litmus all the time, Walker, what does the engagement look like across the first 16 weeks? The first problem
Speaker 3 33:22
Got it. So for just getting started with the product itself, it's a self-service trial download. Go to the website and you should be able to download the trial; there is a license already loaded as part of it, so everything works out of the box. That's one. In terms of deploying on one site itself, it takes anywhere from five hours to five days, depending on the amount of data that we are collecting. So if the prerequisites are already in place, we should be able to collect tens of thousands of tags and everything else within that five-hours-to-five-days time frame. Scaling it up from one site to 10 sites, or from one site to 50 sites, takes anywhere from a month to three months, depending on IT availability. We normally work with customers on two different types of use case. Use case number one is enabling the next generation of manufacturing apps. If you want to run Tulip, Google Cloud, Databricks, these modern applications on top of the data, we bridge right there. There is a fantastic announcement we are making with one of the biggest cloud vendors at Hannover Messe, and there is also bundling we are doing. So you probably do not even know it, but you are already using Litmus across many of these OEMs; a few of the largest automation OEMs also license Litmus. The idea is, the deployment depends on the project itself; it takes anywhere from five hours.
Walker Reynolds 34:39
And one thing I did want to say is that the factory we actually created the virtual factory from runs at a very high OEE, so we had to pick a time to record when they were underperforming. So we actually recorded the plant, and we had to manipulate the data to give the vendors' solutions problems to solve; many of the problems had already been solved. So we had to record the plant in a way that gave them data with problems in it. We knew there were 13 to 15 problems, and I wanted to see if the vendors would identify them. And they all, in fact, were identified. Tulip showed you that yesterday, with the gap with the material handler. So this was excellent. By the way, if you want to show that demo, that'd be awesome. Yeah, live. Let
Speaker 3 35:23
me quickly show this. So this is the live demonstration of the product itself. Now, when I was working as a SCADA engineer designing systems, my biggest enemy was the lookup table. Really: I made one gap in an error code, a capital M instead of a small m, and dug down a hole for like three days before I could find it. So when I got access to this Litmus AI system, the first thing was, I'm gonna kill the lookup table. That's the first goal that we set. We can already collect data from all the CNC machines, Mazak and everything else. And one of the default complaints a CNC machine always makes is: my feed rate is so high. It's just the default; like 50% of the time the CNC machine keeps making that alert. So we collected all the data, we contextualized it, and then we just passed it to this magical AI processor. You can run this processor locally with a local SLM, or you can point it to your cloud endpoint, a private endpoint from OpenAI, AWS Bedrock or anywhere else. And what did I tell it? Behave like a CNC expert who gives concise responses. That's all. Yesterday at our booth, people were playing with everything from the fish industry to boilers to compressors. You can ask it in plain text: behave like a CNC expert. I'm not asking anything else. So the data came in, this is live, and all we are going to do is pass it in and tell it: analyze the alert for me. And it did, right here. It made the call to the local SLM: higher-than-normal feed rate, it goes like this. Okay, good stuff. Now create an action plan on top of it. So I wrote another prompt: once the CNC operator identifies why the feed rate is so high, go ahead and fix it, tell me how to fix it. Plain text. This is like mind-blowing.
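The persona-prompt step described here, wrapping a live alert in a "behave like a CNC expert" instruction before handing it to a local or private model, can be sketched as follows. The endpoint shape, model name, alert fields and prompt wording are all illustrative assumptions, not the product's actual API.

```python
# Hypothetical sketch of the "AI processor" block: wrap a live alert in a
# persona system prompt and build the request body an Ollama-style local
# endpoint would accept. All names and fields here are assumptions.

def build_analysis_request(alert: dict, persona: str,
                           model: str = "llama3") -> dict:
    """Combine a persona instruction and alert context into one prompt."""
    prompt = (
        f"{persona}\n"
        f"Alert from {alert['asset']}: {alert['message']} "
        f"(value={alert['value']} {alert['unit']}). "
        "Analyze the alert. Make it concise."
    )
    return {"model": model, "prompt": prompt, "stream": False}

alert = {"asset": "CNC-07", "message": "feed rate higher than normal",
         "value": 1450, "unit": "mm/min"}
req = build_analysis_request(alert, "Behave like a CNC expert.")
print(req["prompt"])
```

Swapping the persona string ("behave like a boiler expert", "behave like a compressor expert") is the only change needed to reuse the same block on a different asset class, which is the point of persona-based AI.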
Imagine writing this logic in a previous version of Litmus: you're gonna spend 15 minutes, 20 minutes or two hours, depending on the complexity of it. Right now I'm just putting it in plain text: tell me how to fix this alert that I got. And then I'm going to see the alert itself, go ahead, and it's going to give you an action plan with a deadline: do it in one hour, do it in five hours, this is a day-two activity to fix this feed-rate issue. We messed around with at least 30 different types of assets just yesterday: boilers, compressors, assembly lines, cranes, packaging lines and this and that. It gave flawless responses for identifying the alert and fixing it. Okay, that's good, that's summarizing. Now let's go one step further. We introduced a reasoning agent as part of the same infrastructure as well. We were able to bring Qwen 2.5 and mathematical LLM models running as a marketplace application, and we fed in 152 manufacturing KPIs, there are 152 from what we understand, OEE and uptime and downtime and everything, each as a mathematical formula. Now this local reasoning will tell you: OEE is dropping, so quality might be down or yield might be lower. It can go three layers deep. Okay, yield is down; for what reason is yield down; which asset is creating the issue; and how do you fix it. So you can have local reasoning which is mathematically powered as well. The first one is much more about summarization and understanding, and the second one is much more about reasoning with the mathematical formulas. This is crazy.
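The KPI-reasoning idea rests on the fact that many manufacturing KPIs are closed-form formulas. OEE, for instance, is availability times performance times quality, so "why is OEE dropping?" can be answered by checking which factor is lagging. A minimal sketch of that drill-down (the thresholds and factor names are the standard OEE definition; the rule itself is my illustration):

```python
# Sketch of formula-backed reasoning: encode OEE as its standard formula
# and point at the factor dragging it down, the way the reasoning agent
# drills from "OEE is low" to "yield/quality is the cause".

def oee(availability: float, performance: float, quality: float) -> float:
    """Standard OEE definition: product of its three factors (0..1 each)."""
    return availability * performance * quality

def weakest_factor(availability: float, performance: float,
                   quality: float) -> str:
    """Name the factor contributing most to the OEE drop."""
    factors = {"availability": availability,
               "performance": performance,
               "quality": quality}
    return min(factors, key=factors.get)

a, p, q = 0.95, 0.90, 0.70          # quality (yield) is the laggard here
print(round(oee(a, p, q), 3))       # 0.599
print(weakest_factor(a, p, q))      # quality
```

The agent's version goes further, recursing into *why* quality is down and which asset is responsible, but each layer is the same move: evaluate a formula, compare its terms, descend into the worst one.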
Walker Reynolds 38:56
The first one, I mean it, people are gonna be way more impressed with the first one. Here's why: that turns an administrative control into an engineering control. Because all those things you just did there, those are administrative controls, human beings overseeing interventions. Human beings do that right now, and human beings can't be everywhere. This is on the stream: as the event's coming through, now what we can do is take something that we codify in a procedure manual and turn that into an engineering control on the data stream. I mean, that is fucking huge, bro. Like, seriously, yeah,
Speaker 3 39:37
that is huge. And the whole idea is that data to decisions happens in five blocks. That's just crazy. Normally you do it across three different systems: you collect the data, you store it, then you put your vector database on top of it and create something ChatGPT-like. Right here, five blocks give you data to decisions. It sends the alert to the work order management system in SAP using integrations, and the work order was sent with a five-day plan of what you're supposed to do on a weekly basis to not mess around with the feed rate. Use the exact same logic with any asset that you have on the plant floor, and it will still work.
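The "five blocks, data to decisions" flow can be sketched as a chain of five stages ending in a work-order payload for an SAP-style integration. Stage names, the work-order fields and the `SAP-PM` target are illustrative assumptions, not the actual integration schema.

```python
# Sketch of the five-block pipeline: collect -> contextualize -> analyze
# -> plan -> integrate, ending in a work-order dict. All field names are
# illustrative assumptions.

def collect(raw):
    tag, value = raw
    return {"tag": tag, "value": value}

def contextualize(d):                 # attach asset/site context
    return {**d, "asset": "CNC-07", "site": "dallas"}

def analyze(d):                       # the AI-processor step, stubbed
    return {**d, "finding": "feed rate above limit"}

def plan(d):                          # the action-plan step, stubbed
    return {**d, "action": "review feed override weekly"}

def to_work_order(d):                 # hand off to the work-order system
    return {"system": "SAP-PM", "asset": d["asset"],
            "description": d["action"], "due_days": 5}

wo = to_work_order(plan(analyze(contextualize(collect(("feed_rate", 1450))))))
print(wo["system"], wo["due_days"])  # SAP-PM 5
```

The two middle stages are stubs standing in for the model calls shown in the demo; the structural point is that the whole decision path is one linear chain, not three separate systems glued together.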
Walker Reynolds 40:17
So what are you calling this? If you're talking about Litmus and you go, hey, I want to see this, what is it you're asking for? We're not
Speaker 3 40:25
calling anything different. We just want AI to be part of the core data infrastructure. So I just want
Walker Reynolds 40:29
to see the AI feature in Litmus Edge. Yeah, okay, got it. All right. AI is already
Speaker 3 40:33
part of analytics, and 100% of our customers will get access to this feature; we are not hiding it from anyone. And the second thing, one of the most frequent questions was: are you going to call OpenAI or any other third-party services? No. As part of the marketplace application, we work with Nvidia, and we now have full support for hosting Ollama models as part of Litmus Edge as well. So bring your local fine-tuned models, or large models, whatever you want, and run them right inside the Litmus Edge system. It runs locally.
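Pointing the same analytics block at a locally hosted Ollama model instead of a cloud endpoint is, in practice, just a different HTTP target. A minimal sketch: the URL is Ollama's documented default endpoint; the model tag and the Litmus-side wiring are assumptions.

```python
# Sketch: building a request for a locally hosted Ollama model, so no data
# leaves the plant. The model tag and prompt wiring are assumptions; the
# endpoint and port are Ollama's documented defaults.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default

def local_generate_request(prompt: str,
                           model: str = "llama3") -> urllib.request.Request:
    """Build a POST request for a local Ollama generate call."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})

req = local_generate_request("Behave like a CNC expert. Analyze: feed rate high.")
print(req.full_url)  # http://localhost:11434/api/generate
# urllib.request.urlopen(req)  # would run fully on-prem; no third-party call
```

The actual send is left commented out since it needs a running Ollama instance; the design point is that everything upstream of that call, prompt, persona, alert context, is identical whether the endpoint is local or a private cloud one.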
Speaker 5 41:11
Wow, that was incredible. Thank you. So I do want to take some time to get to some of the audience questions. Hi, I'm Mike. Thank you. I'm going to combine a few of these with our anchor question that we ask everyone who comes up here to present: in your opinion, what is the best alternative to accomplish all the things you just showed? Because some aspects of Litmus kind of look like HighByte, and some kind of look like Ignition UDTs. And, you don't have to call out those softwares, but explain why you think your solution is better,
Speaker 3 41:43
yeah. So there was a slide prepared for that one. So, like, what happens is,
Unknown Speaker 41:50
it was coming, yeah.
Speaker 3 41:52
Like, what happens is, the best-of-breed approach has been going on forever. When I graduated out of university, the first product that I got was Kepware for connectivity; that's normally the first thing you will ever use, or Ignition nowadays. Then you are going to purchase another vendor, a DataOps type of vendor; there are five commercial vendors and at least five open-source vendors out there, good ones. Then you are going to store the data in some time-series database, Timescale or InfluxDB. And then you're going to analyze the data with somebody else, and then try to centrally manage it with containers, Kubernetes, Dell NativeEdge, however you want to do it. This is one of the biggest problems in the industry right now: multi-vendor complexity does not scale very well. Enterprises are looking for consistent results across multiple sites all the time. My maintenance window needs to be very low; my uptime needs to be at an all-time high. When you are dealing with multiple vendors, even if each is best in breed, the stack is not best in breed across the industry itself, because I have to maintain my integration with each third-party vendor all the time. So Litmus believes there is an approach which is a single throat to choke, and that's us. We collect the data, we contextualize it, we analyze it. We took a much larger bite than we can ever chew, but our team and our approach with customers, consistent, feedback-driven development, allow us to be where we are. We are best in class for industrial foundational data platforms,
Speaker 5 43:15
awesome. Thank you. Next question: so why Litmus instead of HiveMQ? Because you have your own broker technology, right? How does Litmus compare on throughput?
Speaker 3 43:29
We introduced Litmus UNS itself one and a half years ago. Throughput is never a concern for the customers; we can go for hundreds of thousands or millions of messages per second on a single cluster. One of the biggest advantages that our customers really wanted was consolidation of the architecture, so there is a single pane of glass to manage the whole infrastructure. That was the first one. And again, if customers have already purchased HiveMQ, they are one of the best MQTT brokers out there; by all means, use them. My point would be, if you are looking for a consolidated design, this is a very low-cost, enterprise-grade MQTT broker. It costs like $10,000 to $15,000 to just be up and running with it, with full clustered support, high availability, resilience and governance built in.
Speaker 5 44:17
So speaking of cost, can you actually go over that again? You did share the cost for this build, but honestly, you showed so many features. How does the cost scale or change based on what you're using?
Speaker 3 44:30
So we figured out the pricing based on how customers are using our product. The first tier, let's say when we were deploying the Dallas site, is the foundational platform. That means you're just getting started: you have all the data sources, you are contextualizing, and you are pushing it further. That normally goes for $18,000 per year as a starting point. Of course, for enterprise agreements it might go lower or higher, depending on the usage itself. Then, once customers are ready and they are scaling across five sites, 10 sites, 15 sites, there is a second tier of plan, which is the Growth plan, and the Scale plan is normally used by enterprises who are using more security and advanced features. So our pricing scales based on how customers are getting value out of our platform.
Speaker 5 45:11
Awesome. What is the fastest data collection frequency you can handle? 100 milliseconds. Okay. And I know we're at time, so we'll do one more question. How do you leverage SIs to help with mapping data? Or is this something you deploy completely with your internal resources? No,
Speaker 3 45:33
so we have a huge SI network, anywhere from GSIs all the way to the local system integrators that we work with. Will Knight, our VP of Partnerships, is right outside. Once again, we work with local distributors, system integrator partners and everyone else. If you're using us through other OEM vendors or our partners, they also come with their own certifications and programs. Go to Litmus Academy; there are system integrator certifications as well as end-user certifications available as part of the program.
Speaker 5 46:04
Amazing. All right, we are over time, so I know we have more questions. I highly encourage all of you to go check out their booth and keep the conversation going. I'm going to hand it back over now to my other co-host, Jeff Winter, over on Dell Stage One. Awesome.
Unknown Speaker 46:19
Awesome job, guys. Awesome. Thank you.
Unknown Speaker 46:22
You guys are great.