Speaker 1 0:05
So thank you, everyone, for attending our presentation. I'm going to walk through a few slides and talk about UNS, because I think it's important. What we've seen this morning and yesterday is a number of applications and a number of pieces of infrastructure for how to use a UNS. As Jeff said, we've built a number of UNSs — we've been building them for the last five years — so I'm going to talk about some of our experience. A lot of you, whether you're system integrators or end users, are going to need to start building a UNS, and you need to know where to go. So that's where I'm going to focus, and then I'll turn it over to Jeff and Aaron to show some awesome demos.

These are a few of our customers — the ones that have gone public. It's always hard as a vendor to get companies to publicly say they use your software, but these are some of the ones that have, and in fact, for a number of these there are videos on LinkedIn or YouTube that talk about how they're using our software. The reality is, as Jeff said, we've got over 80 customers and over 400 sites across a really wide range of manufacturing and industrial companies. All these companies have deployed it; they've implemented UNS. And that's what we want to show here: how do you build the UNS?

When we talk to companies about building a UNS, though, it's not about the UNS. You don't want to have a project that is "let's have a UNS." You have to start with business problems — I think Walker talked about that earlier, or yesterday. We're here to solve business problems; we're here to drive manufacturing. It's all about making better decisions faster. And a lot of the time, in terms of use cases, yes, we need to get operations better access to data, but we also want quality, maintenance, production, supply chain, design engineering — they all want industrial data, and today they get it in reports that are two weeks old on a good day. In many cases, they don't get access to good data at all. So how do we get them access to all the data in the factory, when they need it, the way they need it? That's what UNS brings us.

But what we find is companies have tons of projects. Coming up with projects is not the problem. The problem is there's a bottleneck, and that bottleneck is actually the OT and IT teams, because the approach most people take to integrating systems is generally point-to-point, and it doesn't scale, and it's impossible to maintain. The OT and IT teams spend more time maintaining existing connections, existing equipment, and existing systems than they do deploying new stuff. So how do we make them faster? That's really the power of the UNS: making the OT and IT teams faster so they can deploy projects faster and at greater scale — and not only deploy them, but maintain them over time — is where we see the most value. Because what people are seeing right now is that as they add more projects — everyone's doing Industry 4.0 or digital transformation projects — all they're doing is spending time maintaining, because at the end of the day the needs for data keep changing, and your factory changes too. You may have a new product, or you may want to refine the way you've implemented your automation in one area of the factory.
All of a sudden, all of your data integrations break. We need better tools, we need better approaches, and the UNS is the most powerful way of doing that. So when we engage with customers, we always say you need to start with what a UNS is. And the UNS is all about democratizing your data. It's about being able to build data sets based on the semantic models they talked about last night. It's about being able to organize your data so you can easily find it, and it's about building it for the consumer of the data. Notice I did not talk about MQTT, and I did not talk about a topic namespace — I talked about making data available, and that's what UNS is really all about.

So as you take on building a UNS, start out by looking at: who's going to consume this data? What systems are going to consume it? Is it Ignition? Is it Tulip? Is it MaintainX? Is it Grafana? Is it Snowflake? What are the applications? How are we using it — are we dashboarding, are we using AI? What are the models, what are the different structures that we need to collect? What are the sources? It's not just about telemetry. A lot of people start with telemetry, and telemetry is a key source of data in a factory, but there's a heck of a lot more data out there. There are a lot of other data systems on the factory floor — quality and maintenance systems that are rarely, if ever, connected to industrial automation. And then there's this stack. We talk about this stack, and there are four layers, and as you walk up the stack you've got one MES block, one ERP block, and one SCADA block. But every factory that I go into has multiple MESs, multiple SCADAs and HMIs, and sometimes even multiple ERPs across the enterprise. So how do we integrate those systems? Those are the sorts of things you need to consider. And then: how frequently are we moving the data?

So what does this look like? Generally, as I said, a lot of us here are on the OT side. So we start out putting an Intelligence Hub in the OT, and we're collecting data — whether it's telemetry data or imagery or files or SCADA data, whatever it is, we're pulling that data together. Now, often there's a DMZ, and people put a broker in the DMZ because they want to provide access to that data to people outside the OT department as well as inside it, and that becomes a convenient way of doing it. As we build it out, we're often contextualizing data, and it turns out a lot of the context for data comes from your IT systems. When you think about contextualizing telemetry data for production, you need data from the MES system: you need to know what the batch number is, what the product is, that kind of thing. If you're contextualizing for maintenance, you need to get data out of the CMMS, which is often in the IT network. So we need to span those; we need to contextualize that data with the Intelligence Hub.
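To make that contextualization idea concrete, here is a minimal sketch — hypothetical machine, batch, and topic names, not HighByte's actual API — of merging MES context into a telemetry event before it is published into a UNS topic:

```python
# Hypothetical names throughout -- a sketch of enriching OT telemetry with MES
# context (batch, product, work order) before publishing it to a UNS topic.

telemetry = {"machine": "Press103", "temperature_c": 81.4, "speed_rpm": 1200}

# Stand-in for a real MES query; keyed by machine for the lookup.
mes_context = {
    "Press103": {"batch": "B-20240517-07", "product": "SnackBag-Std", "work_order": "WO-4411"},
}

def contextualize(event: dict, context: dict) -> dict:
    """Merge MES context into a raw telemetry event."""
    enriched = dict(event)
    enriched.update(context.get(event["machine"], {}))
    return enriched

payload = contextualize(telemetry, mes_context)
topic = f"site/dallas/{payload['machine']}/production"  # hypothetical UNS topic
print(topic, payload)
```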
So you can have your UNS for OT, and you can also have a UNS on the business side. That UNS may be MQTT, or it could just be an API — you can get API access to all of these different systems through the Intelligence Hub. But we also hear a lot of people saying: look, I want a UNS, but my company standard is moving data to Snowflake, or to Amazon, or to Microsoft. So how do we do that? And what I say to them is: look, you want to unify the namespace — do it in Snowflake, call that your UNS, and build apps on that. You're democratizing data; you're providing data to the people; it's real-time data. That's a UNS. You're good — just keep going with it. So we see a lot of different patterns, and sometimes we see multiple patterns in the same customer. It doesn't have to be one pattern. It doesn't have to be all the data in one big, massive MQTT broker, which is a very common pattern — the UNS that Walker has built is a single broker, enterprise-wide. But we also see other patterns, and I just want to make people aware of those. So, ultimately: HighByte provides a software solution called the HighByte Intelligence Hub, and I'm going to turn it over to Jeff to start showing that off.
Speaker 2 7:56
Hello, everyone — excited to be here today. Let's go ahead and get started. So this is the Intelligence Hub. Think of this as kind of your cockpit for engineering industrial data, right? It's designed for cross-functional teams, so you could have data constituents participate in here: data architects, people who are involved in modeling, but also people who are system owners, right — who are tasked with securely connecting to different source and target systems and such.

When I saw the survey information from before the show, I saw that most of the attendees were on the beginner to intermediate level, but there's certainly a sprinkle of advanced users as well. So I decided to break this presentation into kind of a journey through beginner, intermediate, and more advanced tiers of the patterns and use cases that we see. At the beginner level, the most common characteristic I observe is that people come in and say, hey, I want to have a UNS. What's your use case, right? And then there's kind of this cricket sound. Well, have you looked at your data before? More cricket sounds, right? And I think it's difficult, if you haven't seen something before, to envision what you want to do with it. So I think sometimes it is helpful to just first connect to data and aggregate it in one place.

So this smart factory has an OPC UA connection on there. When I actually looked at the machines, I believe they had Siemens SIMOTION devices; those actually speak an interesting variant of OPC Classic called XML-DA, so in practice you'd probably be connecting those on here. What's very nice about the Intelligence Hub, right, is I can go ahead and browse root objects. I can simply select something whole and bring it in as a complex object. So if I go to this Press 103 here, I'm basically browsing a node in the namespace, traversing four levels of depth, and essentially I've taken this entire namespace into a giant, complex object, right? I think this has shifted around, so there are probably some nodes missing here. What's quite nice about this is that if I go to my inputs here and go to this complex object, this is now a referenceable object that I can pull in anywhere and step through.
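That browse-a-node-whole idea looks roughly like this, sketched with the open-source asyncua library rather than the Intelligence Hub itself; the endpoint URL and node id are placeholders:

```python
# Sketch using the open-source asyncua library; endpoint and node id are
# placeholders. Folds a node and its children (four levels, as in the demo)
# into one nested dict -- the "complex object" idea.
import asyncio
from asyncua import Client, ua

async def subtree(node, depth: int):
    """Recursively fold an OPC UA node and its children into a nested dict."""
    name = (await node.read_browse_name()).Name
    if await node.read_node_class() == ua.NodeClass.Variable:
        return name, await node.read_value()
    children = {}
    if depth > 0:
        for child in await node.get_children():
            key, value = await subtree(child, depth - 1)
            children[key] = value
    return name, children

async def main():
    async with Client("opc.tcp://localhost:4840") as client:  # placeholder endpoint
        press = client.get_node("ns=2;s=Press103")            # placeholder node id
        name, obj = await subtree(press, depth=4)
        print(name, obj)

asyncio.run(main())
```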
So I'll go ahead and get started — we can just start sending some data through. This is our pipelines environment, essentially the center of motion in the software, right? So I can read this object here; I'll just turn it on and flow it through. And what's quite nice is I have full observability while I'm developing this, or I can also play back all these events. So I'm gonna enter debug mode here, do a little read, hit run, and we should hopefully see our data flowing through, right? We can sequentially step through it. I think there's some missing data in there, because they're indexing them based off of numerics, so they're shifting around — they need to be keyed, which is lovely. So it's always fun to pray to the demo gods. What I'm going to do now is head over to this UNS client and turn on some more data. The UNS client is a place to visualize the MQTT namespace, essentially, so I can look at things.

It automatically decodes Sparkplug, protobuf, everything. So now I have some data flowing in here as I start turning on some pipelines — I'll go over here and just turn all these guys on. This is really just an exercise to say: what does our data look like? Because we actually find there are a lot of users who have an idea of what they saw on an HMI, but when they actually look at what's behind the scenes in their control systems, it doesn't look like what they think it is. Or they want to connect to SAP for the first time and don't realize that what they want to get done actually requires four sequential integration calls and such. So sometimes simply downloading the software, getting your data in one place, and first understanding where things are is a helpful exercise just to get started. So what we did here is we simply grabbed telemetry data: we were able to move it in bulk, break it up into separate objects, and flatten the structures so we had individual name/value pairs. If you're a beginner who doesn't know how to work with data structures — you just want to trend things and discover what your data is — this is a good place to start.
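That flattening step looks roughly like this — a made-up payload, folded into slash-delimited name/value pairs that a beginner could trend directly:

```python
# Made-up payload; flattens a complex object into slash-delimited
# name/value pairs that can be trended as individual tags.

def flatten(obj: dict, prefix: str = "") -> dict:
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

payload = {"Press103": {"drive": {"speed_rpm": 1200, "torque_pct": 63.0}, "state": "RUNNING"}}
print(flatten(payload))
# {'Press103/drive/speed_rpm': 1200, 'Press103/drive/torque_pct': 63.0, 'Press103/state': 'RUNNING'}
```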
Quickly, as you jump into this, you'll have some — I'll call them bruises — with data: some negative experiences, initially. You'll start realizing there are all these criteria that are really, really important to understand: what does it take to truly be interoperable? I don't want to wear people down, but what I want to say here is that as you begin your journey, you might find you have to take a step back, and then you'll take three steps forward, and you'll keep iterating that each step along the way. So here's a key thing I always like to think about: integration is not so much about protocol conversion. A lot of people think, hey, I just have to adapt protocols and everything should be fine. What you'll actually find, though, is that 95% of your problem has to do with payload. So think less about protocol; think more about payload.

I'm going to jump into a more involved scenario now — we're going to actually figure out how to model data, right? This particular factory has printing systems and such, so essentially you have a source roll and a processed roll, right? And like a lot of MES use cases, typically you want to transact production activity, if you will. You might want to know: what is the source, where did this come from, what kind of inventory, and how long have I run it? If you took that first-example approach I just showed, a user would probably just say: well, I want to measure the speed, I want to measure the temperature of my machine, right? But if you look beyond the domain of just process control, you're gonna have line-of-business users, plant managers, and process engineers that are gonna come along and want to know what has happened to my manufacturing process. So we're now at the point of saying: I don't just need asset telemetry, I actually need to model my manufacturing process — I need to turn different activity into unique transactions that I can aggregate, right? If you can't do this, it's very difficult to do things like OEE or other types of analysis, so this is foundational to a lot of manufacturing use cases. So I'm gonna hop into our modeling environment here. Let me just switch over, go to this guy. So here I've built — let me make
Speaker 2 14:07
sure this is the right one. Go to number two. There we go. So essentially, I've modeled this here. There's source information, and I probably want to know: where is this coming from, where is the data going to, and what am I consuming at each step? And then the difference of that — we'll derive that as waste, right? We're not affirming the waste here; we're basically just assuming some sort of loss or jam along the way. We're trying to model this physical process that we have here. So you can see I've built out some attributes in place, and you can see this source inventory.

What I really want to get across here is that if you look at this UNS, you can definitely tell Walker and team have already spent time coalescing their transactional and telemetry data together to form these payloads. But there's this chicken-and-egg problem when you first get started. You might realize: I get demand signals from my customer, whether EDI or a customer order; I allocate those to work centers; I start to determine what machines are available and what are not. And you start finding you have this circular effect — your MES is pointing to your automation, and your automation is pointing to your MES — so it's very hard to tie these different data sources together. What I wanted to say is: this could look like the shape of what an MES API would want, right? So essentially, what I'm doing here is taking transactional data. For example, my serial number is probably not going to be inside my PLCs, right? I may actually need to connect to an MES system. But it might be useful for me to use master data to evaluate that, or I might have telemetry data, so I might want to cross-reference values and such. When I do a simple test read here, you can see that I've built this all out. If you look in the MES database here, I have a table that is basically a query of what was in the MES database — there's some master data already in place.

I often say that industrial data is inconsistent, but it's consistently inconsistent. Meaning: the machines you bought in 2015 kind of look this way, the machines you bought in 2012 kind of look that way — there's a pattern to the site. You're never going to have this fully uniform effect, but it's certainly useful to take this bottom-up approach, because if you do, you might get 60% or 70% of the way there, and that can give you incredible scale. So lean on those sorts of truths. Essentially, what I've done here is taken a query of this master data, and I'm injecting those rows as configuration into the Intelligence Hub, and they are going to drive the pipelines. If I go to a connection here — I'll just show that I had a query, essentially. But if you look at this payload, I have a complex payload, right? And it's an array. The first array element basically represents Press 103 — I made this payload especially verbose so everyone can understand it: they see Press 103, Press 104, and Press 105. So what's nice here is that for this simple payload that would basically record production — essentially transact things to determine what our production activity is — we had to tie together multiple data sources, right?
We had to sequentially acquire work order IDs, we had to figure out what the serial numbers are, and such, right? So you can imagine that just doing that for one machine is quite a bit of work. What's nice is, when you have that master data, you can essentially pass it in almost as an argument into your data infrastructure, and suddenly you have tremendous scale very, very quickly. So this is a great way to get started with manufacturing use cases.
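A minimal sketch of that master-data-as-configuration pattern — hypothetical rows and a stand-in read function, not the Intelligence Hub's actual configuration format — where one generic loop replaces a hand-built integration per machine:

```python
# Hypothetical master-data rows (pretend they came from a SQL query against
# the MES) driving one generic production-recording loop across machines.

master_data = [
    {"machine": "Press103", "plc_node": "ns=2;s=Press103", "work_center": "WC-10"},
    {"machine": "Press104", "plc_node": "ns=2;s=Press104", "work_center": "WC-10"},
    {"machine": "Press105", "plc_node": "ns=2;s=Press105", "work_center": "WC-11"},
]

def read_production(plc_node: str) -> dict:
    """Stand-in for a real read against the machine's controller."""
    return {"counter": 1042, "speed_rpm": 1180}

# One generic loop replaces three hand-built, per-machine integrations.
for row in master_data:
    event = {**row, **read_production(row["plc_node"])}
    print(f"record-production/{row['work_center']}/{row['machine']}", event)
```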
The next scenario: let's turn up the heat a bit and suppose we have something related to quality, or let's say customer service. One place where I like to be a bit contrarian: I often find that a lot of industrial data is incredibly valuable, and it's often stuck in file systems. Does anyone have data stuck in file systems? I'm seeing some hands. It's sort of interesting — I've seen before, when you're launching a production line, it may not matter whether you get to capacity for 18 months, but the data coming out of your metrology lab is actually more important to your bottom line: it's going to determine how you get to a capable and economical process sooner rather than later, right? So we're going to assume there was some sort of inspection solution here, because I didn't see any quality system in this particular manufacturer. And this particular quality system produces a few different flavors of data. There's unstructured data — image files — so let's assume that's a line-scan camera. We also have some semi-structured data; think of this as raw data that we can't send over a fieldbus. We're not going to send this stuff over Sparkplug metrics: each time we scan a segment, it produces something like a 50 MB text file, and that's probably not feasible to send every second. You might want to send that data in bulk; you don't want to stream it on a real-time basis. But there's also some transactional data and some other data along the way here.

So I've built a pipeline here to get access to that data. I'm going to go to pipelines — this is a great place to do this type of work. I have this inspection pipeline here; I'm going to turn it on, turn on the observability, and get this thing running. Essentially, what we have here is we're checking into the machine. I'm doing this very, very infrequently; I probably want to turn this up a little bit. There we go — unused while debugging, praying to the demo gods. There we go. Okay, now this is working. So the first thing I've done here is I've checked in and asked: what's the context? What is the machine actively producing right now? What's the work order, what's the serial number, and so forth. And then I set this as metadata, as you can see in this payload down here. I have two separate data streams on here: one is for the text data, one is for the unstructured data, right? So what I'm doing is reading into a file system, browsing a file directory here — that turns on, there it goes. Essentially, I'm trying to figure out what's happening in these file systems, right? One of the things that's very hard about file systems is that they're not an event-oriented protocol. So you might actually have to determine what the change event is: Is the file size growing? Did you add a new file to the directory? Is it continuously being appended? So essentially, what I'm doing is monitoring this directory, and I'm using some logic to identify what a file is. Then I'm reading in the file — it basically becomes a Base64 string and such — and then I'm writing it to S3, right? The nice thing about using the cloud for object storage is that if you have an infrequent access pattern, it can actually be very economical to store things long-term there.

Additionally, alongside this, I'm also grabbing some measurement data, and we're going to pretend this machine builder is absolutely horrible: they gave us text files that have no headers, they're semicolon-delimited, and they're written in two-byte characters with no byte order mark. So when people look at this file, they don't recognize it, and they think it's a proprietary file. Here I'm using the CSV parser stage that we have, and we can quickly engineer this very, very cryptic payload into a nice, usable payload. We can do some simple modeling on there, we can shape the data in line, and then we can publish this to our QMS as well as enrich our context.
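Because a file share isn't event-oriented, a sketch like the following has to derive the events itself — here, "size stable across two polls" is treated as "file complete." The directory, bucket name, and polling interval are placeholders, and boto3 is assumed for the S3 write:

```python
# Toy file-watcher: derive a "file complete" event from a plain directory,
# then archive the file to S3. All paths and names are placeholders.
import os
import time

import boto3

WATCH_DIR = "/data/inspection"   # placeholder directory
BUCKET = "inspection-archive"    # placeholder bucket

s3 = boto3.client("s3")
sizes: dict[str, int] = {}       # path -> size at last poll
uploaded: set[str] = set()

while True:
    for name in os.listdir(WATCH_DIR):
        path = os.path.join(WATCH_DIR, name)
        if path in uploaded or not os.path.isfile(path):
            continue
        size = os.path.getsize(path)
        if sizes.get(path) == size and size > 0:
            # Size unchanged across two polls: treat as "writer finished" event.
            with open(path, "rb") as f:
                s3.put_object(Bucket=BUCKET, Key=name, Body=f.read())
            uploaded.add(path)
        else:
            sizes[path] = size
    time.sleep(5)
```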
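And decoding those "horrible machine builder" files might look like this: with no byte order mark you have to know the endianness out of band (UTF-16 little-endian is assumed here), and the column names are invented since the files carry no header row:

```python
# Invented column names; utf-16-le is an assumption since there is no BOM.
import csv
import io

COLUMNS = ["timestamp", "segment", "width_mm", "thickness_um"]  # hypothetical

def parse_measurements(raw: bytes) -> list[dict]:
    text = raw.decode("utf-16-le")  # no BOM, so endianness must be known out of band
    reader = csv.reader(io.StringIO(text), delimiter=";")
    return [dict(zip(COLUMNS, row)) for row in reader if row]

sample = "2024-05-17T10:01:00;7;912.4;51.0\r\n".encode("utf-16-le")
print(parse_measurements(sample))
```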
So if I look at the database I have here, there's a simple report that I produced. Now you have this nice, wide report. Imagine you're a quality person trying to get through an audit, or even customer service: you have nice contextual data where you have all the context about where things came from, you have a pointer to where the raw image file is, and you have all the measurements. Think of this as almost a certificate of analysis for everything that you produce. This is incredible, because imagine if your peers as a manufacturer are faxing your customers things that look like they were printed on a printer from the 1980s, right? Meanwhile, you're able to build a customer portal that has a complete genealogy of everything you ship them. You probably look very good from a sourcing perspective.

The next thing I want to quickly jump to is just talking about UNS. As John mentioned, there are some other emerging patterns to talk about. Some people are publishing this stuff and focusing very much on MQTT. There's also another approach: you can design a UNS and project it into multiple systems. That might sound a little bit cryptic; I'll jump into it in just a second. In this particular user scenario, we have many different users, many different applications and services that they need to get work done. How do I handle accessing historical data? How do I handle all these different solutions? So I'm going to jump into the Intelligence Hub and hop into namespaces. A namespace is essentially the hierarchy modeling construct within the Intelligence Hub. This is not just a tag tree like in the favorite OT tools you've been using for decades. This namespace consists of nodes, and all of these nodes are smart: they're aware of themselves and the underlying data structures they bind to, but they're also aware of the other nodes around them, right? And behind the scenes, this is rigorously declared as a JSON document. It's based off a technology called JSON Schema, and essentially what that allows is that we can index through that document, we can query through that document, and, most importantly, we can also validate those data structures.

So if I go ahead and do a test read here with a smart query, I can basically get a giant JSON document of my entire factory. And this might seem, sort of — okay, that's kind of cute; who's ever going to consume all the data at once? But the idea is that I can now query this. I can go to different spots in here; I could say, give me certain types of machines made by a certain brand, or I want to look at my pneumatic systems made by SMC and not Festo, as an example, right? So I can basically create queries here. Essentially, on the left-hand side we've designed — or governed — the UNS; on the right-hand side, we've queried the UNS. Now the next step is: what do we do with it? It's kind of interesting to look at this by ourselves, but we actually want to put this to use in the applications where our business users want to use data.

So I'm going to hop up to a pipeline here and go to this guy, and here I have a massive, pretty gnarly pipeline happening. Periodically, I'm checking on these smart queries. I have one here where I'm basically just asking for everything, but I could easily query across hierarchies, because that's a big problem some people see: they have access patterns where they want to say, just give me the pumps by line. The problem with a lot of systems is they have to pull all the data and then parse out what you want; because the nodes are smart, we can extract any data that we need. What's interesting is that I'm doing some mild massaging of the data — mostly just figuring out how different systems route it. And what's very cool is that in one single pipeline, I'm sending data in parallel to HiveMQ, I'm sending it to Snowflake, I'm sending it to PI, and I'm sending it to Ignition, and it all has a consistent semantic hierarchy. All those systems have very different approaches to modeling data: in Snowflake, because it's inherently tabular, it's going to be columns; in Ignition, it has a tag provider, right, with folders and UDTs; and the construct people often use with MQTT is topics, right, delimited by slashes and such.
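A rough sketch of those two ideas together — querying the UNS as a document, then projecting each hit into system-specific shapes (a slash-delimited MQTT-style topic and a flat, column-style record for a tabular store). The document layout and attribute names are invented; this is not the Intelligence Hub's actual API:

```python
# Invented UNS document; query across the hierarchy, then project each match.

factory = {
    "Dallas": {
        "Line1": {
            "Valve7":   {"type": "pneumatic", "manufacturer": "SMC",     "pressure_bar": 6.1},
            "Valve8":   {"type": "pneumatic", "manufacturer": "Festo",   "pressure_bar": 5.9},
            "Press103": {"type": "press",     "manufacturer": "Siemens", "speed_rpm": 1200},
        }
    }
}

def query(node, predicate, path=()):
    """Yield (path, node) for every sub-dict matching the predicate."""
    if isinstance(node, dict):
        if predicate(node):
            yield path, node
        for key, child in node.items():
            yield from query(child, predicate, path + (key,))

# "Give me my SMC pneumatics" -- no need to pull everything and parse it out.
for path, node in query(factory, lambda n: n.get("manufacturer") == "SMC"):
    topic = "/".join(path)  # MQTT-style projection: site/line/asset
    # Tabular projection (matches here are all three levels deep).
    row = {"site": path[0], "line": path[1], "asset": path[2], **node}
    print(topic, row)
```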
So, just to prove that I'm not lying, we'll go ahead and turn this guy on and look at our UNS client — I think this was example three. You can see these should hopefully resemble the hierarchy we saw earlier: Press 103, and here are our different namespaces. I also added some additional namespaces here, because I kind of felt like, hey, we're missing supply chain data, we're missing data about how our operators are being trained. It's very important that you have a holistic view, because whatever might impact your production process is pretty important. If I walk through here, you can see what I've projected. I have an Ignition Designer running up here, and if you look at it, all of this is perfectly projected inside of there: I created all the UDTs, all the structures, and I've nested these quite deeply — almost to an obnoxious level, if you ask me. But you can see all the structures are here, and you can see the additional assets. I'm gonna go one more into PI here.

Also, I forgot to mention: I was publishing those images into S3 for our customers to be able to look at — I believe this customer actually produces, like, Doritos bags. So you can see here I did, in fact, get the images up to the cloud, and they're in a customer portal. So I'm gonna hop into — let me just fire up good old PI here. Sorry, I'm missing — there we go. Sorry, one second.
Speaker 2 26:30
Connection Center — there we go. Fire up the good old PI VM. So, to prove that I'm not lying, let me pull this guy up. I think someone kicked me out — that's great; gotta love coworkers, right? So you can see here I've got a consistent semantic hierarchy — enterprise, Dallas site, all the machines — and you can see all the data that came through. So even though Ignition and, let's say, the AVEVA PI System have dramatically different APIs, it's semantically consistent, right? And if I hop into Snowflake as an example — sorry, I'm losing track of all my windows here — and I find this guy... here we go, there's Chrome. You can see in Snowflake I have all the hierarchy, right? It's directly in there as cells, so I can query by it.

So, to quickly wrap things up, because I know we're about at time — my windows are going absolutely crazy, the whole ProveIt thing. The Intelligence Hub is a $17.5K annual subscription. Very, very generous: you can install it as many times as you want — prod, dev, QA, do it by network segment and such. I spent probably about two weeks on this — I've been very, very busy lately — but for someone who's not doing this all the time, I would typically say an intermediate build would be about a one-to-three-month thing. Honestly, most of the work was dealing with simulated data sets and all the external systems; engineering the data pipelines was pretty straightforward. But to really boil things down, the problem space is essentially transforming data into information and engineering industrial data.

When I think of subscription solutions — having been on the buy side in the past — I always think of three pillars. One: solving problems today, which we essentially just proved. Two: the ability to continue to solve problems, right? Are we getting software patches? Are we able to upgrade without breaking changes? But one of the most important things I always look for in a software subscription is: is the vendor innovating? I always felt that when a vendor stops innovating, they should lower the price — I'm paying you for R&D. And that's a really, really important thing, because I want to be able to solve tomorrow's problems, and solve tomorrow's problems better, or solve my existing problems in a new way. With that, I'm going to pass it on to
Unknown Speaker 28:49
Aaron. All right,
Speaker 3 28:50
I got one minute to run this thing off the rails, and then we're out of here. So I'm a CTO, so I'm all AI now — that's all I do. Stop by the booth to see this. But I have an AI use case where — what Jeff had shown you — I tap into the UNS and I model a single roller's info. So I go and map all the data points to an instance in our world, yeah. And what I'm doing is saying: hey, I've mapped this data — think of it as an OPC server; I've gone and done the complex mapping of OPC tags to this model thing I want — here's the rest of my address space; go find stuff that looks like this and generate it. I won't be able to run it live, but this is the result: I pass it this instance, and then as I click through here, this is what the AI found — other stuff in the namespace. As I click through, it's going to show me each instance and what it actually mapped to. What you're seeing on the right is the namespace at ProveIt; what you're seeing on the left is our models. And once it's done that, if it looks good, I can import it, I can jump to it, and now I've got a modeled thing in HighByte that I can then move to Snowflake, wherever. I can show that a little bit more in the booth; I can show it with an OPC server that has some complex tag names. And you can kind of see it's not perfect — it trips up every now and then — but I'm pretty impressed with it. So you'll see that in the product in the future, to help you contextualize that data. It is a challenge to go through your systems and be like, okay, we've gotta put context around this — how do we do it faster? Stop by the booth.
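Aaron doesn't show how the mapping works internally, so purely to illustrate the shape of the problem, here is a crude, non-AI stand-in that fuzzy-matches address-space tag names onto model attributes; real results would come from a model and, as he says, won't be perfect either:

```python
# Crude stand-in for the AI mapping idea: score each address-space tag against
# each model attribute by string similarity. Tag and attribute names are invented.
from difflib import SequenceMatcher

model_attributes = ["speed", "temperature", "web_tension"]  # the modeled "roller"
address_space = ["P103_SPD_ACT", "P103_TMP_ZONE1", "P103_TENSION_PV", "P103_ALM_WORD"]

def best_match(attr: str, candidates: list[str]) -> tuple[str, float]:
    """Return the candidate tag with the highest similarity ratio."""
    scored = [(c, SequenceMatcher(None, attr.lower(), c.lower()).ratio()) for c in candidates]
    return max(scored, key=lambda s: s[1])

for attr in model_attributes:
    tag, score = best_match(attr, address_space)
    print(f"{attr:12s} -> {tag} (similarity {score:.2f})")  # imperfect, as in the demo
```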
Jeff Winter 30:14
One minute — that was impressive. All right, we're going to switch to the Q&A. Yes, you can get that up there to let people load some questions. I'm going to start with the same question we're asking everyone: this was a great demonstration, but what is the next best alternative — the anchor point of what people would do if they didn't use what you just showed — and why is yours better than that alternative? Not about products out there, but about ways of doing it.
Speaker 1 30:42
I think what's going on is every application has a way of getting data into it, and so what most people do is they'll connect application to application to application. That really comes out of our heritage with the Purdue model: it was go from SCADA to MES to ERP, and that was it. Now that we have so many applications, the challenge is that there are too many connections in the network. The math on the network theory is (n minus 1) squared — so if you've got 10 systems, that's nine squared, 81 total connections between those systems. You can't handle and manage that. And that's really the benefit of utilizing hubs — aggregation hubs like this — and being able to accurately and quickly integrate data. It's great that everyone has an API, or everyone can support MQTT, or can support something else, but we need to standardize and contextualize that data, and we need to curate it for the target system, not the way the source system has it set up. Typically it would just be a bunch of coding for each of those integration points, so we're consolidating that into a single application that's, as you saw, no-code.
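For reference, the arithmetic behind that point: the talk quotes (n−1)²; the textbook undirected full-mesh count is n(n−1)/2. Either way, point-to-point grows quadratically while a hub needs only one connection per system:

```python
# Point-to-point integration counts versus a single hub, for n systems.
for n in (5, 10, 20):
    quoted = (n - 1) ** 2         # the (n-1)^2 figure quoted in the talk
    full_mesh = n * (n - 1) // 2  # textbook undirected full-mesh pair count
    print(f"n={n:2d}  (n-1)^2={quoted:4d}  n(n-1)/2={full_mesh:4d}  hub connections={n}")
```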
Jeff Winter 31:57
Awesome. All right, let's get to some of the questions here. Why HighByte over Litmus? Seems like a lot of overlapping functionality.
Speaker 1 32:06
You know, it's funny — actually, I thought Vatsal did a good job of answering this question earlier, and I'm surprised there are only two upvotes, because this seems to be a trend in our conversations this week. The way I see it is that Litmus is really a full-stack solution — it's what I would call an IoT platform. They do connectivity, they bring the data up, they store it, they analyze it, they visualize it. We don't do device connectivity, we don't do analytics, we don't store the data, and we don't visualize the data. What we did is take a horizontal view. So Vatsal and the Litmus team took a very vertical view; we took a horizontal view, because what we're seeing in industry is people using lots of applications — you see them all out in the hall. It's also, you know, the cloud — we saw the Snowflake presentation — and AI: any vendor that you use is producing AI today, and chances are it's going to be a different vendor or a different model in two years, maybe even six months or two months. Things are changing rapidly. We took that horizontal approach so you can rapidly move across all of your data and rapidly deploy it wherever you need to deploy it, and it gives you the ultimate flexibility. So if you want a full-stack solution from one vendor, Litmus is your solution. If you want the ability to use a lot of different applications for a lot of different needs for your industrial data, then HighByte is the solution.
Jeff Winter 33:40
We should record that, because you're probably going to be asked it four or five more times in the next hour — I've already been asked it five times today. All right, and it did get 16 upvotes by the time they refreshed. So, next one: how would I convince my company to use HighByte when — this keeps moving — when we already have Ignition?
Speaker 3 33:58
Yeah, I can take that. Go ahead — go use it. So HighByte you can download for a two-hour trial, just like Ignition. You can try it out; you don't need any approval to do that, and you don't need to talk to anyone from sales. You can try it, prove the value, and then show it. And the thing is, we scale across multiple sites. If you only have one site and you only have Ignition, and Ignition does your DataOps — awesome. We love Ignition; Ignition is a great platform. But if you jump to your other site and they've got Wonderware or some other SCADA system, and you try to bring Ignition in there to do DataOps, now you're in a SCADA war. Probably not the best design, right? And so, if you look at our tech stack: we don't have any drivers, we don't have visualization, we don't have historians. We're hyper-focused on industrial DataOps — on this problem of providing context at the edge and then sending that to any source. So it's a very different problem set that we're focused on, and we're optimized for that scale.
Jeff Winter 34:47
Like that answer. Well, speaking of which — you mentioned $18K per site. So when you do all these sites, what does "site" mean?
Speaker 2 34:57
A site is essentially four walls — basically, a manufacturing facility. We do find, though, that there are some distributed architectures — we've been deployed increasingly in logistics and energy and such — and we have some alternative pricing models for that. But generally, our view is that we don't want friction in using the solution. We want essentially unlimited functionality, unlimited throughput — you can install it as much as you want, with as little red tape as possible to solve your interoperability problems. That's what we're razor-focused on.
Jeff Winter 35:27
Some of these are kind of funny, so we're gonna have fun with them, but let's get to the real ones. Does HighByte only have connectors for industrial data, or can it be used to integrate disparate sources?
Speaker 1 35:38
So we're really focused on industrial, as you introduced us. I came from Kepware, Aaron came from Kepware, Jeff came from the auto industry — we understand industrial data. Could you use it to integrate your Salesforce and your marketing system and some cloud data? Yeah, you probably could. What we've focused on, though, is solving the problems of industrial data, where you have lots of different machines, lots of different systems, and the data is coming at you in real time. You need to correlate, you need to contextualize that data, you need to land it in the cloud — many times it's not sourced in the cloud; sometimes we're pulling it out of the cloud and landing it in on-prem systems. So we've really focused on that. But, you know, the sky's the limit on how you can use it; our team is constantly adding capabilities around the industrial use cases.
Jeff Winter 36:32
Okay, this one's interesting: what did the solution actually solve for in the virtual factory? I'm gonna admit I struggled with that one too, but I thought you answered it at the end, so you can answer it better than I can.
Speaker 2 36:42
Yeah, so this is always interesting when initially demoing a solution, because we're pure-play middleware — we're not a dashboard or an OEE screen that's directly tied to an outcome. I think a lot of these problems make a lot of sense if you have the scars to prove it: if you've tried doing this before with alternative approaches, it's incredibly obvious. But essentially, the first thing we solved for was: how do I integrate with an MES, or how do I do production-based analytics? The other scenario was: how do I do product traceability? It may not be as relevant for this packaging manufacturer, but for many other industries, if you don't transact or log your product as you're producing it, it's almost better that you didn't produce it at all — the concept of store-and-forward for questionable medicine is not a feasible thing, right? So essentially we did traceability, and we were able to provide interoperability to multiple nodes from a single set of pipelines. Everything I built there — if suddenly something came online and I wanted to add another machine, or add another sensor — I don't have to undo any of that work. I can add it, and it will propagate seamlessly to all those nodes. So essentially we're integrating more with less and driving down that cost of change. Every time you add a new line, you add a new cell, you replace applications, you acquire new companies — that cost of change, of integration, is as close to zero as possible.
Jeff Winter 38:01
So, based off our time: if that didn't satisfy the answer, you can re-submit it and upvote it, but I think it did for me. So —
Speaker 1 38:08
— we just addressed the life sciences one. Yeah, go for it. And I just wanted to say that life sciences is actually, I believe, our largest industry vertical, both pharmaceutical as well as med device. So yes, we are working with a number of them. Projects are fairly early, but we're excited to bring them into the digital age. And I actually think a lot of them are advanced — they just have extremely complex environments, and they're very large companies. Thank you.
Jeff Winter 38:40
I'm curious on the email one — inputs versus outputs, question mark?
Speaker 3 38:49
Yeah, we don't do that today. Oftentimes we're in big enterprise environments, right? So they have ServiceNow, or they have other systems where you would push a notification to generate those kinds of emergency-response things. So we tie into those existing systems versus sending direct from HighByte.
Speaker 1 39:04
Or you just use the API. Yeah, right — I mean, Teams has an API; we can connect in with that. We have customers that have done that too.
Jeff Winter 39:10
I'm just gonna give a heads-up: I'm gonna ask the "what do you not do well" one last — I just wanted it up there. But: pipelines versus Node-RED?
Speaker 2 39:17
I would say pipelines are more of a — microphone, can you hear me? Gosh, it shut off.
Speaker 3 39:24
They cut him off — I'll take it. So we decided not to use Node-RED; we decided to roll our own transformation — ETL — tool. There are a few reasons we did that. When you look at a lot of Node-RED projects, it's very low-level, right? Node-RED is really good at communicating out to devices; it can do higher-level stuff too, but it can quickly turn into a Venn diagram of nodes to accomplish anything. The other things we wanted in there, which Jeff started to show: you can debug a pipeline — you can see, step by step, what the transformation was without adding logging stages — and we can do pipeline replay. So in a live environment, if you have a failure, you can come back in and see the exact event that entered the pipeline, produced the failure, and why. Those kinds of advanced features are not something we're going to get out of Node-RED, and we feel the transformation piece — ETL at the edge — is so important that we need to invest in that technology ourselves.
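HighByte doesn't publish how replay is implemented, so this is only a toy sketch of the concept: capture the exact event that entered a pipeline stage when it fails, so it can be replayed later:

```python
# Toy sketch of the replay concept: record the offending event on failure.
import functools

failed_events = []  # in a real system this would be durable storage

def replayable(stage):
    """Wrap a pipeline stage so failing input events are captured for replay."""
    @functools.wraps(stage)
    def wrapper(event):
        try:
            return stage(event)
        except Exception as exc:
            failed_events.append({"stage": stage.__name__, "event": event, "error": repr(exc)})
            raise
    return wrapper

@replayable
def scale_temperature(event):
    return {**event, "temp_c": event["temp_raw"] / 10}  # fails if temp_raw is missing

try:
    scale_temperature({"machine": "Press103"})  # bad event: no temp_raw
except KeyError:
    pass
print(failed_events)  # the exact offending event, ready to replay through the stage
```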
Jeff Winter 40:10
All right, so we'll go through the last three. Can HighByte ingest Excel files and transform them into organized data?
Speaker 3 40:18
Yes — yeah, we can do that, and customers do it.
Jeff Winter 40:24
Not sure what is meant by "thoughts on HiveMQ Pulse," because it was just released — unless you guys knew about that. We're excited to learn more. Yeah — I don't know; we have a ton of mutual customers.
Speaker 3 40:34
We're excited to see how we can tie into that and help them out.
Jeff Winter 40:39
And so, the last one — let's have fun with it: what do you not do well? I'm going to be curious as a company, and then, for fun, each of the three of you.
Speaker 3 40:47
Well, that intro made me super uncomfortable. I think Tory wrote that — we didn't write it. You did a great job, Tory; our marketing person wrote that. And I don't know if that's something we don't do well. But John — a more serious answer?
Jeff Winter 41:02
One for HighByte, and one for each of the three of you.
Speaker 1 41:05
As a company, what do we not do well? Well — we don't have drivers, connectivity drivers. We don't have a unique driver to connect to Siemens or Rockwell or Mitsubishi devices. We get down to the OPC layer, we get down to MQTT, but we kind of stop there.
Speaker 3 41:27
Our current challenge is to get the product into the hands of more people. A lot of times we hear: hey, we're not quite ready for HighByte yet — we know we're gonna need it, but we're not at that point. So our challenge is: what do we need to do? And that's product-wide — that's sales, marketing — what can we do to get it into the hands of more people? Because we think it's tremendously useful, and those are conversations we're having.
Speaker 1 41:47
Yeah — and by the way, you can just go to our website and download it too. It's got a two-hour demo; you just reset it each time, and feel free to use it.
Jeff Winter 41:56
And on a personal note — I know it's karaoke for you.
Speaker 1 42:01
That was a setup. But, you know, I'm happy to play along. If anyone wants to have some karaoke fun, we've got microphones, we've got a screen — we can just start singing some Journey. You know, I'm more of an '80s guy, so take your pick.
Speaker 2 42:19
I'd say some OT users who may have focused on process control and high-speed fieldbuses their entire lives might come in and say: is this as fast as I want? We're often focused on a broader domain — meaning that fieldbus data leaves the factory very quickly and might be going to the cloud. So, for example, if we're moving data fast, we'd rather just build a giant structure of data and move it in bulk — work smart, not hard. For people who want to work hard on that, that's probably an area where we generally say: maybe stick to your traditional industrial automation vendors. We're more about trying to coalesce all the different data constituents you have in your organization, rather than just a narrow process-control focus.
Jeff Winter 43:02
I like it. And at a personal level?
Unknown Speaker 43:06
Sleeping — is this true?
Jeff Winter 43:09
All right, let's give everyone up here a round of applause. Thanks, everybody. Thank you. (Applause)