Travis Cox 0:00
We want to create our digital infrastructure, and we want to plug all these amazing tools into it, right? That's what we're seeing here. So it's actually great that people are asking us questions, because they're trying to figure out how they're going to use all these solutions.
Arlen Nipper 0:11
Well, and this show has been great at bringing all this up to the top.
Zack Scriven 0:29
Hey, everyone, welcome. I'm here at the ProveIt conference, doing a ProveIt podcast session. I'm here with Travis Cox and Arlen Nipper; they just got off the main stage with their ProveIt session. So Travis, Arlen, welcome. Thank you for being on the podcast. Thanks so much for having us. So why don't we just dive right into it: how do you think your session went?
Travis Cox 0:49
I think it went really well. How about you, Arlen?
Arlen Nipper 0:51
it was awesome. Travis did an awesome job.
Travis Cox 0:54
I try to pack as much as I can into 30 minutes, right? Really show the whole journey, and what it would be like if they were to do that.
Zack Scriven 1:01
Okay, so walk us through what you did, for anyone who hasn't seen it yet. Give us an idea of what you walked through, from edge to cloud.
Travis Cox 1:09
Yeah. We know a lot of these systems are largely on premise, right, to solve the challenges that are there. So we started at the edge. Historically, Ignition has been unlimited, and you typically put one server at a plant that can connect to all the controllers out there and bring in data to build applications. But if that server is in the IT room and you lose connectivity to a particular PLC in a building, you lose data, you lose visibility, right? That's why HMIs are important on the plant floor. So we started with Ignition Edge, where we can connect to the actual PLCs, controllers, OPC servers, whatever is there, bring the data in, build data models off of that, and store and forward that data using MQTT Sparkplug. At the same time, of course, we can have a local HMI, so if they ever lose visibility, they can go to that machine and still control it. So we start at the edge, get the data from there, and publish it to the MQTT server, where we have our central plant solution. That's where we aggregate the data from all the different devices into one place, build out an application, and deliver that application to anybody on their phones, tablets, panel PCs, smart TVs, you name it, wherever they want to see that data. And then ultimately, a lot of customers are going to store data historically on premise. It might be for a couple of years, right? It's not going to be your long-term approach. So leveraging the cloud, and leveraging databases like Snowflake and others, could be really important. We wanted to show a hybrid solution going from the plant to the cloud, where we can leverage Snowflake. We showed our IoT Bridge.
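As a rough sketch of the transport underneath that edge-to-cloud flow: Sparkplug B organizes MQTT traffic under a fixed topic namespace, with birth messages announcing nodes and devices before data flows. The group, node, and device names below are made up for illustration.

```python
# Sketch: the MQTT Sparkplug B topic namespace that carries edge data
# to the broker. Group/node/device names are illustrative.

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic: spBv1.0/<group>/<type>/<node>[/<device>]."""
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id:
        parts.append(device_id)
    return "/".join(parts)

# An edge node announces itself (NBIRTH) and its devices (DBIRTH),
# then streams changes as DDATA messages:
print(sparkplug_topic("Plant1", "NBIRTH", "Edge1"))
print(sparkplug_topic("Plant1", "DBIRTH", "Edge1", "Press01"))
print(sparkplug_topic("Plant1", "DDATA", "Edge1", "Press01"))
```

Store-and-forward at the edge then amounts to replaying queued DDATA messages once the broker connection returns.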
Zack Scriven 2:50
So walk us through, once you're getting that into the cloud. I know you're using Ignition in the cloud, and the Cirrus Link IoT Bridge module to get it into Snowflake. So walk us through how that data moves, and I really want to focus on how the UDT, defined at the edge, goes all the way up the stack while being defined in one place.
Arlen Nipper 3:10
Well, the really cool thing is, within Ignition, you define your UDT and you give it all the contextual data: engineering high, engineering low, engineering units, deadbands, scaling factors, whether it's linear or square root, all of that contextual information right there at the equipment. You're the subject matter expert; you put that right there, and from then on, that basically is the single source of truth. The UDT and the instances of the UDT are all published to the Ignition gateway. So now I've got those at the plant level, and I could have multiple of those. And if I go to the cloud, immediately I've got all my UDTs instantiated over Sparkplug B, but those were defined all the way at the edge.
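The single-source-of-truth idea Arlen describes can be sketched in miniature. The field names below are illustrative, not Ignition's internal tag format:

```python
# Sketch: a UDT ("user-defined type") carrying its engineering context,
# defined once at the edge. Field names are illustrative.

MOTOR_UDT = {
    "name": "Motor",
    "members": {
        "Amps": {"units": "A", "eng_low": 0.0, "eng_high": 250.0,
                 "deadband": 0.5, "scaling": "linear"},
        "Temp": {"units": "degC", "eng_low": -40.0, "eng_high": 150.0,
                 "deadband": 0.1, "scaling": "linear"},
    },
}

def make_instance(udt, name):
    """Instantiate a UDT; the instance points back at the definition,
    so the contextual metadata lives in exactly one place."""
    return {"name": name, "type": udt["name"],
            "values": {m: None for m in udt["members"]}}

pump_house = [make_instance(MOTOR_UDT, f"Motor{i}") for i in (1, 2, 3)]
```

Because every instance references the definition by name, changing the context in the definition changes it everywhere at once.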
Travis Cox 4:03
Yeah, go ahead. So specifically, we're using MQTT Sparkplug, and Sparkplug
Zack Scriven 4:08
to publish payload and structure. Yes, exactly.
Travis Cox 4:12
Sparkplug has in it the ability to define, in the birth message, all the data models that we have, all the UDT definitions that are there. And on the birth, it also publishes all the instances of those definitions. That goes into the broker and lets any application understand what that edge node is going to publish, right? So that allows us to define it in one place, and on the central plant solution, MQTT Engine automatically discovers those and has the UDTs just built for you.
Zack Scriven 4:44
Right. And then if you change it at the edge, it will rebuild it.
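A minimal model of that birth-message discovery, with the payload heavily simplified (real Sparkplug B payloads are protobuf-encoded):

```python
# Sketch: how a Sparkplug birth message lets a consumer auto-discover
# data models. Layout is simplified for illustration.

BIRTH = {
    "templates": {                      # UDT definitions ride in the birth
        "Motor": ["Amps", "Temp"],
    },
    "metrics": [                        # instances of those definitions
        {"name": "Line1/Motor1", "template": "Motor"},
        {"name": "Line1/Motor2", "template": "Motor"},
    ],
}

def discover(birth):
    """What a consumer (e.g. a central gateway) learns from one birth:
    every instance, with its full member list, no manual setup."""
    return {m["name"]: birth["templates"][m["template"]]
            for m in birth["metrics"]}

models = discover(BIRTH)   # {'Line1/Motor1': ['Amps', 'Temp'], ...}
```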
Travis Cox 4:50
By default, well, the way we have it with MQTT Transmission right now is that if you were to go and change the definition of the UDT at the edge, we don't replace it.
Zack Scriven 5:00
It'll create a new definition of it?
Travis Cox 5:03
It doesn't do anything by default, okay? Because you've got to be a little bit careful: if you change the definition for one machine, what about other machines still publishing with the old one, right? So what we like people to do, and you asked what the ideal architecture and the recommendation is, is just to version them. So if I have a motor, say, and I have a motor UDT, go Motor v1, right? That's what we're using. Then if we need a change, we can inherit from that, make some changes, and have that be Motor v2. Ultimately we'd see both of those come in, you know, to Ignition or to any other third party.
Zack Scriven 5:40
Would the EAM module be able to help you with that versioning? How are you going to manage it? Could you do it with Git? How would you manage your UDT definitions?
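The version-by-inheritance pattern Travis described (Motor v1 stays put; Motor v2 inherits and extends) can be sketched like this, with illustrative structures:

```python
# Sketch: versioning UDTs by inheritance. Old machines keep publishing
# Motor_v1; new ones use Motor_v2. Both coexist, so nothing breaks.

UDTS = {
    "Motor_v1": {"extends": None,
                 "members": {"Amps": "A", "Temp": "degC"}},
    "Motor_v2": {"extends": "Motor_v1",        # inherit, then add/override
                 "members": {"Vibration": "mm/s"}},
}

def resolve(name):
    """Flatten a definition by walking its inheritance chain."""
    udt = UDTS[name]
    members = {}
    if udt["extends"]:
        members.update(resolve(udt["extends"]))
    members.update(udt["members"])
    return members

# resolve("Motor_v2") carries everything from v1 plus the new member.
```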
Arlen Nipper 5:49
Well, a lot of our customers, from a corporate standpoint, will come out with a set of UDTs and then put those in a repository,
Zack Scriven 6:00
Like a document, like a definition or information model?
Arlen Nipper 6:03
Now, at any facility, if they want to use a pump UDT, they pull that UDT down. Okay, if they're going to modify it or change it in some way, then it would be Pump v2, right? So you've got that. But you asked about going all the way into Snowflake. Okay, so from there, we're publishing Sparkplug to the IoT Bridge. On its left side, the IoT Bridge knows all about Sparkplug; on its right side, it connects to the Snowpipe Streaming API for Snowflake, which is basically free ingest into a SQL table in Snowflake. Now, that's raw: every Sparkplug message lands in the Sparkplug raw table. The cool thing is that there was no digital twin service; we literally use the SQL database. We use the schema off the Sparkplug messages. So if I've got a UDT and I know its schema, I can go find that UDT and put it up into a table. From there, I can create all my tables for all my machines from those definitions.
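Conceptually, the bridge-to-warehouse step Arlen describes looks like this: raw Sparkplug messages land in one table, and the UDT schema they carry is used to fan them out into per-type tables. A toy version, with illustrative names:

```python
# Sketch: fan raw Sparkplug records out into one "table" per UDT
# definition, using the template name the message carries.

RAW = [
    {"template": "Motor", "instance": "Line1/Motor1",
     "ts": 1000, "values": {"Amps": 12.3, "Temp": 71.0}},
    {"template": "Pump",  "instance": "Line1/Pump1",
     "ts": 1000, "values": {"Flow": 4.2}},
    {"template": "Motor", "instance": "Line1/Motor2",
     "ts": 1001, "values": {"Amps": 9.8, "Temp": 66.5}},
]

def fan_out(raw):
    """Group raw messages by UDT definition: one per-type table each."""
    tables = {}
    for msg in raw:
        tables.setdefault(msg["template"], []).append(msg)
    return tables

tables = fan_out(RAW)    # {'Motor': [two rows], 'Pump': [one row]}
```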
Zack Scriven 7:29
Yeah, and if you haven't seen their demo, you've got to check it out, or reach out and ask them after the show. Without seeing how those information models become available in Snowflake for machine learning or analysis, it's kind of hard to understand how valuable that is, unless you've tried to do it before: you don't want to manage another information model in another platform. You just want to define it in one place, with a single source of truth, and make it available in, frankly, any cloud. In this use case you demonstrated Snowflake. How much work did it take on the Snowflake side to get that to happen?
Travis Cox 8:09
Yeah, I mean, what's happening in Snowflake is actually really, really beautiful, in that all the Sparkplug messages go into a raw table. Every value that comes in, whether it's part of a UDT or not, any publish, is going to go in there. Then there are some stored procedures that sync the data from the raw table into another device message table, where we break out the whole metric, right? And, of course, they know about those UDTs. So all the individual values go to another table, and that's all the history, right? Because if you look at the actual Sparkplug message, it's JSON, and if I store the JSON in a table, I don't want to spend compute breaking it apart every time I query. So the syncing breaks it apart into a nice tall table, with columns for every single metric value. And then from the UDT definition, we look at that structure and build a view that normalizes the data as a wide table.
Zack Scriven 9:11
And so you're defining that view through the Snowflake API? Or is it just a stored procedure? Does Snowflake do that automatically, or do I have to do it?
Arlen Nipper 9:23
When you install the IoT Bridge, it installs all the stored procedures for you.
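The sync-then-view flow those stored procedures implement can be simulated in a few lines: flatten raw messages into a tall (timestamp, metric, value) table, then pivot a per-UDT view into one wide row per timestamp. This is a plain-Python stand-in for the SQL, with illustrative names:

```python
# Sketch: tall history table -> wide per-UDT view, the shape behind
# "SELECT * FROM my compressor UDT view".

TALL = [
    (1000, "Amps", 12.3), (1000, "Temp", 71.0),
    (1001, "Amps", 9.8),  (1001, "Temp", 66.5),
]

def wide_view(tall, columns):
    """Pivot tall rows into one row per timestamp, one column per member."""
    rows = {}
    for ts, metric, value in tall:
        rows.setdefault(ts, {"ts": ts})[metric] = value
    return [{c: rows[k].get(c) for c in ["ts"] + columns}
            for k in sorted(rows)]

print(wide_view(TALL, ["Amps", "Temp"]))
# one queryable row per timestamp instead of scattered value/timestamp pairs
```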
Travis Cox 9:29
Okay, yeah. It creates that structure, the tables and stored procedures and functions, and then it's all just called automatically as data comes in. What's cool about Snowflake is it has streams that you can define. With all that table data coming into the Sparkplug raw table, the streams keep track of what's new, so we're only ever processing what's new, whether there are new definitions being created or new data coming in. It's really simple. And what's powerful about the view for every UDT is that I get just one simple thing to query: a select star from my compressor UDT view.
Zack Scriven 10:10
And that gives you one row per timestamp, and then one column per value, per UDT attribute. Yeah, exactly. And that makes it actually usable, versus just having value, timestamp, value, timestamp, and then having to query all of
Arlen Nipper 10:27
those objects yourself. That turns into a data swamp.
Zack Scriven 10:30
The data swamp, right? So this is really key. And so, what was the feedback?
Travis Cox 10:39
I think we got a lot of that. Anytime we do this demo with customers, they're just like, holy moly. And Arlen, you specifically wanted this in the beginning. Everybody asked us if we could connect Ignition directly to Snowflake as a historian and log data to a table, right? But you specifically chose MQTT Sparkplug, you know, for a reason. So why don't you give some of that background?
Arlen Nipper 11:03
Well, again, if you just came off the back end of the SQL database, all you'd get is a timestamp and a value. We wanted all the contextual data that goes with that as well, and to get it out easily and in the open. We don't want to be sending data around with a proprietary application.
Travis Cox 11:25
That's it, right: using open standards. If I had a sensor, if it was an Opto 22 EPIC device, if it was another sensor that had either data or a data model, and it publishes to the broker, it would automatically be ingested just the same, right? And then we would be able to query it. So it's not just an Ignition solution. It's the idea of being able to take any kind of data out, right?
Arlen Nipper 11:47
As long as it's Sparkplug, boom, it'll go into Snowflake.
Zack Scriven 11:51
And again, I wish you guys had had more time, because I feel like you could have shown a lot more on the Snowflake side. It was good to see it go from edge to cloud and then into Snowflake. A lot of people probably aren't even using Snowflake yet, but you can imagine how you could use it, and how the bridge module, in combination with Snowflake and the IoT platform of Ignition, could be super valuable for managing your data.
Travis Cox 12:23
Absolutely. And I would have loved to go a little bit further. In a demo that I've done previously, at a different event, we did anomaly detection. Snowflake has that native in their service; in fact, you can build an anomaly detection model just using a query. So we actually built a screen in Ignition where they could train a model on data: select the data they're looking for, train the model, create the model, and then it would run automatically and alarm if there was an anomaly. We did it off of Opto 22; they had a car wash, and we did anomaly detection on the freezer. So it was really cool. But there's so much more: ML, generative AI that's in there. There's so much more you could take advantage of.
Arlen Nipper 13:04
But remember what I pointed out in the demo: as Walker always says, you have to have a UNS database. We had all this infrastructure in here, and I didn't want people to get confused that the broker is the UNS database, because it's not.
Zack Scriven 13:24
You're right. You need to have that hierarchy, in this case ISA-95, in a database, to make it accessible to other consumers. And really, this is one of the things that allowed us to be successful on that Firebrand Award-winning project over a decade ago: having an external database that drove our namespace and our tag development. Back then it was the Gateway Area Network, but we didn't manually create a single tag. We just used a tag creation engine and a database that drove the creation of those tags. Let me ask you this, guys. Travis, what's the most common question you've gotten here at the show?
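That database-driven tag creation engine can be sketched as follows; the asset rows and ISA-95-style path scheme are illustrative:

```python
# Sketch: generate the tag namespace from an external asset database,
# so no tag is ever created by hand. Rows and paths are illustrative.

ASSET_DB = [
    {"enterprise": "Acme", "site": "Plant1", "area": "Packaging",
     "line": "Line1", "asset": "Motor1", "udt": "Motor_v1"},
    {"enterprise": "Acme", "site": "Plant1", "area": "Packaging",
     "line": "Line1", "asset": "Pump1", "udt": "Pump_v1"},
]

def build_tags(rows):
    """One UDT-instance tag per database row, path from the hierarchy."""
    keys = ("enterprise", "site", "area", "line", "asset")
    return [{"path": "/".join(r[k] for k in keys), "type": r["udt"]}
            for r in rows]

for tag in build_tags(ASSET_DB):
    print(tag["path"], "->", tag["type"])
```

Adding an asset then means adding a database row, and the namespace follows.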
Travis Cox 14:08
I mean, honestly, it's been: how do you compare against other vendors?
Zack Scriven 14:12
Okay. How does that make you feel? I think you handled it well, but how is it from your perspective?
Travis Cox 14:16
Honestly, I think those are great questions, because this is the first time we are challenging the status quo. We're saying: look, as a customer, it's their data, it's their digital infrastructure.
Zack Scriven 14:28
You guys pioneered that approach, I understand, but
Travis Cox 14:30
it's really important that the mindset is that the organization is where we want to create our digital infrastructure, and we want to plug all these amazing tools into it, right? And we want it based on
Zack Scriven 14:42
best in class. Best in class and open. And that's what we're seeing.
Arlen Nipper 14:53
And this show has been great at bringing all this up to the top. And here we are, Thursday afternoon, and everybody's still at the show.
Zack Scriven 15:00
Yeah. So what was the most common question you've been getting, Arlen?
Arlen Nipper 15:04
MQTT, you know: how did you invent it? How do you use it? What can we do with it next? Things like that.
Travis Cox 15:11
The other big question that I got is, of course, because we have a new release pending:
Zack Scriven 15:17
8.3. What do you want to share about that?
Travis Cox 15:21
Yeah, we're going to be releasing it here at the end of the quarter.
Zack Scriven 15:24
The beta. Okay, what are the main headliner features of 8.3?
Arlen Nipper 15:29
You've got 20 seconds.
Travis Cox 15:30
I know. In the session I went really quickly, in 30 seconds, on what the big ones were, but there are two big categories, right? There are a lot of people that have been using Ignition for a long time, and in terms of how they deploy it and how they manage it, they want better tools, especially more developer-focused tools. So we're doing version control: we're putting all the configuration on the file system, so you can use any kind of version control system you want. Put it in a Git repo, go for it. And you can use Git flows, right, to go into production, for your DevOps, all of that. So having that config there,
Zack Scriven 15:59
I could, so I could manage my UDT definitions, 100% I could manage my whole project.
Travis Cox 16:03
100%, okay? And not only that, we're going to add onto it a REST API in Ignition, so that you can access all the status and configuration. That way you can use third-party tools like Ansible or other automation tools to do fleet management, or to get information into the IT tools so they can manage Ignition better, right? Be more of a first-class citizen with what they're used to. So those are really big. From a version control and DevOps standpoint, then: having a development, a testing, a staging, or a production environment, we know there are differences, right, in Ignition config across those environments. So we're also building a new mode where you can actually, in Ignition, define what those differences are.
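The fleet-management loop such a REST API enables can be modeled abstractly. The gateway names, settings, and the idea of config read/write endpoints here are assumptions, but the desired-state diffing is the core of what a tool like Ansible would do:

```python
# Sketch: desired-state fleet management. Read each gateway's reported
# config (stand-in for a hypothetical GET call), diff against the desired
# state, and plan the minimal push (stand-in for a hypothetical PUT call).

FLEET = {   # what each gateway reports; names/settings are illustrative
    "plant1-gw": {"version": "8.3.0", "mode": "production"},
    "plant2-gw": {"version": "8.1.44", "mode": "production"},
}

DESIRED = {"version": "8.3.0", "mode": "production"}

def plan(fleet, desired):
    """Per gateway, only the settings that differ and would be pushed."""
    return {gw: {k: v for k, v in desired.items() if reported.get(k) != v}
            for gw, reported in fleet.items() if reported != desired}

print(plan(FLEET, DESIRED))   # only plant2-gw needs an update
```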
Zack Scriven 16:42
Oh, and you can swap a gateway from, hey, I'm in development mode, to I'm in
Travis Cox 16:47
Every gateway will say: I'm in production mode, or I'm in testing mode, and it will choose that config and run it. That's awesome. And here's what's important: if I'm going to move from dev to production right now, people have to be careful about what they move over, right? With this, I can push it to the repo; they can pull it when they're ready for testing, it uses the testing environment, and we're good to go, right? So that's a really big piece, again, for the people that have been managing it for a while. The other headliner feature is event streams, which is a new module for a very performant, low-code or no-code way of moving data between different sources and handlers. In particular, customers want us to support Kafka, and we didn't want to just support Kafka on an island, right? You want a framework you can build that into, so that if data comes in from Kafka, we can move it to a tag, or to a database, or to MQTT, and vice versa. We want to be able to get that data around. So those are the big-hitting ones. We're also continuously improving Perspective, with offline forms and drawing tools. And we've got a new historian based on QuestDB that will be local to Ignition, so people that are new and not used to SQL databases can just get up and running. It's not proprietary; it's based on open-source technology, and they can externalize it if they want to. So there's a lot of great stuff for everybody in this release. Awesome.
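The event-streams shape Travis describes, sources feeding handlers through a transform, can be modeled in a toy form. This is just the pattern, not the actual Ignition 8.3 API:

```python
# Sketch: one source -> transform -> many handlers, the shape of an
# event-streams pipeline. Names and payloads are illustrative.

def make_stream(transform, handlers):
    """Wire one source to many handlers through a transform step."""
    def on_event(event):
        out = transform(event)
        for handle in handlers:      # e.g. write a tag, insert a DB row,
            handle(out)              # publish to MQTT...
    return on_event

tag_writes = []
db_rows = []

stream = make_stream(
    transform=lambda e: {**e, "value": round(e["value"], 1)},
    handlers=[tag_writes.append, db_rows.append],
)

# Pretend these arrived from a Kafka topic:
stream({"key": "Line1/Temp", "value": 71.04})
stream({"key": "Line1/Temp", "value": 66.49})
```

Swapping Kafka for MQTT, or a tag writer for a database writer, only changes which source feeds `stream` and which handlers are in the list.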
Zack Scriven 18:09
Arlen, anything you want to share?
Arlen Nipper 18:12
No, we're just trying to get everything updated.
Zack Scriven 18:14
Okay, how was the fireside chat? It was good? You thought it was valuable? I thought it was valuable, yeah.
Arlen Nipper 18:20
Everybody complimented me on everything. Matthew, Jonathan.
Zack Scriven 18:25
I really hope that conversations like this, and conferences like this, help move the industry forward. And like I said at the beginning, if you at Inductive hadn't partnered up to make this technology available to industrial consumers, then I don't know if we would be here today. We wouldn't, right? And so we owe a huge debt of gratitude to Inductive Automation and Cirrus Link, and to Arlen Nipper, the co-inventor of MQTT. It's an honor to have you guys here. Any calls to action?
Travis Cox 18:54
You know, I think the biggest call to action is to get involved in these different open standards, and make sure that there's wide adoption and use and benefit from them. MQTT Sparkplug is a good example, right? We're part of the working group, and we want more people to get involved, and we want more vendors to support these technologies, because if we all have it, then the customer is going to win, and that's what we're here for.
Zack Scriven 19:17
Okay, I'll give you guys a plug; I'm going to put in a call to action: you've got to check out ICC this year. I heard you guys are going big. I was at the very first ICC, and it shared some of the same magic: it was the first time I had ever been to a community conference where I felt like I was home, like I was part of other people who are like me. So definitely check out ICC this year. And it's the first time in Sacramento; that's huge.
Travis Cox 19:41
It's a new location. I'm hoping we get 2,000 people there.
Zack Scriven 19:45
That'll be epic. So, gentlemen, thank you again for being at the show, and to Inductive Automation and Cirrus Link for being co-sponsors of the conference, and great job on your session. That's it, basically; we'll wrap it up here. Thanks
Arlen Nipper 20:00
so much. Thanks, Zack. My pleasure. Cool.