Magnus McCune 0:00
It doesn't feel like a first-time conference to me at all. This feels like an incredible event, so round of applause for the event. I think it's really cool. Notes. Session, perfect. Yeah. So this is HiveMQ's Prove It session. We're going to walk through a bit of a different type of session. As was mentioned, I'm an architect; I'm not a marketer, I'm not a business development person. Fundamentally, I think in diagrams. So that's actually what we're going to do: I'm going to walk through a diagram. I'll show how we implemented some things, how we work with our customers, and we'll walk through, step by step, how we build an architecture, starting with what the problem is. So if we look at this, it's a very simple drawing. This is the sort of drawing I do when I'm bored. If we look at a very simple problem statement: we have devices, PLCs, sensors, robot arms, whatever it might be, at the edge. We've got various applications, functionality, and additional systems like MES or historians or HMIs running within that site. And then we have our core applications. We have our core technology: analytics, business applications, industrial applications, our persistence layer, our integrations layer, our processing layer. We have all of that, maybe in the cloud, maybe in a data center. And fundamentally, our question is, how do we integrate all of this? And we've all come together to prove it. So a common pattern that I think we all recognize is the idea of a unified namespace. And what is a unified namespace structured on? I'm just seeing that I don't have notes in my notes section, OK. So what is a unified namespace structured on? Very commonly, an MQTT broker, and manufacturers tend to choose HiveMQ, as Walker was so generous to say at the start. If you're looking for a broker, there is a wide variety of brokers out there available to you. If you're looking for an enterprise-ready broker, HiveMQ is very likely the choice for building the foundation of your unified namespace. I'll touch more on all of the different things that are required to build a unified namespace, because it isn't just an MQTT broker. I think we've all realized over time that you can't just use an MQTT broker for that; as much as we'd love that to be true, it just isn't. And with HiveMQ you can run that on your existing technology. I've listed IPCs or VMs or even Kubernetes, but fundamentally, we realize that not everyone has a modern, up-to-date Kubernetes infrastructure at the edge. Many don't want it. And so even for today, we're actually running the conference on a 10-year-old server. It's a beautiful 10-year-old server here that's actually powering the unified namespace, alongside the most modern technology, which is some cloud servers that Walker has managed. So whether you're running on the very most cutting-edge technology, or you're running on some legacy gear that happens to be at your site, HiveMQ is ready for you and ready to be performant. You might notice that we still have a question mark on this slide. So yeah, I've got those core applications in my data center or in the cloud; how am I actually integrating across to those? And for that, we're going to look under the covers of the HiveMQ broker a little bit to get a sense of what allows for that. There are a couple of components that I want to talk through here, and we'll break down each one of them. The Control Center I'll show a little bit later on.
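As a minimal illustration of that foundation, here is what publishing a single value into a UNS-style topic tree on an MQTT broker can look like; the broker address and the enterprise/site/area/line topic path are illustrative assumptions, not the conference configuration (Python, paho-mqtt):

```python
# A minimal sketch of publishing one value into a UNS-style topic tree on an
# MQTT broker. The broker address and the enterprise/site/area/line/cell topic
# path are illustrative assumptions, not the conference setup.
import json
import time

import paho.mqtt.publish as publish

topic = "acme/dallas/packaging/line1/press103/temperature"
payload = json.dumps({"value": 72.4, "unit": "C", "timestamp": time.time()})

# QoS 1 (at-least-once) and retain=True so late subscribers see the last value.
publish.single(topic, payload, qos=1, retain=True, hostname="broker.example.local")
```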
I don't think it's the main attraction here, but we've got Data Hub, which is a really key piece of this. We've got our extensions, which are obviously a really key piece of this, and, of course, our clustered broker infrastructure that I'll touch on some more. So what are those extensions? I think they're an important part of what we're talking about here. Walker mentioned our Enterprise Security Extension; he was very generous to talk about it this morning, and fundamentally, that's what allows for security with our customers who need that top-of-the-line security. But our integrations extend far beyond that. We have integrations for things like Snowflake, with our friends at Snowflake, built on their native Snowpipe. We have integrations for Google Cloud, with our friends at Google Cloud, built on Google Cloud Pub/Sub, which lets you, as Walker puts it, get all your data into BigQuery and then use Looker dashboards or whatever else you might need to visualize it, to begin building solutions in the cloud. Our integrations effectively extend to every common data sink that you might expect: databases, S3, data lakes, whatever it might be. And of course, you can build your own custom extensions, and that's an important component of what we do at HiveMQ; we help our customers who need to integrate with something they don't necessarily know how to integrate with. Many customers are also looking to work with data while it's in motion: to transform, contextualize, normalize, and validate data while it's in motion. And so this really important component called Data Hub exists within the broker. And this is something I want to keep coming back to: these components aren't an external subscriber. These components aren't a separate piece that exists outside of the broker. They're built into the broker. So as that data is in motion, as that data is flowing, we're able to do transformation operations, we're able to do contextualization operations, we're able to take that invalid message that you see going across the screen there and either drop it to a different topic because it's invalid, drop it entirely without redirecting it, or transform it and make sure it's right, and make sure your AI systems or your insight systems or your analytical systems have good, valid data as close to the edge as possible. I'll speak more on that later. So this allows us to intercept messages before any consumer has a chance to receive them, before any consumer has a chance to see that message, and ensure they're valid when they get where they're going. And all of those components, the extensions, Data Hub, all of those different components, are designed to operate at incredible scale. They're designed to operate with some of the world's largest manufacturers; we'll talk about some of them in a little bit. But they need to scale along with your broker infrastructure. And one really important point here is, once again, if I have that idea of a microservice or an application that sits outside of the broker, consuming these messages and then trying to put them into a data store, that's a really complex architecture. We've actually seen, and I'll touch on it briefly a little bit later, some of the vendors here struggle with consuming the volume of messages coming through. We shared this morning something like 46 million messages per hour, total number of publishes.
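As a conceptual sketch of that in-flight validation and transformation, the decision logic looks roughly like the following; Data Hub itself runs this kind of logic inside the broker through its policies, so this standalone Python function is only an illustration, and the schema and topic names are hypothetical:

```python
# Conceptual sketch only: the validate / transform / reroute / drop decision
# that in-broker processing applies to messages in motion. Data Hub does this
# inside the broker via its policies; this standalone function just
# illustrates the decision logic. The schema and topic names are hypothetical.
import json

REQUIRED_FIELDS = {"value", "unit", "timestamp"}


def handle_publish(topic, payload):
    """Return (topic, payload) to forward, or None to drop the message."""
    try:
        data = json.loads(payload)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return None  # unparseable: drop the message entirely

    if not isinstance(data, dict) or REQUIRED_FIELDS - data.keys():
        # Invalid but worth keeping: reroute to a quarantine topic for inspection.
        return ("invalid/" + topic, payload)

    # Valid: normalize units before any consumer sees the message.
    if data.get("unit") == "F":
        data["value"] = round((data["value"] - 32) * 5 / 9, 2)
        data["unit"] = "C"
    return (topic, json.dumps(data).encode())
```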
And when you're trying to scale that data ingestion technology outside of your broker, it's really quite challenging. With our extension mechanism, even if that's a custom-built extension, it runs in the broker and it scales with the broker. It shares memory and process with the broker, and so you're able to scale those quite effectively. And it's true of Data Hub as well, by the way: because that Data Hub module is built directly into the broker, it scales along with everything you're doing. So let's keep expanding on our architecture. Fundamentally, I've drawn this, and I really love this line that says MQTT here, because that's what I wish every customer had. We'd love it if every PLC was a lovely Siemens PLC or an Opto 22 PLC and spoke MQTT, and we could not worry about any other protocol. That would make me really happy. But I live in the real world. I live with real manufacturers who have, you know, brownfield scenarios in which not all of their PLCs speak MQTT. And so I need another piece of technology here. I need to free that data from, say, fieldbus protocols. And luckily, HiveMQ has thought about that. Not everyone knows this, but we have a product called HiveMQ Edge. It was launched almost two years ago, and it's an open source product. It's an open source protocol adapter. So when you have those fieldbus protocols at the edge, whether that's OPC UA, Modbus, Siemens, Beckhoff ADS, a whole slew of them, when you're trying to get that data into MQTT, when you're trying to get that data into your unified namespace, HiveMQ Edge should be your first stop, or first consideration. It's open source; you can start right now at no cost. We obviously have a commercial variant if you need commercial support, enterprise support, that sort of thing. But you can start freeing that data, democratizing that data, getting that data out of those devices today at no cost. It's an important part of our offering. All right, bear with me for a second. Can I get my speaker notes? Thanks. So now that we have a sense of how we get the data into MQTT, this is a really great design pattern if you're just building one location. If you have one site, this is a perfectly reasonable pattern: you bring all your data into the unified namespace, you integrate your different applications, your different services. But the next question is, how do I get that reliably into my applications? I'm showing our HiveMQ extensions here, and that's really great. But if you're building a unified namespace, a global unified namespace, you don't want to end up with 10 or 100 localized namespaces, individual locations that are all sort of contained on their own. And so a pattern that we've seen our largest manufacturers take on is this idea of: let's bridge that data. And this is actually how the conference operates. Let's bridge that data to a centralized broker, use that reliable protocol, that MQTT protocol that is resilient to poor networks, to poor-quality networks, and let's bring that data into a centralized broker, and then use that great network that we have in the cloud, that we have in our data center, that really reliable network that is there, to then integrate with our processing layer, our integrations layer, or even build event-driven applications. Once I have that unified namespace in the cloud, in my data center, I'm now able to build event-driven applications directly on top of an MQTT broker.
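For illustration, here is a client-level sketch of that bridging pattern, forwarding a site's namespace from a local edge broker to a central broker at QoS 1; HiveMQ's bridge functionality does this natively inside the product, and the hostnames and topic filter here are assumptions:

```python
# Client-level illustration of the bridging pattern: forward everything under
# a site's namespace from the local edge broker up to a central broker over
# MQTT with QoS 1. HiveMQ's bridge functionality does this inside the product;
# this sketch only shows the shape of the pattern. Hostnames and the topic
# filter are assumptions.
import paho.mqtt.client as mqtt

EDGE_BROKER = "edge-broker.site1.local"
CENTRAL_BROKER = "uns.example.cloud"
TOPIC_FILTER = "acme/dallas/#"

# Uplink connection to the central broker.
central = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="site1-uplink")
central.connect(CENTRAL_BROKER, 1883)
central.loop_start()


def on_connect(client, userdata, flags, reason_code, properties):
    # (Re)subscribe whenever the local connection is established.
    client.subscribe(TOPIC_FILTER, qos=1)


def on_message(client, userdata, msg):
    # QoS 1 on the uplink: the publish is retried until the central broker
    # acknowledges it, which is what tolerates a flaky plant-to-cloud network.
    central.publish(msg.topic, msg.payload, qos=1, retain=msg.retain)


edge = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="site1-bridge")
edge.on_connect = on_connect
edge.on_message = on_message
edge.connect(EDGE_BROKER, 1883)
edge.loop_forever()
```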
So I have that additional advantage. A good friend of mine who works at an energy company likes to say, we don't build plants in convenient locations. We don't build facilities in convenient locations. And I think that's also true in the manufacturing space: we don't always have the best networking, we don't always have the best connectivity, we sometimes have these really terrible networks. And using MQTT as that bridge from your plant to a central unified namespace is a really reliable way to ensure that all that data is getting
Magnus McCune 9:58
through. And once you've established this one pattern, if your goal is to build a truly unified namespace, along with maintaining that reliability, that scalability, that availability, you might use a pattern like this. And that'll work really well for one site, and that site becomes a reference; that becomes the pattern that works. And so now that actually becomes a blueprint that we can use over and over and over again, to scale to more facilities, to scale to more and more locations as we need to. And so this is a pretty complete representation of how HiveMQ normally builds an architecture with manufacturers. We've worked with customers across a number of verticals: pharma, discrete manufacturing, a wide variety of different verticals, as well as, outside of that, logistics and energy, as I mentioned. So this is a pattern that we've seen over and over again, and in many ways, this is actually the pattern that we've used for the conference. So we started with a problem. We started with a problem statement, which is: I have all these data-producing systems; how do I consume that data? How do I actually make that available? And we've walked through how HiveMQ typically does that. So you might say, so prove it. That's the name of the session here. So go ahead and prove this for me. And I'll start by saying, by virtue of the fact that we're here, we have sort of proved it. Almost 40 manufacturers, 40 different companies, have connected to the HiveMQ broker that underpins this conference. There's one node running there and, as I mentioned, two nodes running in the cloud. We've got, I think, something like 300 clients connecting over the course of the day, 46 million messages per hour. That's actually a bit of an average; it's been higher than that at a couple of points. So we have this incredible scale running in this building right now and, as I mentioned, in the cloud as well. So let's have a quick look at the HiveMQ broker and see what that looks like. Let's make sure I tab in the right direction and log in. Oh, apologies. So this is actually one of the conference brokers running right here. We can see right now we've got 16,000 outbound publishes, 435 inbound publishes. And so, just doing some very quick math here, I think that's 50 million or so messages per hour that we're delivering reliably to the clients that are subscribed here. You might notice this one here: queued messages, 1.8 million. What a queued message means is that some subscriber somewhere on the network isn't actually able to receive all of its messages; it's a little too slow or not scaled efficiently enough, and so 1.8 million messages are actually waiting to be delivered to some subscriber that isn't quite keeping up. I'm not going to call out any vendors here as to who exactly this is. So absolutely, we have this idea of incredible reliability, availability, and scalability. It's a core tenet here at HiveMQ, and something that's really important to us. I'll show a slightly different view if I can log in. This is our new Control Center. It's actually in beta, currently about to be released. This will show us a slightly different view into this that I thought was interesting. We can actually see over here on the right how much data has moved since Sunday.
So this broker was brought online on Sunday, restarted on Sunday, and since Sunday this two-node cluster has moved 360 gigs of data from one node and 193 from the other, let's call that 660 gigs of data, just since Sunday, through these brokers reliably, without dropping messages, QoS 1 messages, consistently moving that data using relatively little resources, by the way: let's call it 15% of 12 cores and a little under two gigs of RAM on average. So relatively low resources are able to keep up with a moderate throughput. I'll touch briefly on those integrations. So we talked about all these different integrations that we have: I've got Google Cloud Pub/Sub, Kafka, a very common use case, Amazon Kinesis if you're an AWS shop, and of course those data lakes and those different pieces. So absolutely, all those integrations are available to you and ready to be deployed as needed. For the conference itself, we're running two: we're running the security extension that Walker spoke about earlier today, and we're running the bridge extension, which allows the local broker here to have a failover in the cloud. If we had a local networking issue, we could fail over to the cloud, and vice versa; if we had a cloud issue, we could fail over to local. So that bridge extension is installed. And let me tab back over to my slides. So I've talked about the broker. If you want to hear a little bit more about the setup for the conference, come to our booth. I'm happy to share; I'm happy to dig deep into any of those extensions if you want to understand the use cases. We've got an excellent team here today, Matt and Ravi and Simon; some of our team is here today. So absolutely come by. I've also been asked to talk about cost, and Walker likes to tease me about cost with HiveMQ, but it's actually pretty straightforward, and I'll get to it today. But first I want to talk about value. And as I talk about value, I'd like for you to think about a dollar value; I'd like for you to think about an actual monetary amount. These are all conversations I've had with a customer here or there over the years. So the first capability that HiveMQ really promises is this idea of zero message loss. If you mark something as QoS 1, we will not lose that message; or QoS 2, we will not lose that message. And so as I'm asking this question, think to yourself of a dollar amount: what is the value of guaranteeing that every critical data point is captured, preventing even one regulatory slip-up or product recall that could cost millions in fines or lost inventory? This is a real scenario, by the way, for one of our customers. I won't say the industry, because I think it's a little too leading, but it's really important to them. Seamless integrations: we've got our SIs here this week. What is the value of reducing costly integration projects by a couple of weeks, saving on engineering hours, delivering revenue-generating solutions a little bit sooner? To speak to high availability, something we pride ourselves on: what is the value of avoiding even one hour of unexpected downtime, especially when each hour can cost tens of thousands in lost output? And lastly, scalability. There's a manufacturer that I've spent most of the last year working with, a global manufacturer with 300-plus sites. What is the value to them, or to you, of having instantaneous oversight of hundreds of factories with incredible data volumes, where a 1% efficiency gain translates into millions of dollars in annual savings?
So as we think about value, I will, of course, share actual numbers. I won't take the easy answer out, although I have one. One last piece, which is that this, Prove It, was pitched as a proof of concept. It was pitched as: do a 12-to-16-week proof of concept. And here at HiveMQ, for proofs of concept, we don't normally charge customers for evaluation licenses; actually, the licenses that were provided to this conference are what we call evaluation licenses. Our excellent solutions engineering team, Matt, represent, shout out, our excellent solutions engineering team helps the customer conduct the POC. But I also realize that's not the answer we're looking for. So for this installation, for this setup, the cost to the customer to prove it, if they were to purchase a solution, is $30,000. So our broker technology, our bridges, our Enterprise Security Extension: the total cost for all of that would be about $30,000. The POC itself, we actually didn't spend a ton of time on. Walker and I spent maybe an hour on a phone call, a couple of Discord messages back and forth, a late night yesterday, and that was the entire setup time. So not too much POC time. You've grabbed a microphone, yeah.
Walker Reynolds 17:46
And I just want to say something that I don't think I've stressed enough. You know, if you think about what's actually happened here: none of the vendors got access to the architecture until October 28. They received the document on October 28, they received the infrastructure on October 28, they received everything on October 28, and then we put hurdles up in front of them to make it as hard as possible. I mean, really, we made this hard. And I want everyone to think about what's actually happened here. There is a manufacturer in Dallas who is a flexible packager, whose data infrastructure we've copied, and we are serving it out using HiveMQ as the backbone. That's really it; at the end of the day, it's MySQL, it's Ignition, and it's HiveMQ. Okay? There's 42 to 50 million messages per hour being sent right now. I intentionally used a 10-year-old Dell server that we pulled out of a closet, and then our backup is one that Dell sent to us, which is brand new, which we're not even using. We couldn't even break the 10-year-old server. There are 36 proofs of concept that were completed in 16 weeks on this infrastructure, not on 36 different infrastructures, but on this infrastructure. And I have to be honest with you: if you watch my videos, I've always sung the praises of HiveMQ at the enterprise level, but I've always bitched about the pricing. However, this isn't possible without HiveMQ, and it is incredibly impressive. Imagine if this customer in Dallas said to us, hey, Walker, I want you to go to 36 different companies, give them access to our data, have them deliver a solution in 16 weeks, and they all have to talk to each other. I mean, that's what's been achieved here. It's really kind of crazy. And the simple reality is, and I was having this conversation with Mosquitto Pro, because they were like, hey, Walker, it's not true, you can do the username thing with us. Well, that's true, except we have three instances of HiveMQ running. Two are clustered, one is not, three nodes, and they are bridged together. So if somebody publishes a message into the cloud, it comes down into the plant, down here into the virtual factory. If somebody publishes a message into the virtual factory, it goes to the cloud. It doesn't matter whether you're connected to the load balancer or whether you're connected here. But when the guys at Mosquitto Pro said, hey, you can do that, we can do the filtering by username, that is true, you can, except you have to configure that at all three nodes. We didn't do that. Whether I have one node or whether I have 1,000 nodes of HiveMQ, I have a single enterprise security definition. And if I add a user to the Enterprise Security Extension using the MySQL back end, it populates to all 1,000 nodes, or in this case, all three. It is incredibly impressive. And one of the things I want to do is just make sure HiveMQ gets their due, okay, and Magnus specifically, and Dominik: he's not bullshitting you. We literally spent an hour and 15 minutes total in 16 weeks, an hour and 15 minutes, and we spent 45 minutes out here shooting the shit last night. But anyway, Magnus, back to you. Make no mistake about it: without HiveMQ, this would not be possible, and it is incredibly impressive. 36 proofs of concept for the same customer in 16 weeks, same data infrastructure. It's nuts.
Magnus McCune 21:39
Thanks, Walker.
Magnus McCune 21:48
You might notice that I'm only 20 or so minutes into my Prove It session, and I'm not ready to wrap up yet. I actually have one more thing. So HiveMQ has been working on something special. I'd like to invite our CEO, our co-founder, my friend, Dominik Obermaier, up to the stage. And while he's coming up, I'll play a video.
Magnus McCune 22:15
Could we get audio, please?
HiveMQ Pulse Video Announcer 22:23
HiveMQ Pulse is a next-generation distributed data intelligence platform, empowering businesses with real-time, actionable insights. With a flexible architecture built on the power of MQTT and designed to support a unified namespace approach, HiveMQ Pulse unifies and contextualizes data across the enterprise for smarter decisions exactly where they matter most. With HiveMQ Pulse, project teams can unify data management with advanced cataloging, transformation, and governance for consistency across the enterprise; gain real-time insights by acting on data in motion for better decision making; and enable distributed intelligence enhanced by AI and ML for smarter decisions at the source. We can help you manage, transform, govern, and derive insights from distributed devices and systems, even in resource-constrained and high-throughput environments. With HiveMQ Pulse, your IoT data is always accessible and optimized for impact, powering a unified namespace, delivering actionable insights, enabling distributed intelligence, and ensuring your data is AI-ready. HiveMQ Pulse is now available in private preview. Join us in shaping the future of distributed data intelligence. Visit hivemq.com to learn more and apply today.
Unknown Speaker 23:48
Okay,
Dominik Obermaier 23:53
Thank you, everybody. So we're incredibly honored that Walker gives us the stage here, and that we are now announcing the launch of a product we've been working on for the last 12 months. We are very humbled to work with the largest manufacturers in the world, a big chunk of Fortune 500 companies, Fortune 2000 companies, the largest logistics companies in the US, the largest energy companies in the US, and also the biggest pharma companies in the US, and so on. And what we saw is that all of them are building a unified namespace architecture. Some call it a unified namespace; some build it and don't even know it's a unified namespace. But they are all building the architectures we've seen at this conference, all over the place. And there are really three problems that we saw with really any customer we work with. The first is discoverability of data. Once you integrate all of your systems, how do you actually discover what you have available, especially if you are in very complex environments? All our customers have complex environments, different stakeholders. They have IT people, OT people; there are business people that need to be convinced, and you need to show the value at some point in time. So what I've heard in many conversations with you, there's also the reality you're in, and it's very hard today, still, with all these technologies, to get access to data and discover data. So the question one needs to answer is: okay, the data is here, but what is actually here? What data can I use? And the second thing is data governance and context. So once you have the data available, the very next question is: okay, but how can I use it? How can I unlock the value out of this? And the third thing is constantly generating insights. If you have the data available somewhere, this is step one, but data alone, just sitting somewhere, doesn't provide any insight. And so the problem we saw with all our customers is that they have the data now, somewhere in the unified namespace, but how do they build upon that, how do they generate insights continuously, and where they're needed? Sometimes they're needed on the shop floor, sometimes in the cloud, sometimes everywhere. And so we teamed up with SIs, with customers, with partners, and also other vendors to say: okay, how can we solve this problem? And everything we do at HiveMQ is built on open standards. We founded the company in 2012 because we were disappointed with all the proprietary solutions out there, and we were thinking, okay, there must be a better way, there must be a better way if you want to integrate with an open-standards solution. And we built upon MQTT; we helped specify the MQTT standard and built all our products on top of MQTT technology. And we believe innovation is driven by open standards. Open standards eventually are going to win, and the manufacturing space has been moving slowly compared to many other industries, to be very frank, but I'm very, very glad to see, and this conference actually proves it, no pun intended, that there's really a big, big industry that wants to change to open standards. So what we are launching today, HiveMQ Pulse, what you saw in the video, is the very first product that's completely built on open standards. It's built upon MQTT, GraphQL, REST, SQL, DTDL, and all of the, let's say, acronyms you might hear; HiveMQ Pulse supports them. It's a truly open platform that's used for building and managing a unified namespace.
And the second thing is, it's built for a future-proof deployment. What that means is, everybody who's building a unified namespace today is building their backbone, the digital backbone, for the next, let's say, 20 to 25 years, and we don't know yet what technologies will be there in 20 years. It's impossible to know; there will be new applications building upon the namespace. And I think, when you saw Walker's keynote today, you saw, like, I think these are old concepts, quote-unquote old, and now the industry is just picking them up. And even over the last 20 years, so much has changed. The concepts didn't change, but the underlying technology changed. And so what HiveMQ Pulse allows you to do is actually create and maintain one namespace, not multiple namespaces. One namespace, enterprise-wide, that's future-proof and that you can maintain; it's built for an evolving UNS. You don't need to get things right the very first time. You just need to get started and build new use cases upon it. And one thing we've seen: once you are building a unified namespace, it's viral. Other departments, other parts of the company, pick up on this, until you have this digital backbone, and we have many examples from our customers where a small project started somewhere, and then suddenly the whole enterprise, a few years later, was building upon the same unified namespace technologies. And HiveMQ Pulse is that backbone that allows you to build on this. And the other thing is, and this is what we believe is the future of the unified namespace: HiveMQ Pulse allows you to create a distributed unified namespace. This means you can have the same unified namespace in one site, in 10 sites, in 100 sites, or even up to 1,000 sites. Most of our customers don't have that many sites; the largest manufacturer we work with has 300 sites. But we work with energy companies that have more than 1,000 sites. And so HiveMQ Pulse allows you to manage all of this at any scale, from a proof of concept up to the largest deployments in the world, all with the same software. And now, that was the marketing part, but we can go to the next slide. HiveMQ Pulse in one sentence, if you take one thing away: it's a distributed data intelligence platform that transforms unstructured data into actionable insights anywhere and everywhere. And Magnus will now show a demo. Nobody has ever seen a demo in public; it's the very first time we show the product. And so I hope you're as excited as I am.
Magnus McCune 30:17
I'll touch very briefly on the architecture that we've been building slowly over the last half hour, a little bit more, just to show that this is really where Pulse comes into play. It's this idea of augmenting that unified namespace with a unified query and discovery layer, with a distributed architecture of compute agents and capabilities. I'm getting the two-minute warning, and I planned nine minutes for a demo, so we're going to have to curtail it slightly. I have the same disease as Walker does, which is I talk too long. Sorry, Walker. So, this says demo; that's what I'm going to do. All right, so let's flip over. I have some things selected here. So this is HiveMQ Pulse. This is the new tool that we've built. I'll go through and walk through a couple of the most interesting features in the short amount of time that I have. I could walk you through how I build a namespace from a series of open-standard templates, creating a whole namespace with those templates or with reusable components that I build myself. But I think it's actually more interesting to discover the conference namespace, the namespace of the conference we're actually sitting at, because most of our customers aren't starting from scratch; they already have components out there. So let's go ahead and discover a namespace. You'll see this is the Prove It virtual factory online. I've got some credentials in here, and I'm going to go ahead and start discovery. Good thing, I got connection established. While that's happening, what's happening in the background is I'm actually listening to all the traffic that's going through that broker. I'm listening to not only the topic structure; I'm actually interpreting the payloads to understand the context, the meaning of the data that is going through this unified namespace. We see now that the enterprise has popped up here. I actually hit my 5,000-topic limit, or, sorry, the 5,000-topic limit that I'd selected; more people are publishing than when I was testing this earlier. And I can go through and browse the namespace that we've all become very familiar with: Dallas, the presses. And right now this might not seem all that significant, but I actually have the ability, let's select one of these, press 103, I actually have the ability to assign additional context to these nodes as I walk through. So if I go into press 105, I can see I've got my line here, and even things like, if I go into this OEE calculation, we all know OEE is a percentage, and so HiveMQ Pulse has actually determined: well, if that's a percentage, then my most useful data type is probably a float. And so I've discovered my unified namespace, and I'm now able to build it out. I want to do one last thing, just to show that this is a distributed architecture, so I'm going to skip through here. I'm going to use a link to get to this temperature value. I'm going to bring my unified namespace, my data model, my information model, together with real data. So I'll actually sample some data here, and I'm now seeing data values come through. I might do a transformation, something like selecting a transformation script, Celsius to Fahrenheit, here. It's a very easy demo, and it's going to take a second, but what's happening in the background is a whole series of different distributed components are actually coordinating to now start transforming that value.
What you might notice, though, is that at no point have I said I need to go to this piece of compute, this node at my edge, to start this transformation. I've declared what I want my unified namespace to look like. I've declared what I want my information model to look like. And the underlying infrastructure has done the hard work of figuring out where I want that to run and on what component I want that to run. Now there is, of course, infrastructure; actually, I think this one in particular is running on this filling line one agent. But I have this distributed infrastructure, and I just tell the system what I'd love my unified namespace to do. It's a declarative approach, and the infrastructure worries about making that happen. I'm completely at time, so I'm going to stop here. I'd love for you all to come to our booth and get a further demo. We'd love to spend more time with you going through this. And of course, come meet with us and talk about whether this is applicable. We've got Q&A here? Thanks, guys.
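As a rough sketch of the discovery idea shown in the demo, listening to broker traffic for a while and inferring a data type per topic from the payloads, here is an illustration in Python; this is not how HiveMQ Pulse implements discovery, and the broker address and sampling window are assumptions:

```python
# Conceptual sketch of the discovery step: listen to broker traffic for a
# while, record the topic tree, and infer a data type for each topic from the
# payloads seen. This only illustrates the idea; it is not how HiveMQ Pulse
# implements discovery. Broker address and sampling window are assumptions.
import json
import time

import paho.mqtt.client as mqtt

discovered = {}  # topic -> inferred type


def infer_type(payload):
    try:
        value = json.loads(payload)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return "string"
    if isinstance(value, bool):
        return "boolean"
    if isinstance(value, (int, float)):
        return "float"
    if isinstance(value, dict):
        return "object"
    return "string"


def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("#", qos=0)  # listen to everything during discovery


def on_message(client, userdata, msg):
    discovered[msg.topic] = infer_type(msg.payload)


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="namespace-discovery")
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.local", 1883)
client.loop_start()

time.sleep(30)  # sampling window
client.loop_stop()

for topic, dtype in sorted(discovered.items()):
    print(f"{topic}: {dtype}")
```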
Jeff Winter 34:13
Yes, some impressive stuff you announced there. So we're going to do the Q&A. I'm sure you may even have a lot of product questions for that. I'm going to try and focus primarily, first, on the demonstration portion, on what they showed, because they're obviously around if you want to ask product questions in addition. So I'm going to start with the question that I'm asking everyone. For what you showed up there, and I think I know the answer, but I want to hear it from you guys, I want to make sure we understand the anchor of what's out there: if people didn't use what you did, what is the alternative? And we're not talking specific products; we're talking about ways of doing it. And how is yours different?
Magnus McCune 34:51
Yeah, it's a fantastic question. I appreciate you asking it. I think the fundamental answer, in my experience with our customers, is a Word document. We go ahead and specify the structure of our entire unified namespace, what the individual topics mean, maybe what their values should be, in something like a Word document. In my opinion, there hasn't been a tool to fundamentally manage the metadata, the structure of a unified namespace, until Pulse.
Jeff Winter 35:17
Okay, can we get the questions to show up here? Because I can't see them. 30K per year? Per month? What's the cost? Oh, apologies,
Magnus McCune 35:27
yeah, I guess I didn't clarify that. That's $30,000 per year, renewed annually. Okay,
Jeff Winter 35:33
why should someone, that was one of the, well, let's be more specific: why should you choose your product over EMQX? Do you want to take this one?
Dominik Obermaier 35:47
Yes, I can take this one. So there are multiple brokers out there, and MQTT brokers in themselves are, at some point, fundamentally a commodity these days. So you have open source; EMQX is open source. One of the key reasons why EMQX is being used is because it's free, and compared to, for example, Mosquitto, it offers a cluster. I think in the US it's a bit tricky; I mean, EMQX is a Chinese software company, so sometimes there are regulations that make it hard to deploy. When it comes to features and functionalities, it's about integrations. I mean, we have a lot of SIs here, so I think you should talk to some of the SIs and ask the question there. We believe our broker is fundamentally better, but we can talk at the booth about it.
Jeff Winter 36:37
I'm going to interject one real quick before asking these. Back to what you kind of demonstrated, one of the perks being your scale: what sort of scale are you talking about? What can you do?
Dominik Obermaier 36:50
So, I'll try to take the question, yeah. So we scale from a POC with a single asset and a single MQTT topic up to the largest customer that we run, which is a connected-car platform. I believe it's actually the largest in the world. They have 30 million devices, and a device would be like an asset, so 30 million assets if you translate this into manufacturing. And they have a namespace of 400 million topics. What this means is the scale is ridiculous. We talk about billions of messages moving around there per day, so it's pretty much much, much more than any manufacturer would ever have. And we work with the largest manufacturers in
Jeff Winter 37:33
the world. Awesome. Thank you. All right: as a European-based company, is HiveMQ sharing data or giving access to data at the behest of a non-American government entity? If yes, which entity and why?
Magnus McCune 37:48
Yeah. I mean, fundamentally, while HiveMQ has an option to run in HiveMQ Cloud as a managed instance, HiveMQ is something that most customers run in their own infrastructure, so we never actually really have access to your data. Your data lives in the brokers that you manage. On our cloud offering, I'll let Dominik answer this, but we do not share data. Yeah,
Dominik Obermaier 38:08
we don't share that at all. We don't even have access to the data. But on the other side, HiveMQ is, yes, two companies: a European, German company, I can't hide the fact that I'm German, and also a US company, so all of our US customers are doing business with the US Inc. What we have is very standard when it comes to data protection rules. Very frankly, the European laws are stricter, similar to the Californian laws, than the US laws. So, and also, I mean, you shouldn't believe me, you should believe the certifications we have. Any kind of certification, from rather obscure ones like TISAX to standard certifications like SOC 2 and others: we have all the certifications that prove that we, first of all, don't have access to your data, and that the data we do have, we handle responsibly. So we are certified for this.
Jeff Winter 39:06
This one just popped up and shot right through the roof. It had no likes, and then got 23 in less than the time it took you to answer that. What's the difference between what you have and what HighByte has?
Magnus McCune 39:19
Yeah, I can take that, absolutely. So HighByte is a frenemy of ours. We often work with them on deals, we often have situations in which we collaborate, and we often show up in similar places to them. So HiveMQ, the broker platform, is fundamentally a data movement platform. We're designed to move data from the edge to the cloud as reliably as possible. HighByte doesn't necessarily play in that same space in the same way. Over time, we certainly have new features that overlap in some ways, but we still believe that we can collaborate with HighByte in a meaningful
Dominik Obermaier 39:54
way. Yeah, and we work with many customers together with HighByte. So I think if you look at the architecture that Magnus showed, the piece of software with the most overlap would be the HiveMQ Edge gateway, which is open source in our case. But since we're building on open standards, we love it when customers choose HighByte, and we work together with them. So, yeah, it's great software. It's a bit of a different scope than HiveMQ Edge; HiveMQ Edge is really only the protocol translation and also adding the context, and HighByte is really hyper-focused on this. So we play very nicely with them, and we're also very happy when we have joint customers,
Jeff Winter 40:35
and I'm doing their session tomorrow, so you can ask them the exact same question if you want to hear their side. All right. Oh, they got rid of the question on Kafka: the difference between MQTT and Kafka, and I don't remember what the second part was, but I like that one. And, like, why would you integrate them? I think
Magnus McCune 40:53
why you'd bridge? Oh, OK. So many of our customers use Kafka in the cloud. Once you have that data in your cloud environment, if Kafka is what you're comfortable with, or Azure Event Hubs if you're in Azure, or something along those lines, that's a relatively common pattern. But ultimately, the core difference is that you're not necessarily going to want to run Kafka at the edge. It's quite a heavy application to try to run at the edge; the clients themselves are quite heavy clients, and it doesn't run over a protocol designed for unreliable networks like MQTT does. So certainly, if integration patterns are more well understood for you within Kafka, that's absolutely an option. But you can also build event-driven applications off of MQTT directly. So the most common pattern, I would say, is getting your data from the edge to the cloud with HiveMQ and MQTT, and then from there, if you're more comfortable doing integrations via Kafka, that's absolutely available too.
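As a client-level illustration of that pattern, MQTT for edge-to-cloud movement and Kafka for distribution inside the cloud, a forwarder can look like the following; HiveMQ's Kafka extension performs this inside the broker, and the hostnames, topics, and use of the confluent-kafka library here are assumptions:

```python
# Client-level illustration of the pattern described above: MQTT for
# edge-to-cloud movement, Kafka for distribution inside the cloud. HiveMQ's
# Kafka extension does this forwarding inside the broker; this sketch only
# shows the shape of the integration. Hostnames, topics, and the use of the
# confluent-kafka library are assumptions.
import paho.mqtt.client as mqtt
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "kafka.example.cloud:9092"})


def on_connect(client, userdata, flags, reason_code, properties):
    client.subscribe("acme/#", qos=1)


def on_message(client, userdata, msg):
    # Kafka topic names cannot contain '/', so map the MQTT separator and keep
    # the original MQTT topic as the record key for downstream consumers.
    producer.produce(msg.topic.replace("/", "."), value=msg.payload, key=msg.topic)
    producer.poll(0)  # serve delivery callbacks without blocking


forwarder = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2, client_id="kafka-forwarder")
forwarder.on_connect = on_connect
forwarder.on_message = on_message
forwarder.connect("uns.example.cloud", 1883)
forwarder.loop_forever()
```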
Jeff Winter 41:45
So I'm going to ask the integration question next, and then, if you want, end with addressing the cost of HiveMQ Pulse, if you can. What are the means currently available within HiveMQ for third parties to integrate? Any integration marketplace available? Oh, sorry, the second question, 25 likes,
Magnus McCune 42:05
Okay, so our integrations are extensions. So I covered the extension mechanism earlier; I would say maybe a third of what I covered are actually community extensions that are open source, non-commercial, and those have been provided either by HiveMQ directly to the community or by other organizations within the community who've contributed them to HiveMQ. So our mechanism for integration is our extension system. Many of our customers, by the way, choose to build their own extensions. They have functionality that is unique to them, and we don't see it as necessarily something that every customer will need, so they choose to write their own extension or hire our professional services to create an extension. So our point of extension is extensions. Awesome,
Jeff Winter 42:46
and then we'll say the last, most-liked question: are you going to reveal the price? So yes,
Dominik Obermaier 42:53
Can we just have the slide on screen, my slide, if we could? So, that's how I'll answer this question. So Pulse is in private preview. Private preview means it's not publicly available at this point in time. It's available to selected partners, selected SIs, and selected customers. So for everybody who wants to join the private preview, you can scan this or also check out our booth; we will also have some more elaborate demos where you can check it out. At this point in time, we don't have public pricing available, and to be frank, at this point in time the pricing has not been completely figured out. So for the private preview, we're working with selected customers and partners, and it's pretty much free of charge for anybody who's getting into this, because it's not yet production-ready. It will be production-ready soon. And yeah, so please check the news, and also feel free to follow us, and you will be the very first to know once it's available.
Walker Reynolds 44:02
And real quick, I'm gonna go ahead and address the EM QX question, because I know these guys wanted to take the high road, all right, so before we throw it to Amy, I wanted to talk about a conversation that Magnus and I had yesterday. I'll just be so I talked to a lot of the vendors about, like, what they do great, and what I think their their problems are specifically market problems, right? And we had this conversation yesterday about hive MQ and that is what, what is hive MQs problem? For a long time, it was like, oh, it's, it's a pricing issue. We it calculating TCO is really difficult for the customer, right? It's not cut. It's not hard for hive MQ to calculate, but for the customer, without touching hive MQ, it's, it's complicated. But today, the issue is, you really because of HIV MQ strength in Enterprise Solutions, and I'm here to tell you, it's just, there's nobody out there. You. Who can I be telling you what we did? I mean, I'm an architect. What we did, we had one option. I mean, Hive, MQ was the only thing we could have done and been able to deliver this 16 weeks at this scale. There has not been a single issue. I mean, think about it. There has been no technical issue with the data in any of these presentations. This is our first conference. We ripped it off over the weekend. I mean, it the data reliability is far exceeding what our expectations were. Here's hive M Q's problem. Okay, Hive M Q's problem is, is that hive M q needs to get into the customer when the customer first starts, before they start with some other broker. If they're going to need enterprise class features, you want to land hive MQ first. Why? Because there's a whole lot of configuration you want to get right on the front end in your architecture. When do you go with the MQ X over hive MQ? Well the answer is, is that if you're paying attention to the market, products change now that hive MQ has edge with connectors. And you used to be that you were comparing hive MQ community with the MQ X community, but that's not really what you're doing now. You're comparing hive you're here comparing comparing hive MQ edge with em QX community. And you're never going to pick em QX community in that scenario. Why? Because you might as well take the connectors. All right, that's a huge issue, like, I mean EMQ or hybrid Q edge has the connectors, has the converters, and so then the real question is, okay, if I'm never going to go the EM QX route there, then there's going to be scenarios where I select em QX, and the answer is EMEA. There are going to be some EMEA applications where it just makes sense to use EMQ x, but in North America, I mean, it's pretty hard to make that decision. I use EMQ x if I'm going to spin up a quick container, because I have a compose file that is already pre configured, but where we stand right now, you know, it's, I'll just be frank with you. I mean, the best broker in the world is hive MQ. You guys are dead even in benchmarking. Now, you were originally behind two years ago. It was like 30% less throughput in our benchmarking. Now you guys are dead even so in that case, it's pretty hard to make the case for EMQ x, it really is. And I love em Qx. I'm not bashing them, but I have to tell you guys the truth. I do. So at the end of the day, you know, you really should be landing, you know, it's mosquito, mosquito Pro, it's ignitions distributor, it's hive MQ, I mean, at the end of the day, those, those are really the options you should guys be looking at. 
And Ignition's Distributor is Chariot, so that's a Cirrus Link product. So anyway, there you go, guys, thanks. Thank you all. Let's give
Unknown Speaker 47:51
Dominik and Magnus and HiveMQ a round of applause.