Speaker 1 0:03
I left my walker down there to get up here, so really appreciate that. Hello, everyone. Isn't this fantastic? I just love this conference. Phenomenal job, Walker team. I've been to a lot of conferences, and this is just knocking it out of the park. It's really fantastic. Like you said, Todd Edmonds — I'm the Global CTO for smart manufacturing at Dell. I'm going to get up here and talk a little bit about the end-to-end features of Dell, why Dell, and then I'll turn it over to the smart guys — if you can get our slides up, perhaps. How's that? Better? Good. Okay, so I'm going to turn it over to Jason and Jeremy here in just a minute. I just wanted to go through a couple of things, mainly, why Dell? And I know a lot of you people know Dell. We know the laptops, we know the desktops. And I asked myself that same question about seven years ago when Dell reached out to me to say, hey, we really want to build this manufacturing business. So I decided to look deeper into what Dell had in products, in solutions, in partnerships, and I thought, man, this has something really important, and we can build something really powerful at Dell. So I'm going to talk a little bit about why Dell, I'm going to talk a little bit about digital transformation at scale, and then I'll turn it over to our smart guys, and we'll actually do that "prove it." Now, I could stand up here and talk to you all day long about why Dell, but I'm going to limit it to a top 10 list of reasons why you might want to partner with Dell Technologies. And just to add a little fun to this — if you're tired of the buzzwords right now, sorry, I'm going to double down on the buzzwords. So it's a top 10 list of reasons why you want to partner with Dell, mixed with completely made-up buzzwords for smart manufacturing.
So here we go. Number 10 in the top 10 list of reasons to partner with Dell Technologies, and/or completely made-up manufacturing buzzwords: Dell expert resources across the globe. Dell has over 100,000 employees — lots of really talented, really smart people, but not just smart and talented: industry experts to help you in manufacturing. I'm part of a global team of people who come from that manufacturing business. We have a business-outcome focus. We help you to leverage our technology and to accelerate your digital transformation across the globe. Number nine: hyper-nano blockchain-ized 9G quantum mesh. It's big, but it's small at the same time, and it's everywhere before it even gets there. Now available with blockchain. Okay, number eight — this is a real one — enterprise scalability, flexibility and choice. Everybody talks about scalability; that's one thing we're really focused on: that scalability, and the flexibility, and the choice. So if you think about smart manufacturing, we've been doing this for a long time. What's been missing, other than the unified namespace and MQTT, is the ability to do digital transformation implementations in a repeatable, scalable way. That's what Dell's focused on. You'll hear more about it with our NativeEdge, with blueprints to deploy applications like Litmus, which you just heard of, and like Ignition, which you're going to hear about tomorrow. We have these blueprints to make it really easy to deploy these applications wherever you need to, and to get that data that everybody's talking about here — but get it repeatably and at scale. So not just in one factory, but repeated across multiple factories across the globe. That's one of the big challenges that we see: you go in and you do it in one factory, and you can't repeat it.
You've got to then come back out and start over at another factory. We've developed really great ways to address that. We like to call it enterprise edge infrastructure. It's enterprise-grade infrastructure that sits at the factory, able to run not just new applications and applications in the future, but also to re-host a lot of the old applications that may be in the factory — and then to add maybe even AI applications down on the factory floor, and do cybersecurity on that manufacturing floor too. So we're partnering with really great partners across the globe — a lot of them are here — to deploy this at scale, and then have that centralized location where you can take that data, use that data, build AI applications if you want, and access that data in federated ways. Rather than moving all that data to the AI, we're bringing AI to the data. Number seven — sorry, I just jumped ahead — Neuro-Galactic Synergics. This is a cosmic-level AI framework, aligning factory operations with the vibrational energy across the universe.
Unknown Speaker 5:23
Number six,
Speaker 1 5:26
quantum-optimized IoT harmonization. How about that one? Onboard IoT sensors and systems before they're even built.
Unknown Speaker 5:35
That'd be helpful, wouldn't
Speaker 1 5:38
it? Number five: Dell's advanced technology portfolio. Like I said at the beginning, a lot of you know our laptops, a lot of you know our desktops. But did you know that Dell has a portfolio that spans all the way down to the plant floor? Obviously desktops and laptops can go down there, but also AI inferencing workstations, gateways, industrial PCs, as well as ruggedized tablets and laptops — helping you to deploy all those applications we're talking about this week at the factory floor, in a ruggedized way. But then there's that enterprise edge, like I talked about: enterprise rugged factory computers — and you can see them out there — ruggedized servers that support GPUs and Xeon processors, as well as multiple hypervisor options we can deploy, including NativeEdge, which you'll hear about here. And then enterprise-grade storage, to give you flexible storage options in your factory, in the cloud, or in between — that hybrid capability. And then, of course, we're really well known for our enterprise data center. We can do complete data centers. We can protect your data with PowerProtect. We have a really great line of switches, PowerEdge servers, and even AI servers that can do the learning — the inferencing as well as the data model training — with multiple GPUs from multiple different vendors. And then there's what we like to call the AI Factory — a little confusing for people who actually own factories — it's an engine that can build AI applications to help you take that AI to your data. So instead of having to ship your data to one centralized location to run AI and machine learning applications on top of it, we can bring that AI to your data. Reason number four: extreme data management, analytics and AI. There's no AI without data. 73% of company data goes unused for analytics. I'm not going to read this whole slide, but I think this is important.
Greater than 83% of AI projects are unsuccessful. That's big. You need that data, and Dell can help. We've got ways to extract and process data at the edge. We can take a federated approach to that data so you can use it across all of your locations. We have the best storage in the business to put that data on-prem, in the cloud, wherever you need, and then the protection of it — cybersecurity, cyber backup — we have that entire portfolio to help you.
Unknown Speaker 8:25
Okay, number three,
Unknown Speaker 8:28
hyper-entangled unified omni-space.
Speaker 1 8:33
This is going to be good. It's a quantum-infused unified namespace where all data and humans are interconnected, and changing one data point alters reality itself.
Unknown Speaker 8:47
And number two,
Speaker 1 8:49
this may be specifically targeted at one person in the audience: the Reynolds resonance frequency. This is the optimal vibration of a smart factory that produces the most efficient output — and also produces educational videos and conferences. Okay, the number one reason to partner with Dell Technologies: digital transformation at scale. That's Dell NativeEdge. Jason and Jeremy.
Speaker 2 9:22
And I just want to chime in real quick — I would be remiss; I had a chance to thank Todd when he wasn't here the other day. It cannot be overstated that this show would not happen without Todd and Ryan. The vision for this show came up four years ago. I wrote a proposal. A year later, I thought about it, and Zack really pushed me to put on ProveIt, a community conference. It was a conversation I had with Todd and Ryan at Hannover Messe last year, at their booth, where I saw one of the most impressive demos. Literally, I wanted to hire the engineer who showed it. I said to her, if your boss wasn't here, I'd be poaching you. NativeEdge is what has been missing at Dell for industry. It is basically the component for deploying converged IT/OT solutions — and that's not a buzzword. Dell originally was IT; NativeEdge is what makes OT and IT converge together, with the orchestrator deploying your complete infrastructure. If they hadn't shown that demo, and if Todd and Ryan hadn't said, Walker, if you want to do a show, we've got you, we're there with you — we wouldn't have done this. So please give Todd a round of applause, because this show is here in part because of them. Thank you, Todd.
Speaker 3 10:47
That was fantastic. Thank you, Todd. Hello, everybody. My name is Jason Nassar. I lead the manufacturing product strategy for Dell Technologies. What we're going to go over is NativeEdge. It's going to be a combination of a ProveIt session as well as talking about the value of it. My background is in the manufacturing space — I've been in it for over two decades. I started off as a controls engineer, where I worked for companies such as General Motors, GE and Siemens, developing software and HMIs for various different products and manufacturing lines. Now, before I start talking about what NativeEdge is, I want you to focus real quick on my colleague and NativeEdge expert, Jeremy Merrill, while he plugs in this 3200 gateway right here. What I want you to understand is that this is the very beginning of this entire ProveIt session. What he did, very simply and easily, was plug it in, connect the Ethernet cable and power it on, and it is being provisioned right now as we speak, for Industry 4.0 outcomes. Now, Walker, you touched on this quite a bit in your keynote, so I don't want to beat a dead horse, but at the end of the day, we all understand the industrial evolution that has taken place over time, and it really has been a very slow path, especially where we're at right now. When you're talking about Industry 2.0, this was the introduction of electric power. The assembly line was introduced. This really brought efficiencies to the factory floors. And then you saw a lot of controls, but they were related to relays and contacts and things of that nature — pneumatics and hydraulics, really an old-fashioned way of doing things when you think about it nowadays. But that logic didn't end, right? So the programmable logic controller was invented and implemented on factory floors, and the language that everybody started learning on was ladder logic.
Well, what is that? That's really relays and coils — basically drawing out schematics at the end of the day. But what that did was digitize controls, and that's a big deal; that actually improved efficiencies. It allowed engineers to develop software — and even technicians, at the end of the day, to actually drive the product that's going through the lines. With that also came the introduction of PCs and gateways to the factory floor — really rudimentary software, not as efficient as it is nowadays. And the funny thing is, that's what exists on factory floors to this very day. And here we're talking about Industry 4.0 — really, that's what this convention is all about: driving outcomes, improving efficiencies, return on investment for Industry 4.0. But we've been talking about this since 2011, okay? And honestly, why is that? There's definitely the cultural conflict between IT and OT; we're all very aware of that. So we know that exists, and that used to be an excuse for a while. But we're at a point right now where, when you have a Windows XP system sitting on your factory floor, you can no longer make the argument that you're not going to change that system because changing it would impact your factory line. No, the reality is that machine's going to die, and that's going to impact your factory line. So at this point, as we're in 2025, Industry 4.0 should have already been implemented, but that's not quite the case. So our fearless executive leaders, Pierluca Chiodelli and Ryan Fournier, set us out on a mission about four years ago to talk to as many customers as possible, to find out exactly what's going on and to try to solve this problem. And just like all of you, with what you're proving, we found out that all the manufacturing customers, of course, care about outcomes. They're just trying to implement it.
OT integration, gaining efficiencies, integrating with their ERP and MES systems, and ultimately driving a return on investment. If you were to talk to me four years ago, I'd be standing up here talking about predictive maintenance, overall equipment effectiveness and end results for the customer. But we found out that there were other inefficiencies slowing down the progression of Industry 4.0. The vast majority of companies just hadn't implemented it, and we didn't understand why. The truth of the matter is, when we talked to these 400 customers, we found out that it stalled on scale and repeatability. That's what it is at the end of the day. There's no way to scale Industry 4.0 with the old technology that's out there, and there's no way to repeat it. And if you're going to just try to update software one by one, it's a very slow, laborious process. So we got together from a product management perspective at Dell, and we worked on developing requirements for our engineers to solve that problem. What's the answer? Of course, it's NativeEdge. But what is NativeEdge at the end of the day? I like to explain it like this: there's an orchestrator. This is a single pane of glass that allows you to view everything that's going on and decide what you want to deploy to your global assets — your endpoints, your gateways, your PCs. And what we're bringing to the factory floor is an experience most of us are already used to from our TV streaming services. Think about an Amazon Fire Stick or a Google Chromecast. When you provision that device for the very first time on your first TV — and I know we're all way beyond that at this point — you get your Hulu, you get your Disney Plus, you get your Netflix, and then you load all your profiles, and everything's all set up. You did that one time.
But when you get your second Fire TV Stick six months down the road, a year down the road, and you plug it into the TV, guess what — everything's automatically deployed. You don't have to worry about that anymore. We're bringing that type of technology to the manufacturing floor, making it simple not just for the IT folks, but also for the OT folks who might not understand all the integration complexities of doing something like that. And we do it in the most secure manner possible. We have what is called zero trust security. Every endpoint, gateway, PC or server that leaves our factory has a voucher on it, and that voucher is dedicated to the orchestrator that you have. Usually the orchestrator is going to be on a server, and the IT and the OT folks come together and decide what software needs to be deployed, and then it's deployed only to that end device. It's zero-touch provisioning. So just like what Jeremy did right now — this device is being provisioned, and we can do that at scale, with hundreds or even thousands of devices all at a time. I think what's most important is that the Purdue model was in mind the entire time we designed this. The vast majority of our customers still ask to be in air-gapped environments when it comes to manufacturing, and we accommodate that: the orchestrator can be located local to your operation, and it can manage and orchestrate completely separated from the internet altogether. Our hardware resides on levels two through four of the Purdue model, and we connect to everything on level one and level zero. So we'll take those protocols from your OT devices, and we'll use software such as Litmus Edge and Ignition to collect that data and to provide the outcomes that you would like.
So the situation we're in right now with most factories — and it's the same thing now, as I talk to customers, as it was four years ago — is that even when they're upgrading and updating to Industry 4.0, they immediately end up in a state where they're already outdated. The reason is that they're hiring system integrators to manually install this software one by one — and I'm talking your operating systems, the software applications and the licenses. This is a very laborious process. So immediately, once you've deployed everything, it's automatically outdated. And this is taking our customers sometimes six months plus at a time, when it's a very large factory operation. So you have siloed and outdated systems, high security risks, and a very slow return on investment. However, with NativeEdge, that completely changes. Your IT and your OT folks come together and decide what outcomes need to be deployed where for your global factory operations. This can happen from a global perspective — that's what we're illustrating right now — or it can happen factory by factory, one on one. They develop the blueprints, and the blueprints are your operating system, the software you want to deploy, and your licensing. It could be Dockerized applications, if that's what you're familiar with; it can be legacy software — software that cannot be containerized at all; and it can also be virtual machines. So really, we cover the full gamut. Once those decisions are made, Dell will drop-ship the endpoint devices to your manufacturing floors, and you will place them where they need to be and where they need to be integrated. They will be powered on, they will immediately phone home to that orchestrator, and everything will be loaded.
So we're taking operations that take months and minimizing them to weeks — and those weeks are just the planning processes that happen with the digital transformation authorities within your organization. That's really the innovation that we are providing that never existed before.
Speaker 4 20:30
I'll hand it over to Jeremy. Thanks — thanks to both Jason and Todd. So what we want to do is talk a little bit more about NativeEdge itself. Can you hear me okay? I think so. All right. When we look at NativeEdge, we've reimagined the way to do your edge operations, right? As Jason mentioned, anybody can plug in this device — anybody can plug in a network cable and a power cable and press the power button. But what we're able to do is register that back to the edge orchestrator. One of the key things, as Jason mentioned as well: when we look at the ability to drop-ship these out to those edge locations, one of the big concerns that comes up, and that we get questioned about, is the security of that device. So when we look at the devices, they all have zero trust security principles built into them. If we were to plug a monitor and a keyboard into this device, I would not get local access to it. There's nothing I can do to negatively affect this device through a keyboard, a mouse or a monitor. I also can't plug in a USB stick with Ubuntu or a Windows installer, reboot it, and reformat the device. It will only run what we call NativeEdge OS, which we'll talk about in a second — it can only run the stack that allows you, as Jason mentioned, to run your applications either as a VM and/or as a container, natively on this device itself. When we look at connectivity challenges — Jason and I have gone around to different customers, and we've done different proofs of concept. Some of them are very similar to what we have out in our booth, where we plugged all the devices into a hub: there is no uplink to the internet, and there isn't anything else happening in that environment.
So we need to look at all those different connectivity challenges, and we need to make sure that your environments continue to function — maybe that WAN connection goes down, or maybe it's unreliable, or whatever it could potentially be. We've gone out and onboarded these devices in all kinds of conditions; I can use my phone to onboard a device to a NativeEdge orchestrator that's publicly accessible. When we look at the cost associated with this, it also has to be cost-effective, right? We can't come in with a big price tag to start, because we know it's going to be an evolution, just as Jason stated. But also with cost, we have to handle massive scale. We have to get into the thousands and thousands of devices that we can manage with the NativeEdge platform itself, or else you never really solve the cost problem associated with it. And then finally, multi-cloud by design. We understand that when you look at the OT and IT environments, there is some communication that goes between them, and you may need to deploy some of those applications in a public or a private cloud. NativeEdge also has to provide you the capability to orchestrate the entire solution, not just an individual application, across that environment. When we look at NativeEdge itself, we have a full stack. We start out at the edge, with the edge compute. Jason had the great animation where we see these different devices we can just drop-ship out to those locations — single-node devices. The nice thing is they run NativeEdge OS. It's a very lightweight, secured OS from Dell. It's immutable. It has zero trust security associated with it, and it provides the foundation to run those applications. And the other nice thing is it's the same whether you start with one of these 3200 gateways or go all the way up to one of our PowerEdge R760s that's enabled for NativeEdge as well.
When we look at some of those larger devices, and we want to move a little further up the stack at the manufacturing locations, we also have the ability to create HA clusters of some of these devices. So if you want some HA capabilities — maybe not quite on the floor, but still out at that manufacturing location — we can cluster these devices together, still running virtual machines on them, with the ability to move VMs across different devices. So we help with HA and failover capability. And as we build this out, we'll also have the capability of bringing additional Dell storage products into this. One of the things you see at this enterprise level is that when you have clustering and a software-defined storage layer, sometimes you need more storage — and if we didn't bring additional storage arrays into the mix, the only way to add more storage would be to add more nodes, which could leave you with unused compute and memory. So by bringing PowerStore and PowerVault into the mix, you have the capability of capturing that data out at the edge at scale as well. And we also know not everything is greenfield. There is existing infrastructure out there — as Jason mentioned, it could still be running Windows XP or other applications, on other hardware, not even Dell hardware. That's one of the things we can also do with NativeEdge: we can tie into that existing infrastructure — we call it brownfield; you can call it existing infrastructure. We can tie into existing VMware environments. We can tie into existing Kubernetes clusters.
We will also have the capability of providing a hardware enablement kit, if you will — an agent that we can install in your existing infrastructure so that NativeEdge can see it, and we'll be able to provision applications to that infrastructure, all from the single pane of glass of the NativeEdge orchestrator.
Speaker 2 25:47
And Jeremy — I've got to run to shoot a podcast, so I want to make sure I make this comment, because the obvious question for manufacturers is: what does this mean for my digital transformation journey? Why do I care? I know I need hardware. So I use this illustration all the time when I get this question. The implication is, when you're ready to scale — we have many large clients who have 40, 50, 100, 200 plants — if you deploy at scale manually, you are sending an advance team to the site for a month before you do the integration of your solution. Okay? You're building a whole team of people, you're flying them all over the world, and they're spending a month in a hotel preparing the site for integration. The reason I was so impressed with Dell NativeEdge is the same reason I was impressed with Portainer. I use Portainer as an orchestrator to manage applications using Docker and Kubernetes. If you've worked with Portainer, and you use Docker, Docker Compose and Kubernetes, you know how you can manage many nodes across an ecosystem, deploy common application stacks, manage them centrally, make one change and deploy it everywhere. There are many things to Dell NativeEdge, but the big implication for you is that this is really something like Docker Compose or Kubernetes for infrastructure, at a level above what we do with applications. A blueprint is essentially a YAML file — again, a Docker Compose file is a YAML file. If you look at a blueprint, it is literally a blueprint for the entire infrastructure: the hardware — and not just the hardware, but the devices on the board, right down to the Ethernet adapter; it literally has a configuration in the YAML file — plus the operating system, the applications, and then the solutions within applications. All of that is in the blueprint.
So you provision the blueprint, then you provision the server, you add it to the orchestrator, you ship it to the site, and then somebody plugs it into the network — and there's no advance team. That's the implication. It is profound; there is no way to overstate it. So I hope that what you take away from here — they're going to talk about a lot of NativeEdge features, but the implication for you is the centralized management and control of infrastructure, including the solutions. That's the biggest thing. It's the infrastructure control.
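To make the Docker Compose analogy concrete, here is a rough sketch of what a blueprint-style YAML file could look like. To be clear, the field names and layout below are invented for illustration — this is not Dell's actual NativeEdge blueprint schema — but it shows the idea of one declarative file covering hardware configuration, operating system, and applications together:

```yaml
# Hypothetical blueprint sketch -- NOT the real NativeEdge schema.
# One file declares everything the site needs, from NIC config up to apps.
name: plant-edge-stack
hardware:
  model: gateway-3200            # illustrative device reference
  interfaces:
    - name: eth0
      mode: dhcp                 # "right down to the Ethernet adapter"
os:
  image: edge-os-immutable       # the immutable edge OS layer
applications:
  - name: mqtt-broker
    type: container              # a Dockerized workload
    image: eclipse-mosquitto:2
    ports: ["1883:1883"]
  - name: scada-hmi
    type: vm                     # legacy software that can't be containerized
    base_image: ubuntu-22.04-cloudinit
    provision: ansible/hmi-playbook.yml
```

The point is the same one made about Compose: change a file like this once in the orchestrator, and every plant that receives a device bound to it gets the identical stack.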
Speaker 4 28:30
Yeah, thank you — and that's definitely what we're going to show; we have it all running here on premises as well. Thank you, Walker, appreciate it. When we look at it, as Walker mentioned, we've talked some about the hardware, but what about those applications? As he said, we can do Docker containers: we can take a Docker Compose file and bring that into a blueprint. We can also take your application — maybe you're not ready to go to that level — and just run it inside a virtual machine on one of these devices. Well, what can we run? You can run anything. This blueprint architecture that we've built is an open ecosystem. Now, from a Dell perspective, we have worked with different software providers out there to build the pipeline and feed our public catalog with some of the ISVs that are here today as well. But if you have a homegrown application, or one that we haven't blueprinted yet — you're able to blueprint anything. You're not limited to just what you see in the demo, what you see in the NativeEdge catalog, or anything like that. We have customers with their own applications, their own LLMs, whatever it may be, and we're working with them to blueprint those so they can deploy them across their environment at scale. So real quick — Jason touched on this, but I wanted to talk about what happened with this device when I plugged in the network cable and hit the power button. First, it all started when you ordered the device. Some of you may be familiar with our Dell IoT gateways, maybe with our Precision devices, our PowerEdge devices — but you can order these now as NativeEdge, and when you order one as a NativeEdge device, what we do is leverage the FIDO Alliance.
FIDO is Fast Identity Online, and we're leveraging the ability to do FIDO Device Onboarding. At time of order, when this device gets sent down to manufacturing, the first thing we do is create what we call a voucher that gets sent to you. It's sent through your Dell Digital Locker, which is just an online portal, and it's one voucher for each device. That voucher is your proof of ownership of the device. It's just a text file; if somebody wants to see it, we can show it to you in our booth as well. You take that voucher and bind it to the orchestrator, and that basically tells the NativeEdge orchestrator control plane that it needs to be ready for that device to come online. In parallel, we manufacture the device. Corresponding to that voucher is a cryptographic key that gets embedded and sealed in the TPM of that device, matching just that one voucher. So if somebody were to take this device home, it's a paperweight on their desk, because it can only be used with the NativeEdge orchestrator that I've associated it with. It doesn't matter if it's been provisioned or not. And again, they can't take it home and install Windows or Ubuntu and try to repurpose it for something else — it will only boot NativeEdge OS. When it arrives on site, the only prep we have to do from a network perspective is this: from my manufacturing location I need outbound port 443 — that's it — and then I need inbound port 443 to my NativeEdge orchestrator. What this device did is it booted up, reached out to the NativeEdge orchestrator that we have running on the floor behind us, and basically said, hey, I'm here. The orchestrator looks at its voucher database and makes sure the device belongs to it. It'll do tamper detection.
It'll do secure component verification on our PowerEdge platforms that support it, to make sure the device hasn't been tampered with in any way, shape or form. At that point in time is when we actually provision the steady-state operating system on the device. That's important, because — we've had a couple of conversations with folks about this — you may have spares, a couple of these sitting in a box on a factory floor. Let's say one sat in a box for a year; we've probably put out four different releases of NativeEdge in that time. All you have to do is take the device out of the box and plug it in. It's going to register, and it's going to get the newest version of NativeEdge OS, or whichever version you tell it to provision down to that device. I don't have to do any prep work on that device whatsoever, even though it's been sitting in a box for a year. Once we get the devices onboarded — that's really cool, you'll see that we have this device provisioned, it's neat — but what's the next piece? That's what Walker was talking about: our blueprint technology. You can think of it kind of like a Compose file; as Walker mentioned, it's made up of YAML files. We can deploy just a Docker Compose: we can take a Docker Compose file, import it into our catalog and create a blueprint out of it automatically — I'll show you that as part of the demo. You can also have a blueprint say: go out and grab this version of Ubuntu, or this version of Windows — a cloud-init image that's approved for your environment, your enterprise — and then we can leverage Ansible playbooks, shell scripts, all sorts of different technologies within that blueprint to not just provision a VM, but provision the software that you want to run inside that virtual machine.
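The voucher handshake described above can be sketched as a toy model. To be clear, this is not the real FDO protocol or any Dell API — the class, field names and hashing scheme below are invented for illustration — but it captures the core idea: the orchestrator only provisions a device whose TPM-sealed key matches a voucher that has been bound to it, so an unbound or stolen device is a paperweight.

```python
# Toy sketch of voucher-based onboarding (illustrative only, not the FDO spec).
# Each device ships with a key sealed in its TPM; the matching voucher goes
# to the customer, who binds it to their orchestrator before the device boots.
import hashlib
import secrets

class Orchestrator:
    def __init__(self):
        self.vouchers = {}  # voucher id -> expected key fingerprint

    def bind_voucher(self, voucher):
        # Customer uploads the voucher (e.g. from an online portal) and
        # binds it to this orchestrator's control plane.
        self.vouchers[voucher["id"]] = voucher["fingerprint"]

    def onboard(self, device):
        # Device phones home; orchestrator checks the device's sealed key
        # against its voucher database before provisioning anything.
        fingerprint = hashlib.sha256(device["tpm_key"]).hexdigest()
        expected = self.vouchers.get(device["voucher_id"])
        return expected is not None and expected == fingerprint

def make_device():
    # "Factory" step: seal a key in the device and emit the matching voucher.
    key = secrets.token_bytes(32)
    device = {"voucher_id": "dev-001", "tpm_key": key}
    voucher = {"id": "dev-001", "fingerprint": hashlib.sha256(key).hexdigest()}
    return device, voucher

device, voucher = make_device()
orch = Orchestrator()
orch.bind_voucher(voucher)
print(orch.onboard(device))   # True: voucher bound, key matches
stolen = {"voucher_id": "dev-001", "tpm_key": secrets.token_bytes(32)}
print(orch.onboard(stolen))   # False: wrong key -> paperweight
```

In the real flow the check is mutual and cryptographically signed, and the rendezvous server helps the device locate its orchestrator; this sketch only shows the one-sided ownership check.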
And I think one of the key things that I really like about this slide is the second bullet point down in the dark blue: it gives you the ability to streamline those day-two operations. So what we'll show you as part of the demo, for the Prove It side of the house, is that we worked with some of our partners, litmus and ignition. I'll show you how quickly and easily we can deploy litmus edge and ignition as well. We already have them running, so you can see the dashboards that we created, but we'll also show you how quickly and easily we can provision those.
Unknown Speaker 34:13
Let me just exit out of the presentation.
Speaker 4 34:17
All right, so now we're on a live system. All this equipment is running here, right? What you can see here is the native edge dashboard for the native edge orchestrator. You can run it anywhere, as Jason mentioned: you can run it in your factory, you can run it in a data center, but you have the capability of going out and seeing all of your different devices. Now, obviously, when you first install it, you're not going to see six devices, a couple virtual machines and a couple deployments; you're going to see zeros down there. So the first thing you need to do is upload those vouchers that we talked about. This is built to run in a connected state or a disconnected state. It doesn't need any internet access whatsoever. If you have it, it makes things a little bit easier, though, because I can create a secure connection between my orchestrator and my Dell Digital Locker instance and just have the vouchers automatically imported if I wanted to. Or maybe I'm not connected to the web, and my orchestrator can't get to Dell Digital Locker. Well, I can go to Dell Digital Locker, download those vouchers, come down here, click on Browse, and upload them. The Digital Locker can create a zip file, or if you download the vouchers and you have 10 of them, you can select all 10. You can do multiple at once; you don't have to iterate through 10 different vouchers if you didn't want to. Once those vouchers are loaded into the system, you can also see them down here on the entitlement screen. What happens when this device first boots up? It boots a very small OS that basically has enough intelligence to go out and find the orchestrator it's associated to. That's facilitated, if you look at the FIDO Alliance and the FIDO Device Onboarding spec, by what you see here: the rendezvous server.
The rendezvous server is a simple DNS entry in your environment, or we can also set it manually if we need to. Again, it doesn't need a full network stack; we can plug into a hub and configure everything to come online if we need to. But you'll see here, all of our vouchers are registered with a local rendezvous server, and that's because it's part of the native edge orchestrator application, that container-based application. One of the containers that's running is our FIDO rendezvous server. But Dell has also published one at rv.dell.com, so if those devices do have connectivity to the outside world, you just power them on, and the system will go to rv.dell.com, or the rendezvous server locally, which says, hey, just go to this IP address or this hostname in the environment; that's your orchestrator. Once we make sure that the device hasn't been tampered with, it then looks at this policy screen. The policy screen is where you specify what version of native edge OS you want to provision down to that device. You can see we've got two in here. If I wanted to change it, I just make a simple change and click Apply. It does not impact any previously onboarded devices, but any net-new device will get that version. You can also upgrade the existing devices to that version when you choose to do that; we'll give you updates to the OS, to firmware and to BIOS for that hardware as well. So you can do all of those from the upgrade path itself. So let's look at the devices. I have a couple devices on here. Actually, is this visible in the back, or do I need to zoom in a little bit? How's that? Okay. So you can see all the devices that we have onboarded here. And if we look down here, this device, this is this device right here, right?
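The rendezvous lookup described a moment ago, where a freshly booted device asks a local DNS name or rv.dell.com which orchestrator it belongs to, can be sketched like this. This is an illustrative model: the directory, host names, and function names are assumptions, and the real exchange follows the FIDO Device Onboard protocol, not a dictionary lookup.

```python
# Illustrative sketch of the rendezvous step. In reality this is the FIDO
# Device Onboard TO1 exchange over the network; here it is a simple lookup.
RENDEZVOUS_DIRECTORY = {
    # device GUID -> orchestrator endpoint, registered when the voucher was bound
    "gw-5200-0001": "https://orchestrator.factory.local:443",
}

def query_rendezvous(host, device_guid):
    # stand-in for a network call to the rendezvous server at `host`
    return RENDEZVOUS_DIRECTORY.get(device_guid)

def find_orchestrator(device_guid,
                      rendezvous_hosts=("rv.factory.local", "rv.dell.com")):
    # try the local DNS entry first, then the public rendezvous service
    for host in rendezvous_hosts:
        endpoint = query_rendezvous(host, device_guid)
        if endpoint:
            return endpoint          # device now phones home over outbound 443
    return None

print(find_orchestrator("gw-5200-0001"))  # https://orchestrator.factory.local:443
```

A device whose GUID is not registered anywhere gets `None` back and simply keeps waiting, which matches the claim that an unbound box does nothing useful.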
If I scroll down on the left pane here, you can see that it was onboarded at 10:24 a.m., so about 15 minutes ago it finished its onboarding process. That was live in this room itself, right? Once we have a device onboarded (I'm going to back up real quick to one that we have some stuff running on), you can see any of the virtual machines that we've deployed down to that device. I can stop the virtual machines and I can start them. I can go in and simply choose one of those virtual machines if I wanted to, get a console connection to that virtual machine, and see information about it. If I need SSH, I could SSH to it, whatever else it may be. I can also see any containers; we can run native container workloads in addition to VMs on this device. You'll be able to see any of the metrics, any of the network configs that we need to specify for some of our virtual network settings, and also the hardware. One thing that I want to talk about from the hardware perspective is where we mentioned you can't just plug USB devices in here; native edge OS isn't really going to recognize those devices. So if I come over and click on Peripherals, we can see a 5200 gateway in this instance. Again, it's one of the devices that's sitting on the table in our booth. Now, if I scroll down over here on the right side, I can see all of the different peripherals: my serial ports, my USB ports, if I had GPUs, if I had additional cards in the PCIe slots themselves, onboard video. They're not really in use. If we plug a monitor into this, we're going to see basically native edge OS boot and a flashing cursor. If I plug in a keyboard, you can hit any combination of keys and you will not get a login prompt. But I could pass through that video to a virtual machine.
And let's say that I was running Ubuntu on the virtual machine. I would see the Ubuntu instead of native edge OS on that monitor, and then I would be able to interact with that VM running Ubuntu itself. But also I could pass through the serial ports, the GPU, the USB ports to the application, which is what we see over here on the right side if we had anything passed through to the application itself. So you have the full capability of this hardware device to use any of the ports, from any of your applications, to connect to any of the devices that you need down at that lower level of the Purdue model itself, as Jason mentioned. Now, one other thing that we talked about in the presentation that we wanted to highlight is the ability to bring in HA clustering. It's very simple and it's very easy. All we have to do is click on Create Cluster. I have three of the Edge Gateway 5200s in the back there, and we just type in "native edge ha cluster".
Unknown Speaker 40:50
won't worry about the typo there.
Speaker 4 40:54
And then we come down here, and we can see all of the different nodes in the environment. Right now, what I want to do is select a node, and this is what we call the leader node. All this really does, when I click on Next, is filter down to the nodes that are similar to what I selected on that first page. So now I just choose what we call the follower nodes, which are just nodes two and three, and maybe four if I had an additional device. I tell it what network I want to use for the communication of that cluster capability on the back end. I tell it which devices I want to participate in the shared data store for that software-defined storage layer. I click on Next, I click on Create, and the system is going to create a cluster. Now, if I deploy an application to a node inside that cluster, I can move that application non-disruptively across the other nodes. So if I'm having planned maintenance, I can make sure that I don't have any downtime. If I have unplanned maintenance or a power event on one of those devices, the virtual machine will just move over to one of the other nodes within that cluster. You'll have a little bit of downtime, but it'll be less than a minute or so for that VM to move over, power up, and the application to start.
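The failover behavior just described, where a VM on shared storage restarts on a surviving node when its host goes down, can be sketched as follows. This is an illustrative model under stated assumptions: node names, the least-loaded placement rule, and the `Cluster` API are invented for the sketch, not taken from the product.

```python
# Illustrative sketch of HA failover: the shared data store is what makes it
# possible to restart a VM on another node without copying its disk.
class Cluster:
    def __init__(self, leader, followers):
        # node name -> set of VMs currently running there
        self.nodes = {n: set() for n in [leader] + followers}

    def deploy(self, vm, node):
        self.nodes[node].add(vm)

    def node_failed(self, failed):
        vms = self.nodes.pop(failed)            # the node is gone
        for vm in vms:
            # place each VM on the least-loaded surviving node (assumed policy)
            target = min(self.nodes, key=lambda n: len(self.nodes[n]))
            self.nodes[target].add(vm)          # brief outage while it powers up

cluster = Cluster("gw-1", ["gw-2", "gw-3"])
cluster.deploy("ignition-vm", "gw-1")
cluster.node_failed("gw-1")
print({n: sorted(v) for n, v in cluster.nodes.items()})
# {'gw-2': ['ignition-vm'], 'gw-3': []}
```

Planned maintenance is the same movement triggered deliberately before taking a node down, which is why it can be made fully non-disruptive.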
Speaker 4 42:12
Oh, we're doing good on time, all right. So the next piece is the applications, right? This is what everybody really wants to see. Again, the devices are there; they're a means to provide you the outcome inside your manufacturing location itself, right? As Walker said, these blueprints provide you with the capability to deploy these applications not to one or two devices, but to tens or hundreds of devices, even at once. You can even create rules, if you want to, to automatically provision these applications when a device comes online, if you want to go that far down the automation stack itself. Now, as Walker mentioned, the blueprints are made up of YAML files. So we've got a couple here that we've already preloaded inside our orchestrator itself. These are ones that you can get off of our support site, or if you are connected to the internet, you can download them from our catalog. If you don't want to go down the full path of a full-blown blueprint, and you just have a virtual machine image file or an ISO or something you want to deploy, you can create a virtual machine blueprint directly from that VMDK, qcow2, VHD, ISO, whatever it may be. You just click on Create Virtual Machine, provide it with some configuration information, and then you come down here and you say, here's the file I want you to bring into the orchestrator and into the catalog. We can also go out, if you've moved into Docker container applications and you have a Docker compose file: all we need to do is basically just say container blueprint, give it a version, give it an icon if you want, and then all you've got to do here is paste the Docker compose file. That's it. So if I come over here, I've got just a simple Docker compose file for ELK, which is Elasticsearch, Logstash and Kibana. I just come over and paste it, and the system, in real time, parses that compose file. And if I scroll down, oops, I've got to grab this one.
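The compose-import step just described, where the system parses a pasted compose file and surfaces its variables and defaults for the operator to override, might look like this. This is an illustrative sketch: the sample compose text, the `${VAR:-default}` extraction, and the function name are assumptions, not the product's parser.

```python
import re

# Illustrative sketch of the compose-to-blueprint import step: scan a pasted
# Docker Compose file for ${VAR:-default} variables so the operator can
# redefine defaults before the blueprint is saved to the catalog.
compose_text = """
services:
  elasticsearch:
    image: elasticsearch:${ELK_VERSION:-8.13.0}
  logstash:
    image: logstash:${ELK_VERSION:-8.13.0}
  kibana:
    image: kibana:${ELK_VERSION:-8.13.0}
    ports:
      - "${KIBANA_PORT:-5601}:5601"
"""

def extract_variables(text):
    # returns {name: default} for every ${NAME:-default} occurrence
    found = re.findall(r"\$\{(\w+)(?::-([^}]*))?\}", text)
    return {name: default for name, default in found}

variables = extract_variables(compose_text)
print(variables)   # {'ELK_VERSION': '8.13.0', 'KIBANA_PORT': '5601'}
```

Once the variables and their defaults are known, generating an input form and saving the result as a versioned catalog entry is straightforward, which matches how quickly the import happens in the demo.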
If I scroll down, it'll also pull out any variables that were defined in there, in real time. Very easily, I can redefine the default values if I want to, and then save those values. Then all I have to do is click on Next and click on Add, and the system is going to automatically create a blueprint that allows me to deploy ELK to any of the devices in my ecosystem itself. So I just come over and hit refresh. There we go, we can see there's my container blueprint, right? Simple and easy. Now, a lot of the blueprints we've already previously created. Let's look at ignition, for example. I click on Deploy, so I choose the software that I want, and I choose which device or devices I want to deploy that application to. I'm going to choose this top one; again, I could choose multiple. I click on Next, and it's very simple. We just give it a name, oops. We can configure the network configuration as well. We can choose which data store, which storage device or devices on that physical device, I want to use to store the VM and the VM data. I configure the VM: CPU, memory, storage, OS type. I already chose DHCP, so I don't have to worry about any of the networking, but I could also set static instead; I could uncheck DHCP. I come down and choose my OS disk size. I have a network segment called bridge that I previously created, so I say, hey, I want it to reside on that bridge network segment, so I have my network connectivity for that virtual NIC itself. And then I come down and we have a couple secrets, right? One of the things that's nice about native edge: the native edge orchestrator never really talks directly to the virtual machine, because that would require inbound ports at that edge location. What we actually do is have a private network, essentially, running within this device.
The native edge orchestrator will connect down to this device, and within the device, it will go from native edge OS and connect using these SSH keys into that VM to run the Ansible playbooks or shell scripts associated with the blueprint itself. Then the last thing that I do is define what we call an artifact config secret. Sounds kind of cool, kind of neat, but it's just a JSON-formatted file, one that either we previously created, or we can click on the little box here and create right now. What it's doing is telling the system where to go to get your version of Ubuntu or Red Hat, which is what ignition would run on, and also where to get the ignition runtimes. So I could have three different secrets for version one, version two and version three, and use the same blueprint to deploy different versions based upon where my manufacturing location is at for that specific application itself. You don't have to refactor the blueprints over and over for different versions, assuming the install process doesn't change drastically; as long as we can use the same shell scripts or Ansible playbooks, we're perfectly fine. Then we click on Next and we click on Deploy. Right now I'm deploying to one device, but I could have deployed to every device back there on that floor itself. So what's happening? The easiest way to show that is if we come over and click on Logs. Now, I'm just zooming out; I know you're not going to be able to read this, but you can see each one of these circles is basically a different task that's being performed. What the native edge orchestrator does is take that blueprint, automatically parse it, and generate tasks that it's going to run through, which is essentially each one of the circles in that execution graph.
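The artifact config secret described above, a JSON document telling the blueprint where to fetch the OS image and application runtime, might be handled like this. Everything here is illustrative: the field names, URLs, and version numbers are invented for the sketch, not the product's documented schema.

```python
import json

# Illustrative sketch of an "artifact config secret": one blueprint, many
# versions, selected by which secret a site points at. Field names and URLs
# are assumptions for the example.
artifact_config_v1 = json.dumps({
    "os_image_url": "https://mirror.example.com/images/ubuntu-22.04.img",
    "app_runtime_url": "https://mirror.example.com/runtimes/ignition-v1.run",
})

def resolve_artifacts(secret_json):
    cfg = json.loads(secret_json)
    return cfg["os_image_url"], cfg["app_runtime_url"]

os_url, app_url = resolve_artifacts(artifact_config_v1)
print(app_url)   # https://mirror.example.com/runtimes/ignition-v1.run
```

Swapping in a `v2` secret with different URLs changes what gets installed without touching the blueprint itself, which is the refactoring-avoidance point made above.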
It's going to connect to that physical device, create that VM, connect to that VM, use whatever technology it needs to use to install that application itself, and then come back when it's all done. And if I click on General, you'll see down here there are the capabilities. What we did on that first screen is we clicked on inputs: that's everything that we populated, the bridge network that I gave it, the SSH keys, the secret keys associated with the deployment. When the system is done, the blueprint then goes out, queries what it just did, and tells the end user. So it's going to populate these capabilities, or maybe you want to call them outputs. What I'll be able to do when this is done is take the endpoint, copy this URL, bring it up in a browser, and I'll see ignition running. The same thing can be done, and this is again just part of what we had, for litmus. You'll see blueprints: again, it's taking that recipe and converting it into tasks and outcomes. So I just choose a device again, and you'll see it's very similar. I have my inputs, so I'm going to say litmus demo. I have the exact same inputs here: CPU, memory, storage. This blueprint was written a little bit differently, so I have some drop-downs, so I don't even have to worry about mistyping anything. Then we just come back down towards the bottom, and we can do a couple different things here as well. We have the capability of specifying an MQTT broker, so we can provide, again, some of those day-two configuration options. We can provide the management user information, and then we also have a litmus config secret name; again, this is just telling it where to go to get those litmus installation capabilities. You click on Next, and you click on Deploy.
It's doing the exact same thing. Again, I could have done this to every device in that environment, but for the sake of the demo, we'll just do it to one. Again, the execution graph looks a little bit different. Each software application that we deploy, each outcome that we enable, gets there a different way, but it's going through each of these different steps inside the environment itself as well. And so we can see they're both in progress. Ignition is actually just about completed, so we'll check on that one real quick, and we can see all the little green check marks of everything that it's already completed. So it's just finalizing the ignition install. Usually this takes about seven minutes. I was actually working with a customer; we were testing this out in his environment. We spent half a day standing up native edge and getting the devices onboarded, and then he had to take the rest of the day for some other tasks. We came in in the morning, we actually met with him at 8:30, and he had gone into the office at 6 a.m. so he could start playing around with this. We sat down and he said, I've been playing with this for a couple hours. He goes, my intention this morning was to come in here and basically tell you this product is horrible and tell you everything that's wrong with it. And he says, I can't do that. He goes, I have automation that allows me to deploy ignition (he's an ignition customer), I have orchestration that allows me to do it, but I can't do it this fast. I can't do it this efficiently.
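The execution graph described above, where the orchestrator parses a blueprint into dependent tasks and runs each once its prerequisites finish, can be sketched with a standard topological sort. The task names here are assumptions matching the steps narrated in the demo, not the product's actual task set.

```python
# Illustrative sketch of the execution graph: tasks with dependencies, run in
# an order where every task's prerequisites complete first.
from graphlib import TopologicalSorter

# task -> set of tasks it depends on (hypothetical names for the demo's steps)
blueprint_tasks = {
    "create_vm": set(),
    "connect_vm": {"create_vm"},
    "install_app": {"connect_vm"},
    "query_capabilities": {"install_app"},   # populates the endpoint URL output
}

order = list(TopologicalSorter(blueprint_tasks).static_order())
print(order)   # ['create_vm', 'connect_vm', 'install_app', 'query_capabilities']
```

Real blueprints would have branching graphs (which is why each application's graph looks different), and independent branches could run in parallel; a linear chain keeps the sketch readable.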
Unknown Speaker 51:27
So we'll let this continue to run,
Speaker 4 51:30
and while that's running, we wanted to also show you that we have the capability of going out and configuring these applications. So what I want to do here is pull up some of the dashboards that we had inside the environment, and you can see that we worked with ignition and litmus. We're going, whoops, we've got to go to press 104, there we go. You can see it's online. I don't know how well it shows up there, but Jason, do you want to talk a little bit about some of these dashboards as well?
Speaker 3 51:59
Yeah. So this is a pretty standard dashboard; you guys are all familiar with this. This is an OEE calculation at the end of the day. We're showcasing, of course, ignition, which is an excellent software suite for modernization of SCADA. But I think the most important thing to point out is that we are sending data from litmus, which we're also taking in and passing over to ignition; that's the way this blueprint is fully integrated. With a lot of the virtual machines from other companies that you guys are familiar with, one of the number one risks you have is whether or not that data is going to get slowed down to the point where you can't visualize it, or you can't send it over to your SQL databases or publish it to UNS in a timely manner. You get to the point where your graphs are not showing correctly, you have lag, you have issues of that sort. Well, native edge as an OS was designed with that in mind. We have no lag whatsoever. We're accomplishing things that took the other large VM company several decades to try to perfect. Yeah, thank you.
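The dashboard shown is an OEE calculation; the standard formula is availability times performance times quality. The sketch below uses that standard formula with made-up example numbers, since the demo's actual inputs aren't shown.

```python
# Standard OEE formula: availability x performance x quality.
# Inputs below are illustrative, not the demo's data.
def oee(run_time_min, planned_time_min, ideal_cycle_s, total_count, good_count):
    availability = run_time_min / planned_time_min
    performance = (ideal_cycle_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

# Example: 400 of 480 planned minutes running, 1.0 s ideal cycle time,
# 20,000 parts produced, 19,000 good.
print(round(oee(400, 480, 1.0, 20000, 19000), 3))   # 0.66
```

The reason lag matters for a dashboard like this is that all three factors are computed from streamed counts and run-time samples; if the data pipeline falls behind, the displayed OEE is stale even though the formula itself is trivial.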
Speaker 4 53:06
And we can see now it worked out perfectly. Ignition is now completed; that blueprint has deployed, and I can see here's my endpoint. I can simply come over, click on the copy URL, open up a new browser tab, and come in, and this will load. We can already see it starting to load, and now I can choose if I want to deploy litmus standard or litmus edge, I'm sorry, ignition standard or ignition edge. We're actually working with ignition to further bring in automation, so in that input section you can say ignition edge or ignition standard in the environment, as opposed to coming in here and finalizing the installation, again taking those day-two operations even further down the stack itself. And we should be getting very close to the litmus deployment completing. One of the things that's cool with the litmus blueprint is we actually have a recovery ISO that we boot to, so we can resize the disk. We're just using the OVA that litmus provides; we strip out the VMDK, and that's what we provide to the system, and then we can resize the disks associated with it. It looks like, as I was chatting, it did complete. Let me just hit refresh on the browser here, and we'll see. Oh, maybe it didn't complete all the way; we'll see the inputs show up here in just a second as well. Now, while that's finalizing, let's go back to our endpoints and look at them. I know here is the physical native edge device in the back that I deployed to, and I can see I have my ignition and my litmus VMs running on that system from that blueprint itself. Again, I could get a console and look at those. But also, let's go back and look at these endpoints real quick. And I can also see that while we were doing the demo and looking at the blueprints, I created a three-node cluster of three of the gateway devices in the back as well, right?
Very simple, very easy. If I had virtual machines run. On it, which I would need to provision. I could see those, but I could also, once we have a VM we can, non disruptively again, move those VMs autumn, or, I'm sorry, proactively if we wanted to. I can see all of the shared networks across those devices, and then I can also see my shared data store, right there's that shared storage that software defined storage layer that spans those three devices that allow us to move those VMs from one to the other. Now, one of the things I also wanted to call out is we did everything here from from a UI, but there is an exposed API that you can leverage as well. Right? We're working with different integrators out there to leverage the API. One of those, when you look at some of the convergence of it and OT is around ServiceNow, there we go. So this is just a quick demo, but it's showing some of this closed loop integration for the edge that we have with ServiceNow. So end users can actually go in and say, I want to deploy this manufacturing application to this device on my manufacturing floor, and I can drive it all from the ServiceNow catalog. I never have to touch native edge orchestrator in the UI associated with native edge itself, right? So you can see there's communication that we have between native edge and ServiceNow. We go into ServiceNow and we basically enable or add native edge into the ServiceNow catalog, and this allows us to provide that service inside the ServiceNow catalog. So as this goes through, we're going to go out, we're going to request the app. Native edge is going to be enabled inside that ServiceNow catalog. Now an end user can go in, they can log in to their ServiceNow instance, and as you'll see here, we will be able to go out. We can see, you know, how can Dell native edge help you? We can say, Let's go request something. I want to request an application. 
I want to put it out at this location, and I want to go out and use it with the following configuration. If I pause it right here, if I can get there fast enough: this is the same thing we saw, the input screen. So we just leverage the API that allows the system to go out and say, go do this. And now, if we log into the UI, we can see that we're able to deploy that application without ever touching the native edge orchestrator directly; it's all called out through those APIs themselves. Simple and easy, but a great way to scale operations out at the edge across an entire environment.
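Driving a deployment through the exposed API, the way the ServiceNow integration does, might look roughly like this. The endpoint path, payload fields, and auth header here are illustrative assumptions, not the documented native edge API; only the pattern (a catalog front end POSTing the same inputs the UI collects) comes from the talk.

```python
import json
import urllib.request

# Illustrative sketch: build the HTTP request a catalog front end (such as
# ServiceNow) could send to an orchestrator's API. Path and fields are
# assumptions for the example.
def build_deployment_request(orchestrator_url, token, blueprint, device_id, inputs):
    payload = json.dumps({
        "blueprint": blueprint,
        "target_device": device_id,
        "inputs": inputs,                  # same fields as the UI input screen
    }).encode()
    return urllib.request.Request(
        f"{orchestrator_url}/api/v1/deployments",
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_deployment_request("https://orchestrator.factory.local", "demo-token",
                               "ignition", "gw-5200-0001",
                               {"cpu": 4, "memory_gb": 8, "network": "bridge"})
print(req.get_method(), req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` and then polling a status endpoint would mirror what the UI's execution graph shows interactively.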
Unknown Speaker 57:30
Almost done, we'll get there.
Speaker 4 57:34
So the other things that I just want to highlight that we can do with native edge as well, because some folks have come up and asked us in the booth: we can tie into your existing LDAP infrastructure. When you look at user authentication, we can configure and tie into existing LDAP, so you don't have to create the users within the native edge orchestrator, but you can if you want to. You can see right here we have just an administrator user that's been created inside the environment. From a security perspective, I can see somebody's logged in in the back of the room. I'm not sure which one they are, probably this bottom one. I could log them out if I wanted to; I could log out that laptop that's logged into the native edge orchestrator. Or I can log out everybody: if there's a bunch of people in here and I'm not sure what's going on, I can take care of that if I needed to inside the environment. We can also define rules. These rules could be for monitoring, for events, or even to go out and provision a specific application. One of the things we can do is from an endpoint perspective. Now, here I only have like seven endpoints, but what if I had 700 endpoints, or 1,200 or 2,000 endpoints, in my native edge orchestrator? One of the other things that's nice in this environment is we can create what we call, whoops, what we call resource tags. Those resource tags are just key-value pairs that you specify. You could use location, you could use application, model, whatever you want to use. Those tags can be leveraged to help filter this interface, and they can also be used for those rules. So as soon as you upload the voucher into the environment, you can create a tag for that device, and you could say application equals litmus.
And then you could create a rule that says, any time a new device with the application equals litmus tag associated to it comes online, deploy litmus. So we can take this automation as far down that stack as we want to. I think we've gone long enough on the demo. I know there's a way to ask questions through the polling, and we weren't going to take the entire hour and a half; we knew that. What I want to do is turn it back over to Jason real quick so that he can wrap this up and show you some additional information about native edge. Let's make sure it goes through.
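The tag-plus-rule automation just described can be sketched as a simple match over key-value pairs. The rule shape and blueprint names are invented for the sketch; only the idea (tags on a device trigger automatic blueprint deployment when it comes online) comes from the talk.

```python
# Illustrative sketch: rules match device tags, and matching rules name the
# blueprint to deploy when the device comes online.
rules = [
    {"when": {"application": "litmus"}, "deploy": "litmus-edge-blueprint"},
    {"when": {"application": "ignition"}, "deploy": "ignition-blueprint"},
]

def on_device_online(tags, rules):
    deployed = []
    for rule in rules:
        # a rule fires only if every key-value pair in its condition matches
        if all(tags.get(k) == v for k, v in rule["when"].items()):
            deployed.append(rule["deploy"])
    return deployed

print(on_device_online({"application": "litmus", "site": "austin"}, rules))
# ['litmus-edge-blueprint']
```

The same tags that drive the rules also filter the endpoint list, which is what keeps a 2,000-device orchestrator navigable.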
Unknown Speaker 59:58
Maybe there we go.
Speaker 3 1:00:02
There you go. So we can't do this alone. We have a very large ecosystem of partners, as you can see here, that give us all the capabilities for IT/OT convergence, digital twins, computer vision and quality improvement out there on the factory floor. And so we're always looking for partners. We want to develop more blueprints so that we can provide end solutions to your customers and ours as well. So if you can, please take the time to scan this QR code. It'll send you directly to our portal, where you can begin the onboarding process if you're interested in partnering with Dell, and we can build a business and a relationship together. Also visit us in our booth; we're right out here. I'm sure all of you have seen us, and we can answer your questions and show you a demo live. Thank you.
Speaker 5 1:00:54
Thanks, guys, and yes, you came in early, which actually brings us ahead of schedule. So if we can, we'll put up the other QR code for the questions and answers that we're going to have here. Now, this one's a little different than some of the others, so I'm going to ask our famous question of all the groups, but some of the questions don't quite apply to you guys, so I just want to make sure. The first one I like to ask is anchoring against something else: an alternative of what people would do if they didn't do this, and then what makes yours better than that alternative. Not to focus on brands or products, but on ways of doing it.
Speaker 3 1:01:31
Yeah. So like I explained, at the end of the day, the old way of doing it, even if you're using the virtual platforms that are out there, is an extremely manual process. It's going to take you six-plus months, if you're a large company, just to accomplish it, and as soon as you deploy that software, you're instantly outdated; your Industry 4.0 turns into Industry 3.0 almost immediately. We fast-track that. Everything is done up front in the orchestrator, we develop your blueprints, you plug in the devices, and everything is deployed. You should be up and running in weeks.
Speaker 5 1:02:05
All right, thank you. So let's get to some of these. First question: the ownership voucher concept is secure, but seemingly trades off with your customers' ability to repurpose, recycle or transfer ownership. Please address.
Speaker 4 1:02:19
Yeah, absolutely. It's a great question, and I should have probably highlighted some of that in the demo. You have the capability on any of these devices to select that device and factory reset it, so you can repurpose it across your environment. Once you reset the device, you can delete it from an orchestrator and move it to another orchestrator inside your ecosystem as well. When we look at the licensing capabilities of native edge and the way that it's built out, you get a platform access license that allows you to deploy 1, 10, 50, 100, however many orchestrators you need in your ecosystem to meet your goals and achieve your objectives. So you have the capability of moving that voucher to different orchestrators. You will want to reset the device first to bring it back to factory defaults, but you can transfer and repurpose those. So it's a great question, and I should have addressed that, so thank you.
Speaker 5 1:03:10
Yeah, the cost question, which is up there. I didn't know if the cost and timeline applied to this one, because it wasn't a traditional Prove It, but if you guys want to answer this, go for it.
Speaker 3 1:03:21
Yeah. So pricing is very dynamic, as you would imagine; scale matters. We have starter kits that are as low as $15,000, really, and that's for one factory with an orchestrator, and that's list price. At a larger scale, you're talking about $60,000 to $80,000 for the orchestrator, and $200 to $300 per endpoint. Now, once we start talking to you and really understand the full scope of what's going on, that's clearly going to change, but that'll give you a rough order of magnitude.
Unknown Speaker 1:03:55
What is native edge OS based on?
Speaker 4 1:03:58
Yeah, it's in the questions. It's based off of our own Linux distribution; we've kind of forked off Linux itself, but native edge OS is based off of Linux. I see a question in there too about VMware and their edge compute stack: it's not based on any of that technology whatsoever, so that helps from a cost perspective.
Speaker 3 1:04:19
Does it belong in IT or OT? That's a really good question. It really depends on the strategy of your company. I'd like to say it belongs to both at the end of the day, but we understand there are conflicts and politics within companies when it comes to this, so it's up to you guys. The way that it works best is when your IT folks and your OT folks are working together in that orchestrator, talking about the outcomes that they want to achieve, and ultimately, at the end of the day, it's owned by both.
Speaker 5 1:04:49
I found it: does this edge device have any kind of load balancing or redundancy? Does it involve multiple devices, or are there services in the device that can handle this?
Speaker 4 1:05:02
Yeah. When you look at load balancing and redundancy for the edge device itself, that's where we start looking at that HA cluster that we built. As we continue to advance our clustering technology, we will bring that in over the next couple of releases. Right now the load balancing is a little bit manual, but we will get to automated load balancing very, very shortly.
Speaker 5 1:05:21
I like the next one: minimum requirements for the brownfield endpoints.
Speaker 4 1:05:25
So right now, in the first wave, when we bring those in: if you have VMware, we can connect to vCenter — it's just a simple page that pops up and you give it the credentials — and the same goes for any of your Kubernetes clusters. When we look at some of the other brownfield capabilities, it'll initially be Ubuntu 22.04-type systems, where we can put an agent inside those. And because I saw a question there: we already have, for example, a 3200 gateway, and there are some things we can work with you on for potentially some of those Dell devices as well.
Speaker 5 1:05:58
How does NativeEdge compare with VMware Edge Compute Stack?
Speaker 3 1:06:03
Yeah. So ultimately, you guys are probably aware we divested from that company about three years ago, and this is our answer to it. First, this is a lower-priced option, if anything. Second, it's a highly capable — far more capable — solution. At the end of the day, we're talking about removing the need for consistent IT on the factory floor; VMware Edge is not going to do that.
Speaker 5 1:06:25
This one's an interesting one. Where are these PCs and components made? Would these still be available in the case of rapidly declining exports from the PRC and Taiwan?
Unknown Speaker 1:06:36
That's a supply chain question.
Unknown Speaker 1:06:40
Is that the People's Republic of China? Yeah.
Speaker 3 1:06:43
Maybe come see us at our booth for that question. That's a little more complex.
Speaker 5 1:06:48
Here's the next one: what is the expected life of NativeEdge, from a hardware standpoint?
Speaker 3 1:06:52
You can answer — okay. So from a hardware perspective, you're going to have the shelf life of whatever the device is — 3, 5, 7 years, depending on what it is — and you can work with our OEM organization to get even extended life on that. But I think what's really important about this is, no matter what the end of life is, when that time comes, you simply swap out your old device with a new one, and as soon as you plug it in, that personality is automatically loaded — something that you didn't have in the past.
Speaker 5 1:07:25
What about Rockwell software? Can it be deployed?
Speaker 4 1:07:29
Yeah — again, blueprints are an open ecosystem. Todd and I actually just had a good conversation with some of the folks from Rockwell, and we're looking at bringing them into that ecosystem and helping build out their blueprint inside the environment.
Speaker 5 1:07:44
I saw Bosch Rexroth ctrlX advertised near your booth. How does it integrate with NativeEdge?
Speaker 3 1:07:50
I mean, ultimately, at the end of the day, it's a virtual machine that we blueprint and can deploy, just like every other software package. And once again, we get all the latency benefits I was explaining before — NativeEdge OS does not cause any lag for ctrlX.
Speaker 5 1:08:05
Dell NativeEdge versus MS Hyper-V: fight.
Speaker 4 1:08:11
I mean, it's just a different approach to technology, right? Again, we're using that Linux variation at the hypervisor level. But when you look at what we're able to do with NativeEdge from a scale perspective — going down to single-node devices, being able to run on a gateway itself — I think that's one of the big advantages we have with NativeEdge.
Speaker 5 1:08:34
Is NativeEdge available to be installed on existing Dell servers and microcontrollers?
Speaker 4 1:08:39
I think that's worthy of a discussion in the booth. There are some areas where we can do that; we just need to understand the environment and the ecosystem.
Speaker 5 1:08:46
I already have a massive physical hardware stack to run VMs and containers within my organization. Can I leverage NativeEdge benefits without replacing my hardware?
Speaker 4 1:08:57
Yes — assuming VMs and containers like you describe. For VMs, when we connect to that vCenter environment, we can use blueprints to orchestrate and create VMs inside VMware as well. The same goes for containers, depending on the distribution: if it's a Kubernetes-based distribution, we can connect into those Kubernetes clusters, and blueprints can deploy on top of existing Kubernetes clusters as well.
Speaker 5 1:09:21
I'll ask a couple more, because the upvoting is winding down. What happens if an orchestrator is not available?
Speaker 4 1:09:30
Really, nothing. The big thing is, obviously, you lose observability until we get it back up and running. Any of the devices, and any VM running on a device, will continue to run; it's not going to interrupt any operations at those manufacturing locations. You just wouldn't be able to create or deploy new applications until the orchestrator is back up and running, but it's not going to do anything negative across your operations.
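The failure semantics described — running workloads keep running on the devices, while new deployments (and observability) pause until the orchestrator returns — can be sketched as a toy model. This is an illustration of the behavior as described, not product code; all names are hypothetical.

```python
# Toy model of the outage behavior described above: losing the orchestrator
# does not stop workloads already running on edge devices; it only blocks
# new deployments until it comes back. Names are illustrative only.

class Orchestrator:
    def __init__(self):
        self.available = True
        self.workloads: list[str] = []   # workloads running on edge devices

    def deploy(self, app: str) -> None:
        if not self.available:
            raise ConnectionError("orchestrator unreachable: cannot deploy")
        self.workloads.append(app)

    def running_workloads(self) -> list[str]:
        # Workloads execute on the devices themselves, so they survive
        # an orchestrator outage.
        return self.workloads

orch = Orchestrator()
orch.deploy("litmus-edge-vm")
orch.available = False               # orchestrator goes down
print(orch.running_workloads())      # ['litmus-edge-vm'] -- still running
```

The design point is that the orchestrator sits on the control plane, not the data path: its outage degrades management, not operations.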
Speaker 5 1:09:55
And related, from a security standpoint: what if the orchestrator gets compromised? Do you get access to endpoints?
Speaker 4 1:10:01
It's a good question. I guess yes — but I would also have questions about how it was able to get compromised through the layers of security within an enterprise itself, because, again, it can tie into LDAP and other areas, and it can also be air-gapped. So if there is a concern associated with that, we just don't put that orchestrator on a routable network, and it's perfectly fine.
Speaker 5 1:10:21
I guess we'll go with the last one: are edge-optimized HA clusters able to support HA failover without the need for a shared, additional SAN?
Speaker 4 1:10:30
Yeah, absolutely. That's where we're at today, right? That's what I did with that cluster — those 5200 gateways in the back, all they have are internal drives. We don't have a SAN; you can go over there and look. We're able to create a cluster without any issue. I don't want to say it's easier, but from a management perspective, I feel it allows you to manage the environment a little more simply. Bringing in a storage array is really there if you need to scale out the storage without scaling out the compute.
Speaker 5 1:10:55
Awesome. Well, we wrapped up very early, allowing you guys extra time to go out and see all the booths and talk to everyone. Let's give our title sponsor a round of applause.