Deploy AI Apps with Intel® OpenVINO™ and Red Hat

Listen on:

Apple Podcasts      Spotify      Google Podcasts      Amazon Music
What can artificial intelligence do for your business? Well, for starters, it can transform it into a smart, efficient, and constantly improving machine. The real question is: how? There are multiple ways organizations can improve their operations and bottom line by deploying AI apps. But the process is not always straightforward, and it requires skills and knowledge organizations often do not have.

Thankfully, companies like Red Hat and Intel® have worked hard to simplify AI development and make it more accessible to enterprises and developers.

In this podcast, we discuss: the growing importance of AI, what the journey of an AI application looks like—from development to deployment and beyond—and the technology partners and tools that make it all possible.

Our Guests: Red Hat and Intel®

Our guests this episode are Audrey Reznik, Senior Principal Software Engineer for the enterprise open-source software solution provider Red Hat, and Ryan Loney, Product Manager for OpenVINO™ Developer Tools at Intel.

Audrey is an experienced data scientist who has been in the software industry for almost 30 years. At Red Hat, she works on the OpenShift platform and focuses on helping companies deploy data science solutions in a hybrid cloud world.

Ryan has been at Intel for more than five years, where he works on open-source software and tools for deploying deep-learning inference.

Podcast Topics

Audrey and Ryan answer our questions about:

  • (2:24) The business benefits of AI and ML
  • (5:01) AI and ML use cases and adoption
  • (8:52) Challenges in deploying AI applications
  • (13:05) The recent release of OpenVINO 2022.1
  • (22:35) The AI app journey from development to deployment
  • (36:38) How to get started on your AI journey
  • (40:21) How OpenVINO can boost your AI efforts

Related Content

To learn more about AI and the latest OpenVINO release, read AI Developers Innovate with Intel® OpenVINO™ 2022.1. Keep up with the latest innovations from Intel and Red Hat by following them on Twitter at @IntelIoT and @RedHat, and on LinkedIn at Intel-Internet-of-Things and Red-Hat.

 

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about deploying AI apps with experts Audrey Reznik from Red Hat and Ryan Loney from Intel®.

Welcome to the show, guys.

Audrey Reznik: Thank you. It’s great to be here.

Ryan Loney: Thanks.

Christina Cardoza: So, before we get started, why don’t you both tell us a little bit about yourself and your background at your company. So Audrey, I’ll start with you.

Audrey Reznik: Oh, for sure. So I am a Senior Principal Software Engineer. I act in that capacity as the data scientist. I’ve been with Red Hat for close to a year and a half. Before that, I’m going to be dating myself here, I’ve spent close to 30 years in the software industry. So I’ve done front-end to back-end development. And just the last six years I’ve concentrated on data science. And one of the things that I work on with my team at Red Hat is the Red Hat OpenShift data science platform.

Christina Cardoza: Great. And Ryan?

Ryan Loney: Yep. Hi, I’m Ryan Loney. So I’m a Product Manager at Intel® for the OpenVINO™ toolkit. And I’ve been in this role since back in 2019 and been working in the space for—not as long as Audrey—so, less than a decade. But the OpenVINO toolkit: we are in open-source software and tools for deploying deep learning inference. So things like image classification, object detection, natural language processing—we optimize those workloads to run efficiently on Intel® hardware whether it’s at the edge or in the cloud, or at the edge and controlled by the cloud. And that’s what we do with OpenVINO.

Christina Cardoza: Great. Thanks, Ryan. And I should mention the IoT Chat and insight.tech program as a whole are published by Intel®, so it’s great to have someone with your background and knowledge joining us today. Here at insight.tech, we have seen AI adoption just rapidly increasing and disrupting almost every industry—if not every industry.

So Ryan, I would love to hear from your perspective why AI and machine learning is becoming such a vital tool, and what are the benefits businesses are looking to get out of it?

Ryan Loney: Yeah, so I think automation in general—everything today has some intelligence embedded into it. I mean, the customers that we’re working with, they’re also taking general purpose compute, you know like an Intel® Core™ processor, and then embedding it into an X-ray machine or an ATM machine, or using it for anomaly detection on a factory floor.

And AI is being sort of integrated into every industry, whether it’s industrial, healthcare, agriculture, retail—they’re all starting to leverage the software and the algorithms for improving efficiency, improving diagnosis in healthcare. And that’s something that is just—we’re at the very beginning of this era of using automation and intelligence in applications.

And so we’re seeing a lot of companies and partners of Intel® who are starting to leverage this to really assist humans in doing their jobs, right? So if we have a technician who’s analyzing an X-ray scan or an ultrasound, that’s something where with AI we can help improve the accuracy and early detection for things like pneumothorax.

And with factories, we have partners who are building batteries and PCBs, and they’re able to use cameras to just detect if there’s something wrong, flag it, and have somebody review it. And that’s starting to happen everywhere. And with speech and NLP, this is a new area for OpenVINO, where we’ve started to optimize these workloads for speech synthesis, natural language processing.

So if you think about, you know, going to an ATM machine and having it read your bank balance back to you out loud, that’s something that today is starting to leverage AI. And so it’s really being embedded in everything that we do.

Christina Cardoza: Now, you mentioned a couple of use cases across some of the industries that we’ve been seeing, but Audrey, since you have been in this space for—as you mentioned, a couple of decades now—I would love to hear from your perspective how you’re seeing AI and ML being deployed across these various use cases. And what the recent uptake in adoption has been.

Audrey Reznik: Okay, so that’s really an excellent question. First of all, when we’re looking at how AI and ML can be deployed across the industry, we kind of have to look at two scenarios.

Sometimes there’s a lot of data gravity involved where data cannot be moved off prem into the cloud. So we still see a lot of AI/ML deployed on premises. And, really, on premises there are a number of platforms that folks can use. They can create their own, but typically people are looking to a platform that will have MLOps capability.

So that means they’re looking for something that’s going to help them with the data engineering, the model development, training/testing the deployment, and then the monitoring of the model and the intelligent application that communicates with the model. Now that’s being on prem.

What people also do, they’ve taken advantage of the public cloud infrastructure that we have. So a lot of folks are also moving, if they don’t have data-gravity issues or security issues, because we do see—such as defense systems or even government—they prefer to have their data on prem. If there are no issues with that, they tend to move a lot of their MLOps creation and delivery/deployment to the cloud. So, again, they’re going to be looking for a cloud service platform that, again, is going to have MLOps available there so that they could go ahead and look at their data, curate their data. Then be able to go ahead and create models, train, test them, deploy them, and again, be able to have that capability once things are deployed to go ahead and monitor those models. Again, check for drift. If there are any issues with the models, be able to retrain those models.

In both instances, what people are really looking for is something easy to use. You don’t want to put together a number of applications and services piecemeal. I mean, it can be done, but at the end of the day we’re looking for ease of use. We really want a platform that’s easy to use for data scientists, data engineers, application developers, so that they can collaborate. And the collaboration then kind of drives some of the innovation and their ability, again, to deploy an intelligent application quickly.

And then, I should mention for everybody in IT, whether you’re on prem or in the cloud, IT has to be happy with your decision. So they have to be assured that the place that you’re working in is secure, that there’s some sort of AI governance driving your entire process. So those are on prem in the cloud, kind of the way that we’re seeing people go ahead and deploy AI/ML, and increasingly we’re seeing people use both.

So we’re having what we call a hybrid cloud situation, or hybrid platforms.

Christina Cardoza: I love all the capabilities you mentioned that people are looking for in the tools that they pick up, because AI can be such an intimidating field to get into. And, you know, it’s not as simple as just deploying an AI application or solution. There’s a lot of complexity that goes into it. And if you don’t choose the right tool or if you’re piecemealing it, like you mentioned, it can make things a lot more difficult than they need to be. So with that, Ryan, what are some of the challenges, the biggest challenges that businesses face when they’re looking to go on this AI journey and deploy AI applications in their industry and in their business?

Ryan Loney: I think Audrey brought up one of the biggest challenges, and that’s access to data. So, I mean, it’s really important. I think we should talk about it more, because when you’re thinking about training or creating a model for an intelligent application, you need a lot of data. And when you factor in HIPAA compliance and privacy laws and all of these other regulatory limitations, and of course, ethical choices that companies are making—they want to protect their customers’ privacy and they want to protect their customers. So having a secure enclave where you can get the data, train the data, you can’t necessarily send it to a public cloud, or if you do, you need to do it in a way that’s secure. And that’s something that Red Hat is offering. And that’s one of the things I’m really impressed with from Red Hat and from OpenShift, is this approach to hybrid cloud where you can have on prem, managed OpenShift. You can have—run it in a public cloud and really give the customer the ability to keep their data where they’re legally allowed to, or where they want to keep it for security and privacy concerns. And so that’s really important.

And when it comes to building these applications, training these models for deep learning, for AI, everything is really at the foundation built on top of open source tools. So we have deep learning frameworks like TensorFlow and PyTorch. We have toolkits that are provided by hardware vendors like Intel®. We have OpenVINO, OpenVINO toolkit, and there’s this need to use those tools in an environment that is safe for enterprise that has access rights and management. But at the core they’re open-source tools, and that’s what’s really impressive about what Red Hat is doing. They’re not trying to recreate something that already exists and works really well. They’re taking and adopting these open source tools, the open source Open Data Hub, and building on top of that and offering it to enterprises.

So they’re not reinventing the wheel. And I think that’s one of the challenges for many businesses that are trying to scale is they need to have this infrastructure, and they need to have a way to have auto-scaling, load-balancing infrastructure that can increase exponentially on demand when it needs to. And building out a Kubernetes environment yourself and setting it all up and maintaining that infrastructure—that’s overhead and requires DevOps engineers and IT teams. And so some of that’s really where I think Red Hat is coming into, in a really important space, to offer this managed service so that you can focus on getting the developers and the data scientists access to the tools that they would use on their own outside of the enterprise environment, and making it just as easy to use in the enterprise environment. And giving them the tools that they want, right? So they want to use the tools that are open source, that are the deep learning frameworks, and not reinventing the wheel. So I think that’s really a place where Red Hat is adding value. And I think there’s going to be a lot of growth in this space, because our customers that are deploying at scale and including devices at the edge, they’re using container orchestration, right? These orchestration platforms, you need it to manage your resources, and having a control plane in the cloud and then having nodes at the edge that you’re managing—that’s the direction that a lot of our customers are moving. And I think that’s the future.

Christina Cardoza: Great. And while we’re on the topic of tools, you’ve mentioned OpenVINO a couple of times, which is Intel®’s AI toolkit. And I know you guys recently just had one of the biggest launches since OpenVINO was first started. So can you talk a little bit about some of the changes or thought process that went into the OpenVINO 2022.1 release? And what new capabilities you guys added to really help businesses and developers take advantage of all the AI capabilities and opportunities out there.

Ryan Loney: Yeah. So this was definitely the most substantial set of feature enhancements and improvements that we’ve made in OpenVINO since we started in 2018.

It’s really driven by customer needs. And so some of the key things for OpenVINO are that we have hardware plugins, so we call them device plugins, for our CPU or GPU and other accelerators that Intel® provides. And at Intel®, we’ve recently launched our discrete graphics. We’ve had integrated graphics for a long time, so, GPUs that you can use for doing deep learning inference, that you can run AI workloads on. And so some of the features that are really important to our customers that are starting to explore using these new graphics cards—which we’ve launched some of the client discrete graphics in laptops, and later this year we’re going to be releasing the data center and edge server SKUs for discrete graphics—the customers need to do things like automatic batching. So when you have a GPU card deciding the correct batch size for the input for a specific model, it’s going to be a different number depending on the model and depending on the compute resources available.

So some of our GPUs have different numbers of execution units and different power ratings. So there’s different factors that would make each GPU card slightly different. And so instead of asking the developer to go out and try batch size 32 and batch size 16 and batch size 8 and try to find out what works best for their model, we’re automating some of that so that they don’t have to, and they can just automatically let OpenVINO determine the batch size for them.
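
For illustration, a minimal sketch of the automatic batching hint Ryan describes, using the OpenVINO Python API; the model file name is a hypothetical placeholder, and OpenVINO 2022.1 or later with a supported GPU is assumed:

    # Let the runtime pick batch size and streams instead of hand-tuning them.
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # hypothetical IR model

    # The THROUGHPUT hint tells OpenVINO to configure batching and streams
    # appropriate for this device; no manual batch-size search is needed.
    compiled = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})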

And on a similar note, since we’ve started to expand to natural language processing, if you think about question answering, so if you had asked a chat bot a question like, what is my bank balance? And then you ask it a second question like, how do I open an account? Both of those questions have different sizes, right? The number of letters and number of words in the sentence—it’s a different input size. And so we have a new feature called dynamic shapes, and that’s something we introduced on our CPU plugin. So if you have a model like a BERT natural language processing model, and you have different questions coming into that model of different sizes, of different sequences, OpenVINO can handle under the hood, automatically adjusting the input. And so that’s something that’s really useful, because without that feature you have to add padding to every question to make it a fixed sequence link, and that adds overhead and it wastes resources. So that’s one feature that we’ve added to OpenVINO.
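
Similarly, a rough sketch of the dynamic shapes feature Ryan mentions: the sequence length of a BERT-style input is marked as dynamic so questions of different lengths can be fed without padding. The input name and bounds here are hypothetical.

    from openvino.runtime import Core, Dimension, PartialShape

    core = Core()
    model = core.read_model("bert.xml")  # hypothetical BERT IR model

    # Keep batch fixed at 1, let the sequence length vary between 1 and 512 tokens.
    # A real BERT model would reshape its other inputs (attention mask, etc.) the same way.
    model.reshape({"input_ids": PartialShape([1, Dimension(1, 512)])})

    compiled = core.compile_model(model, "CPU")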

And just one additional thing I’ll mention is OpenVINO is implemented in C++ at its core. So our runtime, we have it written in C++. We have Python bindings for Python API. We have a model server for serving the models in environments like OpenShift where you want to expose a network endpoint, but that core C++ API, we’ve worked really hard to simplify it in this release so that if you take a look at our Python code, it’s really easy to read Python. And that’s why a lot of developers, data scientists, the AI community really like Python because the human readability is much better than C++ for many cases. So we’ve tried to simplify the C++ API, make it as similar as possible to Python so that developers who are moving from Python to C++—it’s very similar. It’s very easy to get that additional boost by using C++.
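
For reference, the simplified end-to-end Python flow Ryan refers to looks roughly like this; the model path and the random input are stand-ins.

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    compiled = core.compile_model(core.read_model("model.xml"), "CPU")

    data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder image batch
    request = compiled.create_infer_request()
    request.infer({0: data})                    # feed the model's first input
    output = request.get_output_tensor(0).data  # NumPy view of the result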

So those are some of the things that we changed in the 2022.1. There are several more, like adding new models, support for new operations, really expanding the number of models that we can run on Intel® GPU. And so it’s a really big release for us.

Christina Cardoza: Yeah. It sounds like a lot of work went into making AI more accessible and easier entry for developers and these businesses to start utilizing everything that it offers. And Audrey, I know when deploying intelligent applications with OpenShift, you guys also offer support with OpenVINO. So I would love to hear what your experience has been using OpenVINO and how you’re gaining more benefits from the new release. What were some of the challenges you faced before OpenVINO 2022.1 came out, and what are you guys experiencing now on the platform?

Audrey Reznik: Right. So, again, very good question. And I’m just going to lead off from where Ryan left on expanding on the virtues of OpenVINO.

First of all, you have to realize that before OpenVINO came along, a lot of the processing would have been done on hardware. So clients would have used GPU, which can be expensive. And a lot of the times when somebody is using a GPU, not all of the resources are used. And that’s just kind of a, I don’t want to say waste, but it is a waste of resources that you could probably use those resources for something else, or even have different people using that same GPU.

With the advent of OpenVINO that kind of changed the paradigm in terms of, how I can go and optimize my model or how I can do quantization.

So let’s go ahead with optimization first. Why use a GPU if you can go ahead and, say, process some video and look at that video and say, you know what? I don’t need all the different frames within this video to get an idea of what my model may be looking at. Maybe my model may be looking at a pipe in the field and we’re just, from that edge device, we’re just checking to make sure that that nothing is wrong with that pipe. It’s not broken. It’s not cracked. It’s in good shape. You don’t need to use all of those frames that you’re taking within an hour. So why not just reduce some of those frames without impacting the ability of your model to perform. That optimization feature was huge.
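
A minimal sketch of the frame-reduction idea Audrey describes, assuming OpenCV, a hypothetical video source, and a hypothetical inference function:

    import cv2

    SKIP = 30  # e.g., inspect one frame per second of a 30 FPS stream
    cap = cv2.VideoCapture("pipeline_camera.mp4")  # hypothetical edge video feed
    frame_idx = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % SKIP == 0:
            run_inference(frame)  # hypothetical call into the deployed model
        frame_idx += 1

    cap.release()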

Besides that, with OpenVINO, as Ryan alluded to, you can just go ahead and add just a couple little snippets of code to get this benefit. That’s not having to go through the trouble of setting up a GPU. So that’s like a very quick and easy way to optimize something so that you can take the benefit of OpenVINO and not use the hardware.

The other thing is quantization. Within machine learning models, you may use a lot of numerics in your calculations. So I’m going to take the most famous number that most people know about, which is pi. It’s not really 3.14; it’s 3.14 and six or seven digits beyond that. Well, what if you don’t need that precision all the way? What if you can just be happy with just the one value that most people equate with pi, which is 3.14. In that respect, you’re also gaining a lot of benefit for your model in terms of you’re still getting the same results, but you don’t have to worry about cranking out so many digit points as you go along. And, again, for customers this is huge because, again, we’re just adding just a couple lines of code in order to use the optimization and quantization with OpenVINO. That’s so much easier than having to hook up to a GPU. I’m not saying—nothing bad about GPUs, but for some customers it’s easier. And, again, for some customers it’s also cheaper. And some people really do need to save some of that money in order to be more efficient with the funds that they could divert elsewhere in their business. So, if they don’t have to get a GPU, it’s a nice, easy way to kind of save on that hardware expense, but really get the same benefits.
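
To put numbers on the precision idea Audrey describes (this illustrates the concept only, not the OpenVINO quantization tooling itself): storing values as 8-bit integers plus a scale keeps them close to the 32-bit originals while using a quarter of the memory.

    import numpy as np

    weights = np.array([3.14159265, -0.7182818, 1.4142135], dtype=np.float32)

    scale = np.abs(weights).max() / 127.0          # map the float range onto int8
    q = np.round(weights / scale).astype(np.int8)  # compact 8-bit representation
    restored = q.astype(np.float32) * scale        # values the model works with again

    print(q)         # e.g. [127 -29  57]
    print(restored)  # close to the originals, with a small rounding error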

Christina Cardoza: Now we’ve talked a lot about the tools and the capabilities that we can leverage in this AI-deployment journey. But I would love to give our listeners a full picture of what an AI journey really entails from end-to-end, start-to-finish. So Audrey, would you be able to walk us through that journey a little bit from development to deployment, and even beyond deployment?

Audrey Reznik: Yeah, I can definitely do that. And what I will do is I will share some slides. For those that are just listening through their radio, I’ll make sure that my description is good enough for you so that you won’t get lost. So what I’m going to be sharing with you is the Red Hat OpenShift data science platform. This is a cloud service that is available on AWS. And of course this can have hybrid components, but I’m just going to focus on the cloud services aspect. And this is a managed service offering that we have to our customers. And we’re mainly targeting our data scientists, data engineers, machine learning engineers, and of course our IT folks so that they don’t have to manage their infrastructure. So, what we want to look at in the journey, especially for MLOps, is there are a couple of things that are very important or steps.

We want to gather and prepare the data. We want to go ahead and develop the model. We want to integrate the models in application development. We want to do model monitoring and management. And we have to have some way of going ahead and retraining these models. These are four or five very important steps. And at Red Hat, and again as Ryan talked about earlier, we don’t want to reinvent everything. We want to be able to use some of the services and applications that companies have already created. And a lot of open source companies have created some really fantastic applications and pieces of software that will fit each step of this MLOps journey or model life cycle.

So before I go into taking a look at all the different steps of the model life cycle, I’m just going to build up this infrastructure for you to take a look at. So really this managed cloud services platform, first of all, sits on AWS, and within AWS Red Hat OpenShift has two offerings: We have Red Hat OpenShift Dedicated, or some may be familiar with Red Hat OpenShift Service on Amazon Web Services, which we affectionately call ROSA.

Now, even though we have these platforms, we want to take care of any hardware acceleration that we may want. So we want to be able to include GPUs, and we have a partnership with Nvidia where we use Nvidia GPUs for hardware acceleration. We also have Intel®. Intel® not only helps with that hardware aspect, but, again, we’ll point out where OpenVINO comes in a little bit later.

Over top of this basic infrastructure, we have what we call our Red Hat managed cloud services. These are going to help to take any machine learning model that’s being built all the way, again, from gathering and preparing data—where you could use something such as our streaming services for time series data—to developing a model where we have the OpenShift data service application or platform, and then to be able to deploy that model using source-to-image, and then model monitoring and management with Red Hat OpenShift API management.

Again, as I mentioned, we didn’t want to go ahead and create everything from scratch. So what we did is for each part of the model life cycle we invited various independent software vendors to come in and join this platform. So if you wanted to gather/prepare data, you could use Starburst Galaxy. Or if you didn’t want to use that, you could go back to the Red Hat offering. If you wanted to develop the model, you could use Red Hat OpenShift data science, or you could use Anaconda, which comes with prebaked models and an environment where you can go ahead and develop and train your model and so forth.

But what we also did was add in a number of customer-managed software. And this is where OpenVINO comes in. So what we have with this independent software is, again, we can go ahead and develop our model, but this time we may use Intel®’s oneAPI AI analytics toolkit. And if we wanted to, again, integrate the models in app development, we may go ahead and use something like OpenVINO, as well as we could also use something like IBM Watson.

The idea though is at the end of the day, we go ahead and we invite all these open source products into our platform so that people have choice. And what’s really important about the choice is they can pick which solution works better for them to solve the particular problem that they’re working on.

And, again, with that choice, they may see something that they haven’t used before that may actually help them innovate better, or actually make their product a lot better.

So by having this type of platform where you can go ahead and do everything that you need to ingest your data, develop, and train, and deploy your model, to bring your application engineers in to create the front-end and the REST API services for an intelligent application. And then being able to go ahead and deploy your model, and then being able to retrain it when you need it is something that makes the whole process of the MLOps a lot easier. This way you have everything, and within one consecutive platform you’re not going ahead and trying to fit things together and, I think I mentioned before, piecemeal solutions together. And at the end of the day you do have a product then that everyone on your team can use to collaborate and push something out into production a lot easier than they may have been able to do in the past.

Christina Cardoza: That’s great. Looking at this entire AI journey and the life cycle of an AI intelligent application, Ryan, I’m wondering if you can talk a little bit more about how OpenVINO works with OpenShift, and where in this journey does it come in?

Ryan Loney: Yeah. So if I could go ahead, and I’ll share my screen now and just show you what it looks like. So, if we take a look at—and for those who can’t see the screen, I’ll try my best to describe—so I’m logged into OpenShift console and this is an OpenShift cluster that’s hosted on AWS. And you can see that I’ve got the OpenVINO toolkit operator installed. And so OpenShift provides this great operator framework for us to just directly integrate OpenVINO and make it accessible through this graphical interface.

So I’ll start maybe from the deployment part at the end here, and work backwards. But Audrey mentioned deploying the models and integrating with applications. So once I have this OpenVINO operator installed, I can just create what’s called a model server. And so this is going to take my model or models that my data scientists have trained and optimized with OpenVINO and give an API endpoint that you can connect to from your applications in OpenShift.

So, again, the great thing about this is the ability to just have a graphical interface, so when I create a new instance of this model server, I can just type in the box and give it a name to describe what it’s doing. So maybe this is a product classification for retail. So maybe I’d say product classifier, and give it a name. And then it’s going to pull the image that we publish to Red Hat’s registry that has all of our dependencies for OpenVINO to run, with the Intel® software libraries baked into the image. And then if I want to do some customization, like changing where I’m pulling my model from, or do a multimodel deployment versus single, I can do that through this drop-down menu.

And the way that this deployment works is we use what’s called a model repository. So, once the data scientists and the developer have the model ready to deploy, they can just drop it into a storage bucket, into a persistent volume in OpenShift, or pretty much any S3-compatible storage or Google Cloud storage bucket—you can just create this repository. And then every time an instance or a pod is created, it can quickly pull the model down so you can scale this up. And so basically once I click “create,” that will immediately create an instance that’s serving my model that I can scale up with something like a service mesh, using the service mesh operator, and put this into production.
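
For context, the model repository Ryan describes is just a versioned directory layout the server pulls from, and applications call the resulting endpoint over REST. In the sketch below, the service name, port, model name, and input shape are hypothetical placeholders; OpenVINO Model Server exposes a TensorFlow-Serving-compatible predict API.

    # Hypothetical repository layout the model server points at:
    #   models/product-classifier/1/model.xml
    #   models/product-classifier/1/model.bin
    import requests

    payload = {"instances": [[[0.0] * 224] * 224]}  # placeholder input tensor
    resp = requests.post(
        "http://product-classifier:8080/v1/models/product-classifier:predict",
        json=payload,
        timeout=10,
    )
    print(resp.json())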

I’ll go backwards now. So we talked a little bit about optimizations. We also have a Jupyter notebook integration, so if you want to have some ready-to-run tutorials that show, how do I quantize my model? How do I optimize it with OpenVINO? You can do that directly in the Red Hat OpenShift data science environment, which is another operator that’s available through Red Hat. It’s a managed service, and I’ve actually already set this up and this is sort of what it looks like. And I’ll just show you the Jupyter interface. So if I wanted to learn how to quantize a model, which Audrey described, reducing the precision from FP32 to integer 8, there are some tutorials that I can run. And I will just show the output of this Jupyter notebook. It does some quantization-aware training—it takes a few minutes to run. And you can see that the throughput goes from about 1,000 frames per second to 2,200 frames per second without any significant accuracy loss. So very minimal change in accuracy, and that’s one way to compress the model, boost the performance, and there are several tutorials that show how to use OpenVINO and generate these models. And then once you have them, you can deploy them directly, like I was showing, through the OpenShift console and create an instance to serve those models in production.

That’s what’s really great about this, is if you want to just open up a notebook, we give you the tutorials to teach you how to use the tools. And at a high level with OpenVINO, when we talk about optimization, we’re talking about reducing binary size, reducing memory footprint, and reducing resource consumption. OpenVINO was originally focused on just the IoT space, on the edge. But we’ve noticed that people care about resource consumption in the cloud just as much if not more, when you think about how much they’re spending on their cloud bill. Well, if I can apply some code to optimize my model, I can go from processing 1,000 frames per second to 2,200. If you think about processing video, like Audrey said, 30 FPS or 15 FPS is standard video. Being able to get this sort of boost for free, right? You don’t have to spend more money on expensive hardware. You can process more frames per second, more video streams at the same time, and you can unlock this by using our tools.
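
A rough sketch of how a before-and-after throughput comparison like the one Ryan shows could be reproduced outside the notebook, assuming hypothetical FP32 and INT8 model files; results will of course vary by hardware.

    import time
    import numpy as np
    from openvino.runtime import Core

    def frames_per_second(model_path, seconds=10):
        core = Core()
        compiled = core.compile_model(core.read_model(model_path), "CPU")
        request = compiled.create_infer_request()
        data = np.random.rand(1, 3, 224, 224).astype(np.float32)
        done, start = 0, time.time()
        while time.time() - start < seconds:
            request.infer({0: data})
            done += 1
        return done / seconds

    print("FP32:", frames_per_second("model_fp32.xml"))  # hypothetical original model
    print("INT8:", frames_per_second("model_int8.xml"))  # hypothetical quantized model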

OpenVINO also—even if you don’t want to quantize the model because you want to make sure you maintain the accuracy—you can also use our tools to change from FP32 to FP16, which is floating point 32 to floating point 16, that reduces the model size and the memory consumption, but it doesn’t impact the accuracy. And even if you just perform that step or you don’t perform quantization, we are doing some things under the hood, like operation fusion, convolutions fusion—these are all things that give you performance boost, reduce the latency, increase the throughput, but they don’t impact accuracy. And so those were some of the reasons why our customers are using OpenVINO, to squeeze out a little bit more performance and also reduce the resource consumption compared to if you just tried to deploy with the deep learning.

Christina Cardoza: Great. Listening to you guys and seeing the tools in action and the entire life cycle, it’s very clear that there is a lot that goes into deploying an AI application. And luckily, the work that Intel® and Red Hat have been doing has sort of eased the burden for businesses and developers. But, I can imagine if you’re just getting started, you’re probably trying to wrap your head around all of this and understand how you approach AI in your company, how you start an AI effort. So Audrey, I’m wondering, where is the best place to get started? How do you be successful on this AI journey?

Audrey Reznik: It’s funny that you should mention that. One of my colleagues wrote an article arguing that the best data science environment to work on isn’t your laptop. And he was alluding to the fact that when you first start out, going ahead and creating some sort of model that will fit into an intelligent app, usually what data scientists will do is they’ll put everything on their laptop. Well, why do they do that? Well, first of all, it’s very easy to access. They can load whatever they want to. They can be able to efficiently go in and know that their environment isn’t going to change because they’ve set it up, and they may have all their data connections added. That’s really wonderful for maybe development, but they’re not looking towards the future where, how do I scale something that’s on my laptop? How do I share something that’s on my laptop? How do I upgrade? This is where you want to move to some sort of platform, whether it’s on prem or in the cloud, that’s going to allow you the ability to kind of duplicate your laptop. Okay, so Ryan was able to show that he had an OpenVINO image that had the OpenVINO libraries that were needed. It’s within Python, so it had the appropriate Python libraries and packages. He was able to create something—an ephemeral IDE that he was able to use. What he didn’t point out within that one environment was that he’d be able to use the GitHub repo very easily, so that he could check in his code and share his code.

When you have something that is a base environment that everybody’s using, it’s very easy then to take that environment and upgrade it, increase the memory, increase the CPU resources that are being used, add another managed service in. You have something that’s reproducible, and that’s key, because what you want to be able to do is take whatever you’ve created and then be able to go to deploy it successfully.

So if you’re going to start your AI journey, please go ahead and try to find a platform. I don’t care what kind it is. I know Red Hat and Intel® will kill me for saying that, but find a platform that will allow you to do some MLOps. So something that will be able to allow you to explore your data. Develop, train, and deploy your model. Be able to work with your application engineers where they could go ahead and write a UI or REST end points that could connect to your model, and something that will help you deploy your model where you can monitor, manage it for drift, or even to see if your model’s working exactly how it’s supposed to work. And then the ability to retrain. You want to be able to do all those steps very easily. I’m not going to get into GitOps pipelines and OpenShift pipelines at this point, but there has to be a way that, from the beginning to the deployment, it’s all done effortlessly and you’re not trying to use chewing gum and duct tape to put things together in order to deploy it to production.

Christina Cardoza: That’s a great point. And Ryan, I’m curious, once you get started on your AI journey, you have some proven projects behind you, how can you use OpenVINO, and how does Intel® by extension help you boost your efforts and continue down a path of a future with AI in your business and operations?

Ryan Loney: Yeah. So I’d say a good first step would be to go to openvino.ai. We have a lot of information about OpenVINO, how it’s being used by our ISVs, our partners, and customers. And then docs.openvino.ai and the “get started” section. We have a lot of information about, I know Audrey said not to do it on your laptop, but if you want to learn the OpenVINO piece, you can at least get started on your laptop and run some of our Jupyter notebooks, the same Jupyter notebooks that I was showing on the OpenShift environment. You can run those on your Red Hat Linux laptop, or your Windows laptop, and start learning about the tools and start learning about the OpenVINO piece.

But if you want to connect everything together, in the future we’re going to have—I believe we’ll have a sandbox environment that Red Hat will be providing where we can—you can log in and replicate what I was just showing on the screen.

But really to get started and learn, I would check out openvino.ai, check out docs.openvino.ai and get started. And you can start learning if you have an Intel® CPU and Linux, Windows, or Mac, and start learning about our tools.

Christina Cardoza: Great. Well, this has been a great conversation and I’m sure we could go on for another hour talking about this, but unfortunately we’re out of time. So I just want to take the time to thank you both for joining us today and coming on the podcast.

Ryan Loney: Thank you.

Audrey Reznik: Yeah, thank you for having us.

Christina Cardoza: And thanks to our listeners for joining us today. If you enjoyed the podcast, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Hyperconverged IT Infrastructure: A Game-Winning Play

Going to see a baseball game live is an experience. There is nothing like the smell of the hot dogs or having a food vendor toss you a bag of salted peanuts. Then there is the energy and the buzz of the crowd with every positive or negative play. Whether your team wins or loses, nothing beats the thrill of the game.

But that camaraderie and excitement is contingent on the stadium experience. Fans don’t want to waste time in long lines. They don’t want to have to track down those food vendors in the middle of the game. It is not just the team that keeps fans coming back. It’s the entire outing—from the ticket booths to the concession stands to the digital scoreboards and even the merchandise vendors.

But this all depends on the ability to operate as smoothly and effectively as possible, which means innovating from stadium edge to data center.

It is easier said than done. Most baseball organizations still run outdated technology, have legacy equipment, and deal with multiple vendors. And they just don’t have the staff or resources to support a major transformation. The technology stack can become very complex very quickly.

For instance, one major professional baseball franchise was looking into how it could improve its stadium experience and operations on the fly. To do so, it had to find innovative ways to update its IT platforms. It found real-time data would be crucial to identify bottlenecks and new opportunities, but its infrastructure limited the ability to be agile and obtain that valuable information.

“At the end of the day, they want to make sure when people come into the stadium, they provide a top-notch user experience. And they can only do that by using real-time data and analytics to understand what the situation of the crowd is in the stadium,” says Rupesh Chakkingal, Product Management, Cloud and Compute, Cisco Systems, a networking, cloud, and cybersecurity solution provider.

IT Performance That Doesn’t Strike Out

The baseball franchise wanted to measure ingress and egress counts, wait times, and location-based engagement as it happened. For instance, it wanted to detect how many fans entered the stadium at a given time, how long it took to obtain their tickets, areas of congestion within the stadium, and at what points in the game fans started to leave.

Having access to this information would allow the organization to detect if gates became too crowded and immediately minimize wait times by deploying additional security or ticketing personnel. If concession stand lines become too long, the stadium might send out portable merchandise carts to reduce wait times.


Beyond the stadium, the organization wanted to provide play-by-plays as they happened to fans watching at home or on the go.

To achieve the cloud-like agility, performance, and security it needed, the baseball franchise worked with Cisco to deploy the hyperconverged infrastructure solution HyperFlex.

According to Chakkingal, Cisco HyperFlex enabled the IT team to break down storage, compute, and data management silos by converging them onto a cluster of x86 servers. Traditionally, it would have to test certain new capabilities and wait to measure the impact. With Cisco HyperFlex, the team can analyze efforts in real time to make better-informed decisions.

All the stadium’s digital systems now run on Cisco HyperFlex with Intel® Xeon® Scalable processors and Intel® Optane technology. This allows the organization to achieve the best performance with fewer nodes and less storage. The IT team went from managing 30 legacy nodes individually to eight nodes collectively. It was also able to reduce the number of physical data racks from two to only half a rack, and went from 30 hypervisor hosts to only six. All of this is managed through a simple, easy-to-use management interface.

“What used to take 12 to 18 hours to query now takes minutes,” says Chakkingal. “In fact, the results of the stadium’s queries started to come back so fast, the IT team thought something was awry with their system or queries weren’t working.”

A Home Run for Hybrid Cloud Infrastructure

The baseball franchise success story is just one example of the real business benefits Cisco HyperFlex offers. The technology is helping customers in a wide variety of markets such as financial services, manufacturing, retail, healthcare, and the public sector.

As big data, artificial intelligence, and cloud computing become core to digital business success, Chakkingal sees more and more organizations starting to redesign their current infrastructure around hybrid cloud. It provides the low latency, network integration, and high-performance computing at the scale organizations are searching for.

Chakkingal says a successful hybrid cloud strategy has three essential components:

  • Hyperconverged infrastructure to provide a cloud-like operating experience and integration with ecosystem partner services
  • Workload engine to deliver virtual machines and containers as a service with full stack observability
  • Workload management and optimization both on-prem and off-prem through a single pane of glass

“Our secret ingredient to help customers successfully adopt a hybrid cloud strategy goes well beyond Cisco HyperFlex,” he says. “We provide the entire lifecycle management capabilities for customers.”

Cisco understands customers typically work with multiple other technologies. To ensure interoperability, the company offers Validated Design Guides that provide a reference architecture on how to get started and ensure multiple third-party components work together.

Going forward, Cisco plans to offer customers even more choice and flexibility when it comes to adopting Cisco HyperFlex. The company already offers easy-to-deploy and easy-to-manage edge node configurations for quicker data response times. And it is working on a software-only version of HyperFlex on third-party vendor platforms that will enable additional edge use cases.

“There’s a growth of IoT and smart devices, which will demand a lot of processing closer to the endpoint,” says Chakkingal. “We’re providing our customers with a better user experience and faster access to the data they need now and into the future.”

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

The Future of Retail? Supply Chain Visibility

When you’re in retail, the supply chain can make or break your business. While waiting on shipments can hinder sales, it’s even more frustrating when you can’t locate what’s already in stock. Not only does this represent lost revenue; it wastes human resources tracking misplaced items and negatively impacts the customer experience.

Fortunately, the fix is straightforward. Retailers can implement edge-to-cloud technology that provides visibility across the product lifecycle, streamlining operations and improving customer-focused strategies. One solution is ytem from Mojix, a leader in item-level intelligence solutions—connecting a system of sensors to a cloud-based computing platform.

The company has deep domain expertise in technologies such as RFID, NFC, and print-based marking systems. Mojix builds business intelligence from event-triggered actions tracking billions of unique identities, following item lifecycles from source to shelf.

Edge-to-cloud technology can transform supply chain and inventory management. Retailers can know exactly how much stock they have and where it is, for greater confidence. “You have what is called a unified inventory,” says Helene de Lailhacar, Marketing Director for Mojix. “You can therefore engage into eCommerce with more freedom, efficiency, and accuracy.”


Retail Analytics Automates Supply Chain Management

The results can be dramatic. For example, a leading athletic gear retailer implemented the Mojix ytem SaaS platform to improve its inventory accuracy. Before deploying the system, the company contracted an external supplier to manually count items using a barcode scanner. It took 10 people at least eight hours per store. The cumbersome and costly process was completed three times a year, and resulted in an accuracy of only 75%.

After installing ytem, the company needed just two people to perform stock counts, reducing the time required from eight hours to just two hours, while increasing the counting speed from 8.3 to 125 items per minute. Because each item has a unique code, the risk of human error, such as scanning something twice, was eliminated. Inventory accuracy reached 99% and productivity grew by an astonishing 2,000%. Plus, the company was able to reduce safety stock while increasing revenue by 10%.

“When it says on a retailer’s website they have zero items left, technically they still have three or four elements in stock,” says de Lailhacar. “Retailers will not go below this because they can risk being out of inventory due to time lapses in information. That would be horrible for brand image and represent loss of business. Once accuracy is improved, they can reduce that safety stock.”

Gathering Retail Data with Edge Technology

To create ytem for retail, Mojix partnered with Zebra, a manufacturer of data collection equipment such as RFID and barcode scanners (Video 1). “It’s a marriage made in heaven because we need their data capture to get the information, and they need a SaaS platform to make their data capture useful,” says de Lailhacar.

Video 1. Edge technology and analytics help retailers track inventory movement through their business operations for better accuracy and visibility. (Source: Mojix)

The system aggregates items and contextual data from sensors. It also collects data from warehouse management and ERP systems. The information is processed on the cloud-based SaaS platform, which is available to view via an app. The technical architecture is powered by Intel® processors, which speed up the transfer—key to real-time information.

“Processing speed can be very important in stores that have a lot of items and need a lot of people informed at all times of the state of the inventory, or the location of the items,” says de Lailhacar.

In addition to accuracy and transparency, the benefits of edge intelligence also include brand protection through traceability.

“We make the item smart because we create an identity for each individual product,” says de Lailhacar. “We track the left shoe and the right shoe and can even track all the way to the leather, ensuring a pair of shoes comes from the same piece. That’s very important for luxury brands because the leather’s dye baths may result in slightly different colors.”

When you know your item individually, you can also protect sales. “For example, you can certify and authenticate it if you’re a luxury brand,” says de Lailhacar. “And you can fight the gray market because you know if someone’s taking advantage of a difference in prices on the global market. You know where that item is supposed to be sold.”

The Future of Supply Chain Management

As the marketplace evolves and more regulations are put in place, greater inventory transparency will be key for retailers to stay relevant and thrive.

“Right now, retailers are asking for transparency in the movements of their goods and transparency from their suppliers as to as where they had their products made, for child labor and other issues,” says de Lailhacar. “Laws are soon coming out in Europe that are going to make any brand accountable for their suppliers’ production methods.”

Transparency will also be increasingly important for booming secondhand markets. “Companies that are not making their brands available to the secondhand market could be missing out,” says de Lailhacar. “It’s a strategic stance. You can either buy up all of the articles that you find on the secondhand market to protect your brand, or you can decide that you will control the secondhand market. The only way to do it correctly is to be able to authenticate that those products are yours and control it.”

In a global market with so many moving parts, having a traceable, unified inventory is key to succeed in the future of retail.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Top Tools and Frameworks for AI Developers

AI is disrupting every industry. From enabling data-driven automation that powers sustainable smart factories to catching production errors on the manufacturing floor, using robots for e-commerce fulfillment, and even battling wildfires—AI can be found almost everywhere.

Because of this, more and more developers are interested in pursuing or advancing their AI careers. But it can be a scary field to jump into, and many don’t know where to start. While countless resources are available out there, we’ve put together a list of the top AI frameworks and tools to provide developers with the building blocks they need to get started with AI development.

Caffe: Born out of Berkeley AI Research, Caffe is a deep-learning framework designed with a focus on speed and modularity. It is built with an expressive architecture that allows developers to switch between CPU and GPU with a single flag. Its extensible code is meant to foster active development. And its speed is perfect for research experiments or industrial applications that need to process millions of images a day. The project also provides developers with tutorials, installation instructions, and step-by-step examples to get started.
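
As a quick illustration of that single-flag device switch, the Python interface (pycaffe) looks roughly like this, assuming Caffe is installed with GPU support; the model files named here are hypothetical.

    import caffe

    caffe.set_mode_gpu()    # run on the GPU...
    # caffe.set_mode_cpu()  # ...or flip to CPU with one call

    # Load a hypothetical network definition and trained weights for inference.
    net = caffe.Net("deploy.prototxt", "weights.caffemodel", caffe.TEST)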

Keras: This popular AI framework is a neural network library written in Python. Keras prides itself on making it simple, flexible, and powerful for developers to experiment with machine learning. It reduces cognitive load, minimizes user actions, and clearly indicates error messages during development. You can take advantage of the project’s extensive documentation and developer guides to get started.
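
A minimal sketch of that simplicity, assuming TensorFlow with its bundled Keras API: a small classifier defined and compiled in a few lines.

    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()  # prints the layer-by-layer structure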

MXNet: Currently an Apache Software Foundation incubating project, MXNet is a deep-learning framework well suited for AI research, prototyping, and production. It includes a hybrid front-end that allows developers to mix symbolic and imperative programming to maximize efficiency and productivity. Other features and capabilities are scalable distributed training, support for eight language bindings, and an ecosystem of tools and libraries to extend MXNet use cases.

ONNX: As major technology companies work to make AI more accessible, ONNX makes sure developers can easily interoperate within the AI framework ecosystem. More than a framework, it is an open standard for machine learning interoperability. Developers can work in their preferred framework and inference engine, and ONNX aims to eliminate compatibility issues downstream.
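
One common way this interoperability plays out in practice is exporting a model from one framework to the ONNX format and then running it anywhere an ONNX-capable engine exists; the sketch below uses PyTorch's built-in exporter with a stand-in model.

    import torch

    model = torch.nn.Linear(10, 2)   # stand-in for a trained model
    dummy = torch.randn(1, 10)       # example input that defines the graph

    # Writes a framework-neutral model.onnx that other runtimes can load.
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["output"])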


PaddlePaddle: This open-source, deep-learning platform is committed to providing rich AI features for industrial use cases. It is widely adopted in manufacturing, agriculture, and enterprise applications. The platform features support for declarative and imperative programming, large-scale training, multi-terminal and multi-platform deployment, rich algorithms, and pre-training models.

PyTorch: This deep-learning research platform aims to speed up the time it takes to go from prototyping to production. The project provides two high-level features: tensor computation and deep neural networks. It was developed to be deeply integrated into Python. Developers can use it similarly to other popular Python packages such as NumPy, SciPy, and scikit-learn. The framework requires minimal overhead to get started and integrates with acceleration libraries like Intel® oneMKL to maximize speed.
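
A tiny example of that NumPy-like feel, with automatic differentiation on top:

    import torch

    x = torch.randn(3, 3, requires_grad=True)  # behaves much like a NumPy array
    y = (x ** 2).sum()
    y.backward()         # gradients computed automatically
    print(x.grad)        # dy/dx = 2x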

OpenCV: The community around this open-source computer vision library aims to make AI easy and fun to work with. The project itself provides more than 2,500 computer vision and machine learning algorithms for developers to get started. The OpenCV team also offers a number of tutorials, courses, and events designed to engage and collaborate with the AI community. Check out its latest AI trivia game show sponsored by Intel® OpenVINO.
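
A minimal sketch of OpenCV in action, assuming the opencv-python package and a hypothetical image file:

    import cv2

    img = cv2.imread("photo.jpg")                  # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # convert to grayscale
    edges = cv2.Canny(gray, 100, 200)              # classic edge detection
    cv2.imwrite("edges.jpg", edges)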

OpenVINO: The Intel® OpenVINO Toolkit is designed for optimizing and deploying AI inference. The company just launched OpenVINO 2022.1—the largest update since the toolkit was first launched. Packed with new features, it’s designed to make AI developers’ lives easier. Key features include expanded natural language processing support, device portability, and better inference performance. Developers can get started quickly with pretrained models from Open Model Zoo. Learn more about the latest release here.

TensorFlow: This end-to-end deep learning platform developed by Google targets both beginner and expert developers. The core library is designed to help developers build and deploy machine learning models. But there are also additional libraries for JavaScript, Mobile and IoT, and production development.

For even more AI development resources, visit the Intel® 30-day AI Developer Challenge and learn how to build AI applications at your own pace. To boost your AI skills even further, consider the Intel® Edge AI Certification program.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Digital Signage Solutions: From Transportation to Retail

When you think of digital signage, you likely conjure images of eye-catching ads designed to entice consumers to make a purchase. Or maybe you think of the screens at the airport that provide you with your gate information. While using the tool for marketing and wayfinding can be effective, new applications can expand on the original vision by promoting powerful ideals like sustainability and community.

Case in point: mass transit. Dynamic digital signage solutions communicate real-time information that can increase use of public transportation, encouraging commuters to choose sustainable transportation options that promote a more carbon-neutral environment.

“One of the main things that’s deterred people from using forms of public transportation systems has been lack of real-time information,” says Jonathan Morley, CEO of Trueform Digital, a provider of digital signage solutions. “You may have a route with static printed information, but if the service has been held up for any reason, that information is out of date. People don’t like to wait. Digital signage tells potential customers when a particular service is due to arrive or depart, giving people confidence to plan their journey.”

Of course, the ability of digital signs to display advertising provides an opportunity to optimize their return on investment by generating revenue that can fund deployments. And as transportation methods evolve, digital signage can evolve, too, serving new purposes.

#DigitalSignage tells potential customers when a particular service is due to arrive or depart, giving people confidence to plan their journey. @trueformgroup via @insightdottech

From Smart City Applications to Retail Innovations

In addition to aiding travelers and commuters, smart signage and digital signs provide retailers with a way to engage shoppers. As stores embrace digital transformation, Trueform is leveraging four decades of experience with smart city applications to create sophisticated solutions that offer new opportunities for store owners and brands.

Today’s retailers often find off-the-shelf digital signs limited in features. Custom signs, such as those made by Trueform, can enhance branding, meet complex installation requirements, and provide more benefits to the viewer.

For example, London’s Westfield Shopping Centre, a world-leading retail and entertainment destination, wanted a state-of-the-art architectural design that went far beyond the typical flat-sided signs. Trueform deployed more than 170 digital advertising displays throughout the location, including unique interactive digital totems considered to be the centerpiece of the design (Figure 1).

Digital signage kiosks in a shopping mall
Figure 1. Digital signage at Westfield Shopping Centre in London provides shoppers with an interactive experience and real-time information about promotions. (Source: Trueform)

“The shopping mall wanted a particular look and feel that matches their branding and corporate identity,” says Morley. “They employed the services of a creative design agency and architect. Because of our custom design and manufacturing capabilities, we were able to supply a product to their exacting requirements, taking the concept done by a designer and making it a reality.”

One Stop Shop Digital Signage Solutions

Trueform manufactures Intel® processor-based computing systems for its digital signage solutions, including screens, cameras, kiosks, and audio units. And the company provides the digital display interface software that delivers signage content like splashy ad graphics and real-time information.

The company offers its customers a one-stop shop, including audit, analysis, specification, design, and installation. Trueform also provides lifetime maintenance, monitoring the signs it installed with on-the-ground servicing for routine and emergency response.

“In London, we have about 30,000 pieces of infrastructure that we are responsible for,” says Morley. “And we provide a four-hour repair service 365 days a year if anything happens to the sign or a piece of hardware.”

To maximize the value of its displays, Trueform also works with a variety of specialist software partners to help create a customized end-to-end solution. It works with analytics companies that collect data for business strategies as well as information sharing. For example, Trueform recently installed a digital totem infrastructure with software that counts and displays the number of cyclists on a route in real time.

“Anybody that’s driving past that sign can see the numbers per day,” says Morley. “It demonstrates that a lot of people are using these cycle lanes. It encourages more people to ride their bikes and has convinced the government to invest more money in bicycle safety options.”

As technology innovations continue, Morley believes that digital signage use will expand, providing more industries opportunities to benefit.

“We know that more and more industries will be making use of digital signage and there will be more technologies that can be used in conjunction with these solutions,” he says. “At the moment, it tends to be used for advertising and that’s not a bad thing. There’s far more that can be done to provide information to the general public. We can only see that increasing dramatically.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

IT/OT Convergence: More Than the Sum of Its Parts

IT + OT = IoT. Not exactly. But within the world of IoT, convergence of IT and OT is a very important equation. These days, businesses can’t afford to have their IT and OT teams in separate bubbles. But in the past, these groups have operated separately, had different outlooks on how they do their work, and valued different metrics. So how to add them together? What’s the best way to arrive at more than just the sum of the parts? And what does the overall trend mean for systems integrators?

We get insights on this topic from two points of view, that of Jan Burian, Head of Manufacturing Insights for EMEA at market intelligence firm IDC, and Sunnie Weber, IoT Ecosystem Strategy Leader for Intel®. They talk about the challenges of achieving IT/OT convergence, the importance of coalitions in the convergence process, and how even environmental sustainability can be one of its outcomes. And for even more on this topic, check out IDC’s recent report: IT-OT Convergence: A Growing Opportunity for System Integrators.

Sunnie, from your perspective, what’s behind the rise of IT/OT convergence?

Sunnie Weber: Intel sees a really big shift in digital transformation—this connection of IT and OT. It’s no secret to anyone that IoT is extremely complex. It requires a strong convergence of technologies and people. There are distinctly different players, invested stakeholders, and mutual priorities that are being forced to merge to produce these common solutions. And so Intel sees that operationally focused solution and systems integrators play a significant role in connecting the dots between IT and OT, helping to bring holistic solutions to the market.

COVID has accelerated this need for convergence of IT and OT into that digital transformation. There’s high demand for improved user experience—an example being applications-focused human-machine interfaces. The convergence of IT and OT is able to deliver that kind of value. It fulfills that need for secure infrastructure that gives enterprises the ability to make fast decisions, increase their efficiencies, improve their resilience, and scale without limits.

Jan, why do you think IT/OT convergence is such a growing opportunity?

Jan Burian: IT/OT convergence is definitely expanding in a post-pandemic world. But it’s not just driven by remote work; it’s also driven by various disruptions. We see that especially in supply chain—there is definitely a bigger focus on transparency and flexibility within the whole chain. Manufacturing organizations are also re-engineering their products. They are trying to embed new services to become even more resilient in terms of business, and also securing new revenue streams for the future.

These are the areas where IT and OT are both playing a crucial role. When I look at the outputs of the IDC survey, we see some classic benefits—like operational-performance improvement; like throughput and service reliability at the same or lower cost; like cost reduction in terms of sharing resources across IT/OT.

But what I also see here is something that is appearing quite a lot in the results of several different surveys—the sustainability perspective. It’s growing in importance because, though there are different regulations in different parts of the world, they have pretty much the same goal: to reduce CO2. And the technology and data from the OT environment are really helping organizations start that journey. That’s something I see as the next big trend, and also one of the biggest benefits.

Sunnie, what are some key things businesses can do to bring these two teams together?

Sunnie Weber: From an end-customer perspective you literally just need to get those CTO and COO teams in the same room, talking about their objectives and understanding the business experience and use cases that they’re ultimately trying to deliver. So one thing we strive to do is to create coalitions. The coalitions are making sure that both the IT and the OT sides are being represented—as well as the partners that are going to be part of creating the solution together: the software provider, the OEM.

Our partners are being forced to expand their working knowledge in either IT or OT—depending on their original focus—or they’re partnering up with a complementary company that is already an expert. That situation has traditionally been seen as a little competitive, or as giving away business, but it’s actually turning into greater opportunities.

“These are the areas where #IT and #OT are playing a crucial role—like operational-performance improvement; like throughput and service reliability at the same or lower cost; like cost reduction in terms of sharing resources across IT/OT.” –Jan Burian, @IDC, via @insightdottech

One way that Intel is trying to help, especially our solutions and systems integrators, is through our partner program. And when you have this membership with Intel you can get connected very easily to Intel-validated partners through the solution marketplace, through Intel partner connect events, and through specialized matchmaking event opportunities.

And the reason that’s important is because we’re working with partners that have solutions that are vetted and really deployed out there, so we’re able to help companies connect to solid partners with which they can confidently go to market. Bottom line, I guess you could say the partners need to be willing to have those expanded partnerships so that they can come to their end customers as holistic experts. And our end customers need to start getting rid of the silo effect that has been traditional, and bridge those CTO and COO teams to have those holistic conversations.

Jan, how does an IT/OT convergence change key skills and roles?

Jan Burian: First, there’s the C-Suite: decision-makers, budget-holders, influencers. These types of managers should definitely have a better understanding of how digital technology can bring value to their company and help them to reach their KPIs. That’s crucial, because these people typically have quite a big influential power, and if you’re not able to convince them that that solution really brings the value, then it’s very hard to even get started.

Then there’s another role: a Chief Digital Officer. The typical role of a CDO is searching or looking for new technology, for new solutions, and bringing these solutions or ideas into the organization and discussing them with the stakeholders. These people should have an understanding of how to work with systems integrators. This would also be a first point of contact between the company and the systems integrators.

Then you have the IT people, who are the experts in IT security and integration of the IT systems—typically RPA or PLM. But what they really need to do is to get a better understanding of how the OT world works—what kind of protocols that could be; what the cybersecurity threats or potential issues are. And, of course, there’s also the OT group. And these people should really understand how IT works—how the data they are acquiring can then be processed in the learning steps. This is also very important. But as Sunnie already said, these are two different worlds.

And in IDC we see there’s also maybe another group: digital engineers. And they are positioned exactly between IT and OT. It’s like a converged team of the experts who are able to be a partner for the systems integrator, and are also able to be a connector between IT and OT within the company. And these people are typically managing IT/OT deployment projects. They also take care of the logic, and of the overall architecture, and, of course, the data management.

Sunnie, what can you tell us about the convergence from a systems integrator’s perspective?

Sunnie Weber: I think what this really means for the systems integrator is that there’s actually greater opportunity. To Jan’s point, they do need to educate themselves so that they are familiar with both sides of the world—and then be in a position to help the end customer merge those worlds as well. What I see the most is that enterprise customers are in a position where change is being forced on them in order to remain agile enough to stay ahead; yet they may not recognize that. And so the systems integrators are going to be that voice of reason, that voice of consultation.

Jan, what opportunities do you see ahead?

Jan Burian: Companies are always looking for new ways to improve the customer experience and to secure new business. The metaverse idea is a good example. We probably know it from environments like Fortnite or Roblox, where industrial players have already stepped in and are selling or promoting their products and brands—I call it the civil metaverse.

But there’s also an industrial metaverse. That one is more digital twin based, and the digital twin is one of the key solutions when it comes to the convergence of IT and OT. For this industrial metaverse there could be a number of use cases—from simulations to testing to customer experience improvement. These digital twins should be driven by the data coming from a real environment—and this is where convergence between IT and OT is happening. As I said at the beginning, the future will definitely be about convergence of IT and OT systems.

Sunnie, any closing thoughts you’d like to leave with our audience?

Sunnie Weber: Sometimes the best way to have this conversation on IT/OT convergence is to start at the end. What is the value that the end customer is looking for? Because you need to be able to help the partners and the end customers define, communicate, and deploy these value-based solutions that really inspire them and their customers to change their business outcomes. And then you can begin the evaluation of both the IT and the OT forces.

So a systems integrator can walk their customer through the conversation, and end up helping to enable those better operational models that buffer the customer from situations like COVID, allowing them to be more agile and responsive. It becomes this holistic-enablement conversation of a greater value and service at the end of the day. It provides greater value to the end customer, and it provides more business for the systems integrator.

Related Content

To learn more, listen to the podcast The Meaning of IT/OT Convergence with IDC and Intel® and read IT-OT Convergence: A Growing Opportunity for System Integrators. For the latest innovations from Intel and IDC, follow them on Twitter at @IDC and @Inteliot or on LinkedIn at IDC and Intel-Internet-of-Things.

 

This article was edited by Erin Noble, copy editor.

Bringing Edge AI to Healthcare IoT Applications

The transition of AI from hospital labs and operating rooms to initial points of care in the field is the next big step for healthcare IoT applications.

Consider an ambulance outfitted with a mobile rugged edge computer. With the right balance of performance and power efficiency, first responders could feed outputs from instruments like portable ultrasounds directly into the computer, where edge AI algorithms analyze the scans for irregularities. Those inferences would then be transmitted wirelessly to hospital physicians while the ambulance is en route, saving valuable time upon arrival that could change patient outcomes for the better.

Equipment like this can unlock a greenfield of opportunities that enhance patient care in countless edge environments ranging from doctors’ offices to rural clinics to pop-up disaster relief efforts.

Traditionally, high-end medical imaging machines—deployed in major healthcare facilities—analyze a great deal of sensor data. These machines require a lot of computing capability, which makes them physically large, very heavy, and power-hungry. They are also quite costly.

Today, most edge devices have limited onboard processing capability, which requires transmitting data through the cloud to a data center for analysis. The result is then transmitted back to the edge device. This process incurs latency and requires a reliable connection that is not always available. It is also impractical and expensive to send large amounts of data this way. Edge medical devices therefore need their own local compute power, often assisted by AI technology.

A new generation of microprocessors solves these problems, combining the raw computation capability, execution efficiency, and wide data movement needed to shrink machine size and lower power draw, prerequisites for wider deployment.

Rodney Feldman, VP of Business Development and Marketing at IoT solution developer SECO USA, explains: “The traditional edge intelligence computing model doesn’t work for edge medical imaging devices. Transmission of large amounts of sensor data over potentially unreliable communications channels puts patients at risk. And the development of such a distributed processing system is too complex and long. It requires careful segmentation of algorithms, separate implementation and testing of both the edge device and cloud software, and then finally exhaustive testing of the entire system. The solution is to implement as much intelligence as possible at the edge and minimize transmission of data.”

The transition of #AI from hospital labs and operating rooms to initial points of care in the field is the next big step for #healthcare #IoT applications. @SECO_spa via @insightdottech

A Faster Path to Edge AI

Medical use cases are a great example of why IoT developers are turning to off-the-shelf, high-performance embedded computing (HPEC) solutions with enough computing horsepower and efficiency to move cloud capabilities to the edge. But to provide the same level of service as the cloud, these solutions must also include high-speed I/O to ingest multiple gigabytes of data per second from high-end devices like ultrasound probes and other medical imagers.

Today, medical OEMs and systems integrators can source these features from platforms built on 12th Gen Intel® Core processors (formerly known as “Alder Lake”).

These new processors employ a heterogeneous compute architecture with up to 14 Performance- and Efficiency-cores, and 96 Intel® Iris® Xe Graphics execution units. In use cases like a rugged medical edge server, an integrated intelligent, low-latency hardware scheduler routes complex AI workloads to the Performance-cores and graphics units, while less-intensive system management tasks are sent to Efficiency-cores.

On the data acquisition front, 12th Gen Intel® Core desktop processors represent the first time PCIe 5.0 interfaces are available. With support for data transfers of 32 gigatransfers per second (GT/s), the processors’ x16 PCIe 5.0 connectivity provides ample bandwidth for ultra-high-speed, high-resolution sensor data acquisition from diagnostic and other equipment.
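
As a rough back-of-the-envelope check (assuming PCIe 5.0's standard 128b/130b encoding and counting a single direction): 16 lanes × 32 GT/s × 128/130 ≈ 504 Gbps, or roughly 63 GB/s of raw bandwidth per direction, comfortably above the multiple gigabytes per second that high-end imagers produce.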

Robust security and advanced virtualization technology are also crucial in these systems, especially considering the nature of medical applications. They must not only ensure critical operations are executed reliably and deterministically but also protect sensitive patient data from leakage or exposure.

All this combines to support multiple demanding applications—like the medical examples described here—on the same integrated edge HPEC platform.

“The performance and level of integration of disparate but complementary technologies in these processors are allowing new applications to be deployed more fully at the edge than before. They’re pushing more intelligence with less hardware, which of course means less size, weight, power, and cost,” Feldman says.

Onto a Board and Into the Field

Despite the processors’ efficiency, developing a compact, rugged edge server comes with serious thermal and electromagnetic interference (EMI) design implications. And the more advanced the processor, the more pins it typically has, the faster and noisier its signals become, and the more power it consumes.

Recognizing these potential challenges and the trend toward HPEC platform deployment, the PCI Industrial Computer Manufacturer’s Group (PICMG) released the COM-HPC computer-on-module standard. Like other COMs, COM-HPC leverages a two-board architecture consisting of a processor module and a carrier board. But unlike others, it was designed to support high-speed interfaces like PCIe Gen 5 and 25 Gbps Ethernet and processors up to 150 W, and it includes two 400-pin connectors that enable a wealth of connectivity.

“More pins and power envelope in a designed and validated module,” Feldman says when speaking to the biggest advantages of COM-HPC. “One of the big things is just being able to utilize the high-speed interfaces through the COM-HPC connector. The development of circuitry utilizing interfaces like PCIe 5 and USB 4, for example, and high-speed processors like the 12th Gen Intel Core requires highly specialized knowledge of signal and power integrity, and how to apply it to circuit board design. Using a COM-HPC module eliminates the need to design the core computing platform.”

SECO’s Orion solution, a COM-HPC client size A module with a 12th Gen Intel Core H-Series mobile processor, is available off the shelf. But the company also designs and manufactures custom COM-HPC modules, carrier boards, and other solutions based on 12th Gen Intel Core S-series desktop processors that accelerate time to market and minimize risk.

And SECO even has a vertically oriented application software group, staffed by expert algorithm developers and data scientists, that can help get edge AI systems further off the ground.

Redefining the Medical Edge (and Cloud)

It’s been apparent since the early days that IoT applications would demand much more distributed intelligence than was in place at the time. At that point, many of the concerns were related to minimizing the amount of data that was captured and transmitted across a network, but increasingly the reasons have changed. Now it’s about capitalizing on what that distributed intelligence can enable.

In edge AI servers powered by technologies like 12th Gen Intel Core processors and COM-HPC modules, medical OEMs and integrators can consolidate what used to be multiple processors into smaller, cooler, lower-power, lighter weight, and less expensive equipment.

“Once you do that, you can push more diagnostic equipment into the field,” Feldman says. “A powerful, rugged system can deploy in an ambulance, and you have more autonomy for first responders to make quicker decisions and take immediate actions.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI-Powered Smart Buildings Get Schools Back in Session

How do organizations bring people back to the workplace in a safe and comfortable way, while at the same time maintaining profitability? And how do school districts bring students and teachers back into the classroom while meeting safety regulations to protect their well-being?

There’s no one-size-fits-all solution, so understanding some of the priorities and methods is an important step. After almost two years, returning to in-person activities is a top priority for educators, but it may not be an easy process. The good news is that innovative technologies and solutions can help facilities plan and execute their in-person return strategies.

One example is Presentation High School, a private school in San Jose, California, the heart of Silicon Valley. After pausing on-campus learning in 2020, administrators sought a re-entry solution for the 2021 school year.

Their goal was to control and monitor access and student flow without making it overly intrusive. The requirements were clear:

  • Ensuring staff and student safety by controlling the number of individuals in their facilities
  • Quick entry/exit screening for large groups
  • Maintaining a log of who entered different facilities, buildings, rooms, and when
  • A human-operated solution or kiosk
  • Easy access to critical data, especially for contact tracing
  • Ease of use for the entire staff

.@OnLogic and @ThingLogix worked together in developing #Workwatch, a pandemic management solution that has been instrumental in bringing schools, businesses, cities, and other organizations back together in-person. via @insightdottech

Developing Smart-Building Technology Through Collaboration

The administration turned to OnLogic, a global industrial PC manufacturer, and its partner ThingLogix, an IoT low-code software solutions provider.

The companies worked together in developing Workwatch, a pandemic management solution that has been instrumental in bringing schools, businesses, cities, and other organizations back together in-person. The successful collaboration between the two companies came about thanks to the AWS partnership program in which OnLogic and ThingLogix are Advanced IoT Technology partners. And the AWS cloud platform is an essential element of the solution.

ThingLogix played a big role in helping Presentation High determine the business and technical implementation of the project. And that implementation started just three weeks after their first meeting.

“They had to bring people back safely, and there’s all kinds of emotion and challenges in making a change like that,” says Brett Mancini, Vice President of Sales for OnLogic. “And when you augment efforts with technology, you can do so more quickly and more effectively. It also helps to instill confidence in the staff that they can return with minimal exposure while supporting their students.”

AI-Powered Edge-to-Cloud Health Screening

Workwatch ticked off all the requirements Presentation High had on its list.

“They saw the opportunity to use Workwatch right off the bat, and they created some of their processes around it,” says Mancini. “They use it for tasks such as distributing health survey responses and screening temperatures when people are coming into the building. Rapid screening is important to avoid a line of 100 kids waiting outside the door close to each other.”

Workwatch is an artificial intelligence-powered platform that runs on the OnLogic Helix 500, an Intel® processor-based, industrial-class edge PC. It connects all of the required edge devices—cameras, thermal imaging and other sensors, RFID tags, and Bluetooth—to capture the state of physical locations.

ThingLogix developed the Workwatch software, which handles aggregation and analysis to channel essential information to the AWS cloud for further analysis and future reference. For example, the software reads and analyzes rules-based biometric health screening data, providing immediate feedback. It can flag whether an individual should be allowed or denied entry based on thermal imaging temperature readings.

“You can give folks badges and then with AI and machine vision quickly identify where and when people are in particular areas,” says Mancini. “You’ve got IoT sensors, perhaps recording temperatures, and the use of PPE as they go through different areas where it might be required. This location and time-period data makes contact tracing possible in real time, with fewer errors than if humans had to do all of that monitoring manually.”

Beyond Pandemic Recovery

Looking forward, Presentation High School administrators see a long-term need to know on any given day who’s on campus and where they are.

The school has learned that beyond COVID-19, implementing work task management is a huge benefit. And Workwatch is making it possible. For example, the solution can be leveraged to better understand resource and staffing needs. Who knows what the future of in-classroom and remote education will be, but we do know schools need to be prepared.

Pandemic recovery is just one area where the Workwatch platform can play a role.

“This is one of those things that you could put in place now to be more prepared for a variety of use cases. You can customize it for everyday applications, not just related to the pandemic,” says Mancini. “As additional sensors become available, the uses for Workwatch will continue to grow. Meanwhile, people can feel confident about coming back to work, school, and play.”

The Meaning of IT/OT Convergence with IDC and Intel®

Jan Burian & Sunnie Weber


As the world becomes more connected, businesses no longer can afford to have their IT and OT teams operate as separate islands. They need to collaborate and communicate to adapt and respond to their ever-changing business needs. But how do you merge these two separate worlds?

In this podcast, we explore how to break down IT and OT silos, the biggest business benefits, and the new opportunities IT/OT convergence creates for systems integrators.

Our Guests: IDC and Intel®

Our guests this episode are: Jan Burian, Head of Manufacturing Insights for EMEA at market intelligence firm IDC, and Sunnie Weber, IoT Ecosystem Strategy Leader for Intel®.

At IDC, Jan focuses on Industry 4.0, digital transformation, and IT in manufacturing environments. Prior to joining the firm, he worked as a consultant for EY and Deloitte in the manufacturing and supply chain space.

Sunnie has worked in the world of IoT for more than eight years through sales and partner enablement, sales operations, and channel scale design. In her current role, she works to simplify the complexity of connectivity in the IoT ecosystem.

Podcast Topics

Jan and Sunnie answer our questions about:

  • (4:39) The importance of IT/OT convergence
  • (7:35) New business opportunities stemming from this convergence
  • (12:38) What businesses can do to bring people and platforms together
  • (17:13) How the convergence of IT and OT changes key skills, roles, and responsibilities
  • (22:56) Key considerations for systems integrators
  • (25:40) How IT/OT convergence will play a role in the metaverse
  • (29:20) The best way to approach IT/OT convergence in your organization

Related Content

To learn more, read IT-OT Convergence: A Growing Opportunity for System Integrators. For the latest innovations from Intel and IDC, follow them on Twitter at @IDC and @Inteliot or on LinkedIn at IDC and Intel-Internet-of-Things.

 

This podcast was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

 

Apple Podcasts  Spotify  Google Podcasts  

Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode we talk to a leading expert about the latest developments in the Internet of Things. Today I’m talking about IT/OT convergence.

As the manufacturing sector becomes more connected, businesses just can’t afford to have their IT and OT teams operating as separate islands. But what’s the best way to bring these two teams together? And what does this trend mean for systems integrators? Here to talk more about this is Jan Burian from IDC, and Sunnie Weber from Intel.

Thank you so much for joining us today.

Jan Burian: Hello, thank you for having me.

Kenton Williston: Tell me about your role at IDC and what brought you to the company?

Jan Burian: My current position is a manager, or I’m leading a manufacturing insights team in IDC for EMEA. I’m based in Prague, Czech Republic. And I’m with IDC for about two and a half years, and the role is just like… Besides, let’s say, leading team, I’m also leading the practice which is called Future of Operations, which is very IT/OT convergence–driven. And before joining IDC, I was working for EY and Deloitte for 11 years as a consultant. And I was in charge of performance improvement in the manufacturing and supply chain here in the center of Eastern Europe. But I also used to travel a lot, sometimes in Asia, in Western Europe, and I was, at the very beginning, at Industry 4.0 area.

That was 11 years as a consultant, and prior to that I was working in the factories. These were the suppliers for the automotive industry and these were both based in Czech Republic, or located in Czech Republic but owned by German enterprises. My role there was focusing on the quality management and owner project management. So I was responsible for ramping up the new production—the new parts within the production environment.

Kenton Williston: That’s really cool. I didn’t know you were in Prague. One of my best friends is from the Czech Republic, although not from Prague, he’s from—out in the middle of nowhere. I don’t even know if there’s a town of any meaningful name nearby, but he’s very much a country boy. That’s good to meet a city boy from the Czech Republic. So, Sunnie, tell me about your role, and what you’ve been up to at Intel.

Sunnie Weber: Sure, thanks. And thanks for having me as well. I’ve been fortunate to work in the world of IoT for the last eight-plus years now. And I’ve worked in sales and partner enablement, sales operations, and channel scale design before being able to focus on setting up partner programs for our edge partners, the operational-technology solution and systems integrator. These are those domain-expert integrators who consult and recommend solution hardware and software components, and provide end users that custom solution-deployment integration and those maintenance services. So, over the last two years I’ve spent time focusing on that just-right value exchange for this partner type, and trying to understand and implement the programmatic ways for Intel to be able to support them. But just recently I moved into a broader role in IoT as the ecosystem-partner strategy lead.

And I’m really excited to be able to take everything that I’ve learned and build a bridge across the ecosystem, helping our partners cross over from get- to go-to-market with a goal of getting everyone faster time to market and more service opportunities. So, really connecting our ecosystem is a highlight of our partner programs value exchange, and that’s definitely the focus that I want to be able to bring to the table.

Kenton Williston: Yeah, that’s great. So, a couple quick things that come to mind from hearing both of your backgrounds. First of all, I absolutely should mention that this podcast and the greater insight.tech program as a whole are published by Intel. So, good to talk to a fellow Intel person here, and also everything you’re saying about how important the role of systems integrators is, and how central this concept of IT/OT convergence has become, really are playing out a lot on the articles that we’re publishing on insight.tech. So, definitely encourage all of the folks listening today to go check out all the articles we have over there, because there’s lots of really, really interesting stuff happening. Sunnie, why don’t you tell me, from your perspective, what is behind this IT/OT convergence becoming such a big thing?

Sunnie Weber: Actually, Intel sees a really big shift in digital transformation—this connection of IT and OT. It’s no secret to anyone that IoT is extremely complex. It requires a strong convergence of technologies and people. I like to refer to the Merriam-Webster dictionary definition of “convergence”: it’s the merging of distinct technologies, industries, or devices into a unified whole. And that’s exactly what I see happening with IT and OT. There’s distinctly different players, invested stakeholders, and mutual priorities that are being forced to merge and produce these common solutions. And so Intel sees a significant play that operationally focused solutions systems integrators play in connecting the dots between IT and OT, and helping to bring holistic solutions to the market. So that’s why, as I mentioned before, we created these programs to support these partner types in the IoT ecosystem value chain to enable them to deploy faster, offer improved services, and really ultimately grow that business at the edge.

And I think, Jan, you’ll be familiar with this, but some of the supporting research that IDC did in the 2021 IoT spending report on IoT and edge—the global IoT market size in 2020 was posted as $309 billion. And despite the significant effects of COVID that happened after that, the market is actually projected to still grow to $1.8 trillion in 2028. So, in fact, COVID impacts have accelerated this need for convergence of IT and OT into that digital transformation, and it’s now a leading concern for the enterprise, who essentially is captive audience now. So, what we’re seeing is that there’s high demand for improved user experience with, an example being, applications-focused human-machine interfaces. So IT and OT converged is able to deliver that kind of value. It fulfills that need around secure infrastructure that gives the enterprises the ability to make those fast decisions, increase their efficiencies, improve their resilience, and perform this unlimited scalability. And this demand is what I think is very telling and directly tied to that IoT and digital transformation.

Kenton Williston: Yeah, and you mentioned a report, and we actually are hosting on our site right now this really great, very detailed report from IDC on the topic of IT/OT convergence. So if you’re looking forward there, the title is “IT-OT Convergence: A Growing Opportunity for System Integrators.” So, Jan, I’d love to hear some more details of what you saw in that report in terms of why is this such a growing opportunity, and what are the business benefits that are driving so many companies to look into this?

Jan Burian: That IT/OT convergence would be definitely, or the word of IT/OT, would be expanding after the pandemic, in post-pandemic world. But that’s not just driven by the remote work and all the stuff like the service or the focus on our services and so on, but it’s also driven by the different disruptions. So, especially what we see in supply chain—all these, let’s say, the problems with the containers or with the transparency of the whole chain, and also from another angle that’s about the growing or rising prices of the commodities, of the raw materials and components and so on. So there is a definitely the bigger focus on transparency and flexibility within the whole chain, and also the manufacturing organizations—they are re-engineering their products. They are trying to embed the new services to become even more resilient in terms of business and securing the new revenue streams for the future.

These are the area where IT/OT—these both are playing the crucial role. This is framing the situation. When it comes to the benefits, I just look into the outputs of the IDC survey we just run recently, and we see some, let’s say, classic benefits, like operational-performance improvement, like a throughput and service reliability at the same or lower cost, for example, or as a cost reduction in terms of ability to share the resources across IT/OT, that’s improvement in customer service. I mean, personally, what I see here is also one of the—I don’t want to say a new benefit, but something which is now appearing quite a lot in the results of several different surveys, is that sustainability perspective—that IT/OT could be seen or understood as the enabler of the CO2 footprint reduction, for example.

This is something which is going to get, I would say, not just like a big attraction, but also there’s a really growing importance of that because there are different regulations in the different parts of the world, but with pretty much the same goal to reduce the CO2, and the technology and the data from the OT environment is really something which is helping the organizations to start their journey. Sustainability, definitely—that’s something where I see as the next big trend and also one of the biggest benefits. And maybe let me share also one quite important experience.

I mean, typically we see these benefits could be like an OEE, or could be waste reduction, whatever, by 5 to maybe 10 percentage points, which is good. But what’s very important is also to have ROI or Return on Investment, within, let’s say, boundaries of one or two years and to be able to reach this target, one or two years in ROI—this is about the broader connection or integration of the systems. This is not definitely about the pilots or about isolated solutions, but this is about the ability to leverage the whole ecosystem of solutions within the organizations. So we’re talking always about the ability to scale. This is very true when it comes to the building a solution with the one year ROI. But this is also, I would say, one of the most mentioned barriers when it comes to the IT/OT integration in real life.

Kenton Williston: Interesting. So there are a couple key points there I think are worth digging into deeper. One is the issue of sustainability, and I absolutely agree that that is going to become just increasingly important as we go forward. I mean, it’s already a big, big topic, and I think not only will companies desire to be more sustainable, but they’ll be required to be more sustainable over time. So I think this is a very important criteria for everyone to look at. And the other thing that you mentioned here at the end of your very good points was the challenges to actually achieving this IT/OT convergence, and there’s a lot of factors at play there, not least of which is that, historically, these groups have been totally separate from one another, have very different outlooks on how they do their work, and what metrics are important to them.

So, for example, on the operations side very often it’s crucially important to maximize up time. You’ve got to keep the factories running, the containers are being shipped, as we were just talking about. That can be challenging sometimes, so more important than ever. And on the IT side, on the other hand, it’s been more about trying to innovate and keep up with all sorts of new technologies and rapidly deploying things. There’s a very different mindset between these two groups and of course, historically, the technologies they have used have been quite different as well. So, Sunnie, what do you see as being some of the key things businesses can do to bring these two teams together?

Sunnie Weber: I think it actually can depend on the perspective. So, from an end customer perspective you just literally need to get those CTO and COO teams in the same room, talking about what their objectives are and understanding the business experience and the use case that they’re trying to ultimately deliver—that’s just from the core side. But really what you see for partners is that they’re the ones—the systems solution integrators, Intel on our side, our sellers—we’re the ones that have to help our end customers start having those discussions. We need to ask the right questions to get our end customers to be thinking that way as well. So one thing we strive to do is create coalitions. The coalitions are making sure that you’re representing both the IT and the OT side, as well as the partners that need to be involved in this conversation who are going to be the ones that are part of creating the solution together—the software provider, the OEM—who at the table needs to be together.

So, for our partners, another thing that’s interesting, just in addition to getting that correct assessment down with the end customer, our partners are actually being forced to either expand their working knowledge in either the IT or the OT depending on their original focus, or they’re actually partnering up with some complementary partners who are already experts. Well, that has maybe traditionally been seen as a little competitive, or feeling like you’re giving away business; it’s actually turning into greater opportunities. So, one example is one of our larger NSIs. They saw some tremendous value in business growth by partnering with one of our OTSIs, and now they’ve grown a huge pipeline together. So while they were traditionally maybe a competitive relationship, they’re now going to business together and excelling. So one way that Intel is trying to help, especially our solution and systems integrators, is through our Intel partner association membership.

The unique opportunity is understanding and having relationships with partners all across the ecosystem. And when you have the membership with Intel, you can get connected very easily to Intel validated partners through the solution marketplace, through Intel partner connect events, and through specialized matchmaking event opportunities that we’re starting to have regionally. And the reason that’s important is because we’re working with partners who have solutions that are vetted and really deployed out there. So we’re able to help partners connect to solid partners that they can go to market with with confidence. So, bottom line, in summary, I guess you could say the partners need to be willing to have those partnerships expand so that they can come to their end customers as holistic experts. And our end customers need to start merging and having those—remove the siloed effect that has been traditionally known, and bridge those CTO and COO teams to have those holistic conversations.

Kenton Williston: Got it. Now you’re going to have to help me with a little decryption. Is NSI a network systems integrator?

Sunnie Weber: Actually, they’re the National System Integrators. So they tend to be the larger systems integrators. A lot of times they will partner up with the smaller, more regionally focused solution integrators or systems integrators on the operations side. So maybe they’re more on the design side, and the operations-technology experts are doing the physical integration onsite.

Kenton Williston: Yeah, that totally makes sense. And that’s something we’ve talked about a lot on the insight.tech program as well, is how there are all these niche markets where the local SI is really going to understand their customer extremely well in a way that a larger SI can’t do. But, conversely, the larger national SI will have technical capabilities and a breadth and scope of expertise that really is important in bringing these very different groups together. And so it is very much a complementary match. Totally agree with you there. So, what I am wondering about at the same time, and, Jan, maybe you can speak to this, is you do need a certain set of skills to be successful in pursuing these relationships and helping your end customers. So, Jan, what do you see in terms of being some of the key skills and roles and responsibilities that might be changing in the interest of putting this IT/OT convergence forward?

Jan Burian: Firstly, let me draw the typical structure or the different groups within the company, within the manufacturing organization. We have a C-Suite, so, decision makers, budget holders, influencers. So these type of managers, they definitely should be having better understanding of what or how digital technology could help to improve their KPIs. How digital technology could bring the value to their company, how this could be helping them to reach their KPIs. So, that’s very crucial, because these people, typically they have quite a big influential power, and if you’re not able to convince them that that solution really brings the value, then it’s very hard to just get there.

That’s the first group of the people within the typical manufacturing organization. Then there’s another group. This other—maybe let’s start with a Chief Digital Officer and people around this person. And I would say typical role of CDO is searching or looking for the new technology, for new solutions, and bringing these solutions or ideas into the organization and discussing with the stakeholders, or with the owners of the processes, with line of business leaders about how this solution could help them to improve what they do.

These people, they should—it’s not just about like a detailed understanding of these solutions, but they also should be having the understanding of—I mean, how to work, for example, with the systems integrators. This would be also like a first point of contact between the company and the systems integrators. They really need to understand what’s possible on a market. Technically you can buy anything, but is the ROI really like one or two years, or is a solution—could it be scaled within, I don’t know, a short term period? And also does the solution comply with the long-term company strategy? That’s also extremely important. So the people around, or the team or CEO, should be really getting that deep understanding of technology, but also of the implementation deployment process. Then you have, let’s say, another group—these are the IT people.

And there’s no doubt that these are the experts in IT security and in the, let’s say, integration of these IT systems—typically RPA, PLM, whatever. So, supply chain–management systems. But what they really need to do is to get a better understanding of how the OT world works, what kind of protocols that could be. I mean, what’s the cybersecurity threats or potential issues that might be happening? So that’s another group. And, by the way, before I get to the OT people, let me share one thing or one thought that a lot of people see IT-OT integration more from the, let’s say, data perspective. So you’ve got data generated on the edge, then they’re being transferred to the cloud or to the on-premise IT systems, and then will be analyzed then—I don’t know what we can do, but a thousand different things with that.

But that’s one perspective. The other perspective is that automation—I mean IT data could be triggering different situations, or this could be controlling the production lines. There could be communication between IT layer and PLC, and PLC is operating, controlling, driving the production line. So there is, let’s say, two-way flow of the data. Also the IT people should understand the logic of this, because if IT won’t work properly, then the production line could collapse. If it’s just, like, about data, getting the data from a line to the system—I mean, sometimes it’s not vital for the systems, but if it goes other way around, this could end up with, like, a catastrophe in production. And of course there’s also the group of the OT. As Sunnie already said, these are two different worlds.

So these people should really understand how the IT works. How they could leverage—how the data they are acquiring, providing, could be then processed in the learning steps. This is also very important. And in IDC we see there’s also maybe another group, and we call them digital engineers, and they are positioned exactly between IT and OT. It’s like a converged team of the experts who are able to be a partner for the systems integrator and are able also to be a connector between IT and OT within the company, and these people, they typically are managing IT/OT deployment projects. And they also take care of the logic and of the overall architecture. And of course the data management—that’s another part of what they do.

Kenton Williston: There’s a lot to think about. You’ve given me a lot of good points there, but I’ll see if I can summarize everything you just said by—basically, there’s two key elements. There’s the “what are you doing,” but there’s also really the “why are you doing it.” You need to understand the perspective of the other side of the table, as it were. So, Sunnie, something that’s making me think about is, we heard a little bit from Jan just now about how the end customer needs to have people who are bridging this gap. There’s a real good to having people specifically in that role. But what about from the systems integrators’ perspective?

One of the things you talked about was matchmaking between different systems integrators. I’m sure that’s a very important part of it. I imagine also it’s pretty important to be able to identify the right solutions that are already designed with this type of IT/OT convergence in mind. Hopefully I’m not leading the witness here too much. Is that a key consideration? Anything else that you think is really important for SIs to consider?

Sunnie Weber: I think what this really means for the systems integrator is that there’s actually greater opportunity. To Jan’s point, they do need to scale up, or at least educate themselves so that they are familiar with both sides of the world, and then be in that position to help the end customer merge those worlds as well. So there’s this consultative approach that they can take in order to answer this holistic solution. If everybody is able to start having a conversation with the value and the experience that they want out of it first, it’s really going to open up the conversation for that greater opportunity that they can deliver on. What I see the most is that the enterprise customers are in that position where change is being forced on them in order to remain agile enough to stay ahead, yet they may not recognize that. And so the systems integrators are going to be that voice of reason, that voice of consultation that, “Hey, this is actually what’s happening, and why you need to remain agile and be able to stay ahead.”

So they need to be able to improve their operational efficiencies, provide that faster time to market their products and services to meet the demand. And, again, that flexibility to respond to changes in things like product quality and maintenance services using reliable data analytics that Jan was just talking about. But having this greater opportunity—I’m just going to go back to it again—it requires having the best parallel partnerships to be able to deliver and drive more business. So that’s what’s going to allow a systems integrator to position themselves as a trusted advisor and a long-term strategic partner who can support that digital transformation, the IT/OT convergence that the customers are demanding at the edge.

Kenton Williston: That makes sense. And, Sunnie, I think one of the interesting things you’re pointing to there is this sense that companies are being forced along in this direction. And I think it’s always helpful to take these changes and look at them more as opportunities than as challenges. I think that shift in perspective can really bring a different thinking. So, Jan, I’d love to hear a little bit more about some of the opportunities you see ahead. So, for example, one of the things that people have been talking about a lot in the last little bit is this idea of a metaverse. Are there new opportunities ahead in spaces like this that companies may not be thinking about already that they can reconceptualize why they need to do IT/OT convergence?

Jan Burian: Yeah, good point with the metaverse. I can get to that a little bit later, but let me just say that what we consider—the organizations need to be more resilient, generally speaking. That means they should be more transparent, more flexible, and be really able to react on almost any disruption that might appear. So, no one knows what’s going to happen. So, even in months or for longer term, it’s almost impossible. So that risk-based approach that was applied in risk management, that’s already the old thing. So the resilience is probably like a combination between the resiliency concept and the risk-based management, is the best way for the future. And this is where the technology is really helping, through providing the data. It doesn’t have to be real time to be better, but almost in a near real-time data—some of them being processed on the edge, some of that being processed on the cloud.

So that definitely helps the organizations on their transparency, flexibility journey. There’s also so many, maybe not new issues, but I would say maybe some issues which are more important than the others. I mean, from the conversations with the end users, we always hear about capacity issues, people issues, or people and organizations. It’s very hard for them to drive the capacities. So, one day they have too much, and then the other day they don’t have people to produce something.

That’s a big problem with the supply chain as well. So that’s also why companies are looking for new ways to improve the customer experience and to secure new business. And this is where we get to that metaverse idea, for example, which is a totally virtual world. We probably know that from environments like Fortnite or Roblox—these types of worlds, where industrial players have already stepped in and are selling or promoting their products or their brands in the metaverse. That’s one part; I call it the “civil metaverse.” But there’s also the industrial metaverse, which is more like digital twin based.

And, by the way, we didn’t mention “digital twin” during our podcast, but that’s one of the key solutions or outputs when it comes to the convergence of IT and OT. In this industrial metaverse, manufacturing organizations could be building entire virtual production plants, which they can use for a number of use cases—from simulations to testing to customer experience improvement, and so on. These digital twins should be driven, fueled, or powered by data coming from the real environment. And this is where the convergence between IT and operational technology is happening. Definitely, as I said at the beginning, the future will be even more about the convergence of IT and OT systems.

Kenton Williston: Yeah, absolutely. I have to say, again, both of you have given us so many great ideas to think about, but unfortunately we are reaching the end of our time. So, Sunnie, I just want to give you the last chance here to add anything you think we might have overlooked, or just any closing thoughts you’d like to leave with our audience.

Sunnie Weber: Yeah, sure. The advice that we’ve been giving, and the training we’ve been giving our own sales field, is that sometimes the best way to have this conversation on IT/OT convergence is to start at the end. What is the value that the end customer is looking for? Because you need to be able to help the partners and the end customers define, communicate, and deploy value-based solutions that really inspire them and their customers and change their business outcomes. Then you can begin the evaluation of both the IT and the OT sides. So, for example, identify what their current capabilities are: How do they source data? What is their end-to-end interconnectivity enablement? What device management systems are they working with? How are they managing compute, and what is their analytics setup? And then you can take that and say, “Okay, are these actually working together in this continuum to provide the information they need to produce the outcome they’re striving for?”

And so a systems integrator can walk their customer through this conversation, through that continuum—that’s when they can identify, for example, what their existing quality control methodology is, and how their supply chain and operations management are performing. Do they deliver benefits to the bottom line? All of these things end up helping to enable the better operational models that buffer them from situations like COVID, allowing them to be more agile and responsive. And so when somebody is able to help identify their customer’s strengths and weaknesses, that’s when they can tap into just the right partners, and then show up as that comprehensive, trusted advisor. So taking the time to dig in during those initial conversations, and then uncovering the true value and experience they’re trying to deliver—that’s going to take the conversation beyond stopping at, “Hey, I just need some machine condition monitoring.”

It’ll turn that conversation into, “Oh, actually, what I think you’re saying is, you want to improve product quality to drive business revenue and keep your customers coming back.” And that’s when you can bolt on the additional conversations around, “Well, maybe we need to think about employee safety monitoring in addition to this machine condition monitoring. And how can we use improved and targeted data analytics to track quality control?” It becomes this holistic-enablement conversation about greater value and service at the end of the day. So what that does is it provides greater value to the end customer, and it provides more business for the systems integrators.

Kenton Williston: Perfect. Well, with that, Sunnie, I just want to say thanks so much for joining us. Really appreciate your time.

Sunnie Weber: Thank you so much.

Kenton Williston: And, Jan, I’d like to say thank you to you as well.

Jan Burian: Thank you.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from IDC, follow them on Twitter and LinkedIn at IDC. And you can also follow Intel on Twitter at IntelIoT and on LinkedIn at Intel-Internet-of-Things. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

 

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

 

This transcript was edited by Erin Noble, copy editor.

The Answer to Commuter Chaos? AI Traffic Management Systems

As thousands of Washington, D.C. drivers headed to Arlington National Cemetery for the Armistice Day ceremony, they found themselves stuck in what is often described as the world’s first major traffic jam. On November 11, 1921, the congestion trapped motorists in their cars for hours—along with one very displeased President Harding, whose limousine had been caught up in the middle of it all. People were frustrated, tired, and unaware that they were making history.

Just 100 years later, urban traffic chaos persists. But AI traffic management systems may offer a new solution to this century-old problem, while at the same time addressing the sustainability challenges of the future.

There are good reasons why cities have struggled to solve traffic management challenges.

In an ideal world, urban planning would save us from our traffic woes. But in historic city centers, where the road layout is inherited, this approach isn’t feasible. That’s especially true in emerging markets, where many streets are old and narrow, budgets are limited, and other infrastructure priorities take precedence.

Technological solutions have limitations as well. Loop detection systems are a help, but they’re basically just car counters. They can’t provide the kind of detailed data needed to model and predict traffic. Cloud-based traffic management systems are somewhat better, but suffer from latency issues that make them unable to adapt to sudden changes on the road.

“The crux of the problem is that traffic flow is inherently unpredictable,” says Jonny Wu, Senior Director of AIoT at Ability Enterprise, a manufacturer of edge AI smart cameras. “The bottom line is that if your solution can’t adapt to traffic flow changes in real time, it’s going to be suboptimal.”

AI Traffic Management: A Synergy of Edge and Cloud

The application of edge AI technology to traffic management has opened up new possibilities. In itself, edge computing isn’t new. It was first used in the 1990s to improve web and video content delivery. But processors are now powerful enough to handle the kind of computational heavy lifting needed for AI at the edge.

Ability’s Agile & Adaptive Transportation Management solution relies on Intel® VPUs, which Wu says are “particularly good at performing the types of visual processing tasks required by edge AI camera systems.”

In practice, this means that AIoT cameras like Ability’s can do a lot more than just count cars. They can identify different vehicles by type, use license plate recognition to track individual cars, calculate journey times, monitor changes in direction, and detect queue fluctuations at intersections.

And that’s a game changer, because this is exactly the kind of granular, real-time data you need to model, predict, and optimize traffic flow.
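
The per-vehicle detection described above typically starts with an object-detection model running on the edge device itself. The sketch below shows what that inference step might look like in Python with the OpenVINO toolkit. It is a minimal illustration only: the model file name, NCHW input layout, and SSD-style output format are assumptions for the example, not Ability’s actual implementation.

```python
# Minimal sketch of edge-side vehicle detection with OpenVINO.
# Assumes an SSD-style IR model whose output rows follow
# [image_id, label, confidence, x_min, y_min, x_max, y_max];
# the model file and threshold are illustrative.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("vehicle-detection.xml")
compiled = core.compile_model(model, "CPU")   # could also target a GPU or VPU device
_, _, h, w = compiled.input(0).shape          # assumes an NCHW input layout

def detect_vehicles(frame, conf_threshold=0.5):
    # Resize the camera frame and reorder it to match the model input.
    blob = cv2.resize(frame, (int(w), int(h))).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[compiled.output(0)]
    vehicles = []
    for det in detections.reshape(-1, 7):
        image_id, label, conf, x_min, y_min, x_max, y_max = det
        if conf >= conf_threshold:
            vehicles.append({"class_id": int(label), "confidence": float(conf),
                             "box": (float(x_min), float(y_min), float(x_max), float(y_max))})
    return vehicles
```

From detections like these, the camera can derive the higher-level metrics mentioned above, such as counts by vehicle type or queue lengths per approach.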

In an AI traffic management system, data is captured and processed on the edge and then sent to the cloud for additional processing. In the cloud, the historical traffic data is used to model flow dynamics. An AI optimizer then runs simulations to create an optimized traffic control plan.

The plan is pushed out to traffic signal controllers in the field, where the edge AI cameras monitor the flow of traffic and send data to the cloud for ongoing optimization. If necessary, the AI system will automatically adjust the traffic control plan in real time to adapt to changing conditions.
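
To make that edge-to-cloud loop concrete, here is a simplified, self-contained Python sketch of the cycle: edge cameras report metrics, the cloud accumulates history and produces an optimized signal plan, and the edge adjusts locally when conditions change suddenly. The data model, the trivial optimization heuristic, and all function names are illustrative assumptions, not Ability’s architecture.

```python
# Simplified sketch of the edge-to-cloud traffic optimization loop.
import random
import time

history = []  # stands in for cloud-side historical traffic data

def collect_edge_metrics(intersection):
    # Edge: an AIoT camera would report real counts; random values stand in here.
    return {"intersection": intersection,
            "vehicles_per_min": random.randint(5, 60),
            "queue_length": random.randint(0, 40)}

def run_cloud_optimizer(history):
    # Cloud: model flow dynamics from history and simulate candidate plans.
    # A trivial heuristic stands in: give the busiest approach a longer green phase.
    recent = history[-20:]
    plan = {m["intersection"]: {"green_seconds": 30} for m in recent}
    busiest = max(recent, key=lambda m: m["vehicles_per_min"])
    plan[busiest["intersection"]]["green_seconds"] = 45
    return plan

def control_loop(intersections, cycles=3):
    for _ in range(cycles):
        metrics = [collect_edge_metrics(i) for i in intersections]  # edge capture
        history.extend(metrics)                                     # edge-to-cloud upload
        plan = run_cloud_optimizer(history)                         # cloud optimization
        for m in metrics:                                           # real-time local adaptation
            if m["queue_length"] > 30:
                plan[m["intersection"]]["green_seconds"] = 60
        print(plan)  # in a real system, pushed to signal controllers in the field
        time.sleep(1)

control_loop(["North", "South", "East", "West"])
```

A production system would of course be event-driven and far more sophisticated, but the division of labor is the same: heavy modeling and simulation in the cloud, fast local adaptation at the edge.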


It’s this combination—AI on the edge and in the cloud—that makes the system work. “Edge AI isn’t a replacement for the cloud,” says Wu, “but computer vision on the edge, together with cloud AI optimization, offers a solution that’s more than the sum of its parts.” (Video 1)

Video 1. Implementation of an AI traffic management system from data collection to full deployment. (Source: Ability Enterprise)

AI Systems Deliver Significant Results

Ability’s Malaysia implementation is a case in point. The company’s AIoT cameras were deployed in the city of Ipoh, on a busy stretch of road that suffered from heavy traffic congestion.

“It’s a series of four intersections right in the center of Ipoh,” explains Erwin Yong, Director of LED Vision, Ability’s partner in Malaysia, “so we’re talking about a part of the city where it’s basically impossible to widen the road.” Compounding the problem: Three nearby schools were causing traffic buildups during student drop-off and pickup times.

Ability and LED Vision installed 12 cameras across the four intersections. After an initial data collection period, the historical traffic data was sent to a cloud AI optimizer. Once the optimized traffic control plan was fully deployed, the results were striking. Benchmarked against the historical data, as well as Google’s commute time predictions, the system reduced the average vehicle journey time in the area by more than 30%.
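
As a quick sanity check on how a reduction like that is calculated, the snippet below compares average journey times before and after optimization. The numbers are purely illustrative, not measurements from the Ipoh deployment.

```python
# Back-of-the-envelope journey-time comparison with illustrative sample values.
baseline_minutes = [12.5, 14.0, 11.8, 13.2]   # hypothetical pre-deployment journey times
optimized_minutes = [8.1, 9.3, 8.0, 8.9]      # hypothetical post-deployment journey times

baseline_avg = sum(baseline_minutes) / len(baseline_minutes)
optimized_avg = sum(optimized_minutes) / len(optimized_minutes)
reduction = (baseline_avg - optimized_avg) / baseline_avg * 100
print(f"Average journey time reduced by {reduction:.1f}%")  # about 33% for this sample
```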

Smarter, More Sustainable Cities

If you improve traffic flow, you cut journey time for drivers—and idle time for cars. The obvious benefits are fewer hours wasted in traffic jams and a substantial reduction in carbon emissions. And then there are the not-so-obvious benefits. For one thing, an AI system eliminates much of the human effort needed to manage traffic at busy junctions. Traffic officers are freed up to go where they’re needed most.

In addition, says Wu, AIoT camera systems are versatile: “A camera that you use for traffic management, you can use for other things as well: illegal maneuver detection, speed enforcement, and so on.”

In an era of global climate crisis, cities are looking for new ways to reduce carbon emissions and reach their sustainability targets. Effective, economical, and flexible, AI traffic management systems will be an attractive option to traffic engineers and systems integrators building the smart cities of tomorrow.

Traffic management may be an old problem. But thanks to advances in AI, the future is looking bright.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.