Industry-University Partnership Drives Hands-On AI Learning

Driving a race car around a track is nerve-wracking enough. Now rev that concept up a gear to include an autonomous race car. A self-driving vehicle that can perform at blinding speed while negotiating banked curves and other cars without incident might seem like an impossible feat. But it’s precisely what drives a partnership between Aristurtle, a Greek student car-racing team from the Aristotle University of Thessaloniki, and Cincoze, a supplier of rugged embedded computing solutions. Using the Cincoze DS-1202 embedded computer gave the racing team a boost in Formula Student, an international engineering design competition that gives students a platform to experience, build, and learn about race cars.

Edge AI for an Autonomous Race Car

The Aristurtle team had previously entered the competition with its electric race cars. But during the 2020-2021 competition season, the team decided to develop its first autonomous vehicle (Figure 1). Rather than reinvent the wheel, they modeled their autonomous race car on a previous season’s electric vehicle.

Autonomous race car
Figure 1. The Aristurtle racing team was able to build its very first autonomous race car, thanks to Cincoze’s DS-1202 system and an Intel® Core™ i7-9700TE processor.

“Since we used a previously developed vehicle by our team as a base for our autonomous solution, we needed to ensure software and hardware compatibility between the autonomous pipeline and the rest of the vehicle’s electronic systems, in order to minimize unnecessary and time-consuming changes,” says Nikos Kotarelas, Autonomous System Alumnus Member for Aristurtle.

As expected, it was not a simple cut-and-paste endeavor. Achieving human-like perception and sensing was an immediate challenge, according to Cindy Lin, Senior Marketing Manager at Cincoze. “Autonomous race cars rely heavily on sensors to navigate the track and avoid obstacles,” Lin points out. “In reality, there may be many other unpredictable factors that can affect sensor accuracy, such as weather and environmental conditions.”

The team found developing autonomous driving software was time-intensive and required expertise in many areas, including software engineering, robotics, computer vision, and artificial intelligence.

Integration of the steering and brake actuators as well as the processing unit on the vehicle proved to be one of the most difficult tasks since the autonomous vehicle had to be drivable by a person as required by the competition’s rules, Kotarelas explains. As a result, the space available for placing these systems in the vehicle was limited.

Driving Toward an Edge AI Solution

Aristurtle decided that an Autonomous Processing Unit (APU) could gather the necessary data from sensors and peripherals and perform the complex computations required by the team’s software to drive and steer the vehicle.
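
At its core, an APU of this kind runs a fixed-rate loop: read the sensors, run perception and planning, then command the actuators. The sketch below illustrates only that general shape; the sensor, planner, and actuator interfaces are hypothetical stand-ins, not Aristurtle’s actual software.

```python
# Minimal sketch of an autonomous-pipeline main loop.
# All interfaces (camera, lidar, planner, steering, brakes) are hypothetical.
import time

CYCLE_HZ = 50  # fixed control rate; real systems tune this to sensor latency

def main_loop(camera, lidar, planner, steering, brakes):
    period = 1.0 / CYCLE_HZ
    while True:
        start = time.monotonic()
        frame = camera.read()                  # image from the track-facing camera
        points = lidar.scan()                  # point cloud for cone/obstacle detection
        cones = planner.detect(frame, points)  # AI inference step
        cmd = planner.plan(cones)              # trajectory -> steering/brake targets
        steering.set_angle(cmd.steering_angle)
        brakes.set_pressure(cmd.brake_pressure)
        # Sleep out the remainder of the cycle to hold the control rate steady
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```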

The embedded computing solution chosen had to meet stringent requirements for connectivity, hardware performance, and ruggedness. Resistance to shock and vibration was also a key consideration during the design process, according to Kotarelas.

Additionally, the team wanted to reduce the students’ computing burden and allow them to focus more on the integration of automotive electronics and complex AI inference issues. The DS-1202 checked all the boxes.

“We noticed Cincoze offered high-quality embedded computer systems that matched our criteria regarding connectivity, performance, and robustness,” says Kotarelas. “The partnership with Cincoze for a DS-1202 equipped with an Intel® Core i7-9700TE processor meant making our design process simpler and achievable in the short timespan we had to construct the vehicle.”

As #AutonomousVehicles become more commonplace, so will the need for trained #engineers who can steer the industry in the right (safe) directions. @cincoze1 via @insightdottech

“The fanless heat dissipation design, anti-vibration features, and sturdy casing ensured operations in the automotive environment the system was used in,” Kotarelas continues.

Lin explains that it was the Intel processor within the embedded computing system that allowed Cincoze to deliver the complex AI inference and real-time sensor data processing needed for autonomous race cars.

“Our mission is to offer solutions that address the challenges of high-power consumption, cooling techniques, energy efficiency, and compact size required for embedded computing in response to AI computing demands,” Lin says. “Through this student collaboration, Intel has enhanced the confidence of our automotive customers, enabling faster adoption and shorter time to market for automotive projects.”

A Valuable Industry-Education Partnership

As autonomous vehicles become more commonplace, so will the need for trained engineers who can steer the industry in the right (safe) directions.

Partnerships like the one between Aristurtle and Cincoze are symbiotic. They offer industry a chance to test-drive and improve its equipment while facilitating student innovation.

“Autonomous driving is a new field; students need to learn AI and explore new methods in practice, which helps to increase their innovative ideas,” Lin says. Since such partnerships increase opportunities to learn AI from real-world test cases, students strengthen their knowledge of a variety of key industry subjects like image processing, AI inference, automatic control, and more.

Laying the foundation for a well-trained workforce in this way can advance the progress of smart transportation technology.

“Having access to companies like Cincoze brings us closer to the industry they represent, giving us insights into the sector they specialize in as well as tangible support in the form of a sponsorship/partnership,” says Kotarelas.

The future of embedded computing is on a fast track to adoption. “One of the biggest trends in the embedded computing industry is integrating AI technologies, enabling these devices to perform more complex tasks in real time, while remaining power-efficient,” Kotarelas says.

The key, as the Aristurtle partnership demonstrates, is to understand “the needs of each problem toward creating application-specific platforms, ensuring real-time processing and power efficiency,” Kotarelas explains. That holds whether those embedded edge AI solutions end up assisting in high-stakes surgeries or ensuring a nail-biting finish on the racetrack.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Computer Vision and AI Power Smart Parking Solutions

Driving in and out of a parking garage should be easy. But sometimes it can lead to frustrating, even embarrassing, problems. For example, on a trip to England, just as I inserted the exit token into the machine, my car stalled. The lift gate closed and one of our passengers had to run to reception two floors up for another token.

I really could have used a smart parking system that day. And I am far from alone. Most drivers have experienced frustration with parking systems. A common issue is having to wait behind several cars while parking attendants collect tolls or check authorized vehicle lists. Solutions such as self-serve kiosks and RFID-based systems can help move vehicles along, but delays like the one I experienced are still common.

“The building that I live in actually uses an RFID-based solution for managing parking access and entry,” says Alhamade Abelgadir, Chief Product Officer at Dubai-based Disrupt-X, a global IoT solutions company. “Sometimes the card doesn’t read easily, so I have to open the window or step out of the car.” Then there are those times when drivers lose their RFID cards, and must get new ones.

Disrupt-X has developed a smart parking solution on its platform called Cognitive Neurons. It incorporates ANPR (automatic number-plate recognition) technology that uses computer vision and AI algorithms to identify vehicles entering and leaving parking areas.

Cameras positioned at entrances and exits read each car’s plate number; the system checks it against a database of authorized vehicles and opens the gate. The process takes just a second or two. It’s far more efficient than having guards with clipboards checking every car. Or RFID sensors, as Abelgadir well knows.

The Disrupt-X platform provides a full stack of #IoT solutions, with the Cognitive Neurons application being just one of many. @disruptxio via @insightdottech

Smart Parking IoT Platform in Action

The Disrupt-X platform provides a full stack of IoT solutions, with the Cognitive Neurons application being just one of many.

“Most IoT platforms focus on application enablement,” Abelgadir says. “When enterprises procure the platform, they’re still required to make investments to build the use cases on top of that. So there’s a lot of added costs.” Disrupt-X is different. Applications are built in, and customers can activate them one at a time. This keeps costs down.

“We don’t charge our customers for the applications that are there, but rather for those they use, or for the devices that they connect to that platform. In essence, we’re actually building what you could say is a hyperscaler for IoT solutions,” Abelgadir says.

ANPR was a priority because it fills an acute need in Dubai and neighboring cities. A combination of high temperatures, traffic congestion, and space constraints creates parking challenges.

“Dubai itself is a pretty packed city, so there’s not a lot of land area,” Abelgadir says. Many buildings have internal parking, either underground or in upper levels. Manual parking systems are inefficient and pose security challenges. Logs that keep track of vehicle comings and goings are prone to human error, and guards sometimes wave unauthorized vehicles through for expediency.

Cognitive Neurons’ cameras capture the license plate number of every vehicle that goes in and out. This prevents tailgating, where a car follows the vehicle ahead so closely that it evades detection. Even if such a car gets through, the system won’t allow it to exit because it’s not on the whitelist, and it raises an alarm to security.
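
Stripped to its essentials, that gate logic is a whitelist lookup on entry plus a record of which plates are currently inside, so an unlogged plate at the exit trips the alarm. The following is a toy sketch of that logic, not Disrupt-X’s implementation; all names and the in-memory whitelist are illustrative.

```python
# Illustrative gate controller: whitelist check on entry, alarm on an
# unlogged vehicle at the exit (e.g., one that tailgated its way in).
AUTHORIZED = {"ABC123", "XYZ789"}  # stand-in for the authorized-vehicle database
inside = set()                     # plates currently in the parking area

def raise_security_alarm(plate: str) -> None:
    print(f"ALERT: unauthorized vehicle {plate} at exit")

def on_entry(plate: str) -> bool:
    if plate in AUTHORIZED:
        inside.add(plate)
        return True   # open the gate
    return False      # keep the gate closed

def on_exit(plate: str) -> bool:
    if plate in inside:
        inside.remove(plate)
        return True   # normal exit
    raise_security_alarm(plate)
    return False
```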

To protect the privacy of residents, Abelgadir says, Disrupt-X minimizes the amount of personally identifiable information (PII) collected by the platform, segregating it from the other data it processes.

Another cost-conscious practice involves developing many of the solution elements in-house and making the platform as efficient as possible. For example, a single compact Intel-based server deployed on-site runs 8 to 10 cameras for the ANPR.
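
Serving 8 to 10 cameras from one box usually comes down to multiplexing the video streams into a shared inference pipeline. A rough sketch of that pattern with OpenCV follows; the stream URLs and the hand-off step are placeholders, not Disrupt-X’s code.

```python
# Illustrative multi-camera ingest: one thread per RTSP stream, all feeding
# a shared ANPR pipeline. URLs and the hand-off step are hypothetical.
import cv2
from concurrent.futures import ThreadPoolExecutor

CAMERA_URLS = [f"rtsp://camera{i}.local/stream" for i in range(8)]  # hypothetical

def process_stream(url: str) -> None:
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        _ = frame  # hand the frame to the shared plate-recognition model

with ThreadPoolExecutor(max_workers=len(CAMERA_URLS)) as pool:
    pool.map(process_stream, CAMERA_URLS)
```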

A Growing List of IoT Use Cases

Besides Cognitive Neurons’ Smart ANPR application, Disrupt-X also provides plenty of smart building and smart city functionality, such as environmental quality monitoring and predictive machine maintenance, water leak detection, asset tracking, telematics, parking occupancy tracking, and manhole cover monitoring.

“We’re continuously adding to our portfolio. We even have very funny use cases, like rodent monitoring. We use sensors to actually identify where you can have rodent infestation within the building,” Abelgadir says.

When customers invest in the platform, they can access any of the available applications, he says, adding, “To leverage those use cases, they need to connect hardware or devices to it to start receiving the data. That’s when they start actually seeing the use case itself.”

Disrupt-X aims to strike a balance between utility and affordability. Abelgadir says that similar solutions might cost four to five times more. “Whatever you’re paying for should make sense, and you should be getting a return out of it,” he says.

In Dubai and the surrounding region, he says, customers need to see a technology’s value before greenlighting it: “That’s why for us, we focus a lot on cost and the value it delivers.” Keeping costs down also eases adoption for the masses. Simplifying the user experience, meanwhile, minimizes training and lets users complete tasks in two or three clicks.

Intel is a key partner, Abelgadir says: “They provide a great suite of technologies that enable us to develop our solutions.” This includes a hardware/software stack and the Intel® OpenVINO toolkit, which powers computer vision development and capabilities.
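
The basic OpenVINO pattern behind computer vision workloads like ANPR is compact: read a model, compile it for a target device, and run frames through it. Here is a minimal sketch; the model file and input shape are placeholders, not Disrupt-X’s actual pipeline.

```python
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("plate_detector.xml")  # placeholder detection model
compiled = core.compile_model(model, "CPU")    # target device; "GPU" also works on Intel graphics

def detect_plates(frame: np.ndarray):
    # Resize and lay the frame out as NCHW float32 (shape is illustrative)
    blob = cv2.resize(frame, (640, 640)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    result = compiled([blob])                  # synchronous inference
    return result[compiled.output(0)]          # raw detections for post-processing
```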

“In terms of the business perspective, working with Intel gives us a lot of exposure to their broader ecosystem of partners, customers, and resellers,” says Abelgadir.

That exposure should prove valuable as Disrupt-X develops more smart city and building applications. Going forward, technology will focus on efficient use of energy and other resources through data analytics and automation. The Disrupt-X platform will support these goals with a host of applications: telematics for fleet management, asset tracking, and other smart building solutions.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Accelerating Developers’ AI Success with OpenVINO™

From diagnosing diseases to detecting defects on the production line to providing deeper insights into customer behavior—AI increasingly transforms the way businesses across all industries operate today. But the people who truly make these deployments successful are the developers creating AI models and solutions capable of providing business value. When AI developers are equipped with the right tools, technology, and knowledge, they have the power to make all kinds of exciting and innovative use cases possible.

In this podcast episode, we discuss how AI is used to improve efficiency, make better decisions, enhance customer experience, and provide a competitive advantage. We also explore tools and technologies that allow developers to successfully build and deploy these AI models and solutions as well as touch on some of the latest capabilities in the OpenVINO 2023.0 release.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: Intel

Yury Gorbachev, OpenVINO architect at Intel, and Raymond Lo, AI Software Evangelist at Intel.

Yury has held various engineering roles in his seven years at Intel. As an OpenVINO architect, he works with the developer community to learn about their AI development pain points and come up with a technical solution to solve them.

Raymond has been the global lead on OpenVINO for the past three years, working with engineering, planning, enabling, and marketing teams to drive developer satisfaction. Prior to joining the company, he was CTO and Cofounder of Meta Co., a YC-backed company where he worked to build software and drive hardware.

Podcast Topics

Yury and Raymond answer our questions about:

  • (3:47) The evolution of artificial intelligence in recent years
  • (6:36) How developers benefit from AI advancements
  • (9:46) Best practices for successful AI deployments
  • (14:58) The five-year anniversary of the AI toolkit OpenVINO
  • (20:47) New tools making AI more accessible to business users
  • (24:10) The future of AI and the role of OpenVINO
  • (29:17) What developers can expect in OpenVINO 2023.0

Related Content

To learn more about AI development, read Development Tools Put AI to Work Across Industries. To learn more about the latest release of OpenVINO, visit https://openvino.ai/. For the latest innovations from Intel, follow them on Twitter and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to have an exciting conversation where we’ll be talking about AI trends and challenges, as well as get a look at the latest OpenVINO™ 2023.0 release with Raymond Lo and Yury Gorbachev from Intel. And, Yury, I see that you’re an OpenVINO Architect at Intel. Can you explain to me a little bit more about what that means and what else you do at Intel?

Yury Gorbachev: Yeah, so, I’m acting as a leader for the architecture team. Basically it’s a team of very capable individuals, so they are very technically strong in different areas. Optimization, model optimization, code optimization for the certain platforms like CPU, GPU, and then model support—all of those things. And we work on a customer request, we work on new features that we envision ourselves, we work on potential evolution areas that we see as important. And we try to combine them. We try to combine them with what we have. We try to come up with the technical solution and solve the problem in the most efficient manner.

So, pretty much I would say for the past years we’ve been working on so many features on so many interesting aspects of AI that we have built a very strong team with very strong competence. And for me, personally, I just, every day I just enjoy looking at something new and how to add this to our product. That’s pretty much what I do every day.

Christina Cardoza: Well, I can’t wait to see how some of that gets added into OpenVINO. I know we’ll be talking about OpenVINO 2023.0, but before we get into it, Raymond, I would love to learn a little bit more about what an AI Software Evangelist does and how that brought you to Intel.

Raymond Lo: So, I’ve been in the industry about 10 years already. And before Intel I was a developer, right? I was even the CTO of a company that came out of  Y Combinator. And that’s how I got started, because I always build software, and then software I think is where it drives the hardware. And that time you have developer that wrap around building workshop with me. At Intel, that’s what I’m doing: I’m building hackathon workshops and giving talks so people can learn about what can you do, right? As a developer, number one rule is it looks great—how does it work right? And that’s what I do at Intel today, and that’s what inspires me, because all of you have been bringing me extremely hard questions every day.

Christina Cardoza: Yeah, absolutely. And I’m really excited to get into this conversation, because at insight.tech we write a lot about different trends happening in different industries, and AI has been a part of a lot of these new use cases that we’re seeing over the last couple of years. Defect detection and manufacturing customer-behavior analysis in retail, even traffic detection in smart cities. So it’s really been exciting to see all of these changes with AI powering it.

And you mentioned you work on OpenVINO. I know OpenVINO is behind a lot of this stuff, and the developers that Raymond’s working with to make some of these capabilities possible and really translate them into these business values. So I’m excited to hear from you guys, since you have been on the inside, on more the technical side as engineers. Yury, I would love to see how you’ve been seeing AI progress over the last couple of years. What are the trends that you’ve been seeing? Or what are you most excited about that’s happening in this space?

Yury Gorbachev: Yeah, first of all, I would say you are right. I mean, most of the AI use cases that you mentioned are already in production. This is the mainstream now. Quite a lot of use cases are being solved through AI. Customer monitoring, roads monitoring, security, checking of the health of patients—all of those things are already in the main line. But I think what we are seeing now is, for the past year, is a dramatic change in the AI and how it is perceived and what capabilities it is possible to—it is capable of solving. So, I’m talking about generative AI, and I’m talking about this splash of the popularity that we are seeing now with this ChatGPT, Stable Diffusion, and all those models.

Most of the hype is obviously coming from ChatGPT, but there is quite a lot of use cases that are being—that are exploding right now, thanks to generative AI. We are seeing image generation, we are seeing video generation, we are seeing video enhancements, we are seeing text generation—like there are use cases where a model can write a poem for you, a model can continue your text, can continue the paper that you were writing. And all of those things, they are evolving very rapidly right now.

I think, if we look back in like 10 years or so, when there was an explosion in adoption of deep learning, I think it was combined with availability of the data and the availability of the GPUs and the ability to train the models. There was an explosion in the use cases. There was explosion in the models, there was explosion in architectures. So now, the same thing is happening with the generative AI.

Christina Cardoza: I absolutely agree. You know, we are just seeing every day new advancements, new different technologies and models that developers can use. It can be quite overwhelming, because now developers, they want to jump on the latest and greatest advancements. They don’t know exactly how to do it or if they should. And sometimes it’s better not to jump right away and to maybe wait and see how it plays out and then how you can add it to your solution.

So, Raymond, I know in your position you work with a lot of developers, and you’re trying to exactly help them do this and teach them how to build AI in a safe and smart way. So, what can you tell us about the advancements and how you work with developers?

Raymond Lo: Well, to work with developers, I have to be a developer myself. So maybe it’s worth sharing how I started to, because it came quite long ago, maybe 10, 12 years ago. I built my first neural network on my computer at that time with my team, right? I was in the lab trying to figure out how can I track this fingertip and make different poses, just making sure that my camera can understand what I’m doing in front of it. It took us three months just to understand how to train the first model. Today, if I give it to Yury, it’s like, Point me to right there. Two days later maybe it’s all done, right? But to me at that time, even published the paper, building just a very simple neural network took me forever.

Of course it worked at the end. I learned how it works. But through these many years of evolution, the frameworks are available. TensorFlow, PyTorch is so much easier to use. Back then I was computing on my own C++ program. Pretty hard-core, right? Not even Python—C++ coding, right? And then today they have OpenVINO, because back then, I was trying to deploy on my wearable computer. Oh, dear God, I have to look at instruction sets. I was trying to look at, okay, how do I parallelize this work, making sure that it runs faster.

So, today when I talk to my developers or developer in the community, it’s like here—it’s OpenML, I have GPT, everything is in there. You don’t have to worry about that as much, because when you made a mistake in unfurling, guess what happened? Ba boom. It will not run anymore, or it’ll give you wrong results. So those are the things that I find that is a lot valuable today is I have a set of tools and resources that people can ask me. I can give them a quick and “validated answer.”

Then back to the old days, I have my research project that maybe three people ever read it—including myself at that time. And then I finished my PhD. People think this is pretty good, but a lot of work today I see is from undergraduates, right? They have no programming experience. How would this will be good enough, right? So that is the sort of thing that Intel—today we are giving people this validated tool with this kind of back history, right?

Christina Cardoza: That was great. I’m sure it is frustrating, but also exciting to see how much faster you can build these AI solutions than you have in the past. I was talking to one of your colleagues, Paula Ramos, and she was working on something that took her months to do—train an AI model, like you were saying. But then with the tools Intel has available, it took her minutes. So it’s amazing to see all of these evolutions and advancements.

I mentioned some use cases in the beginning—defect detection and manufacturing smart cities and retail. A lot of these applications that AI is now being built into can be very mission-critical applications. There’s a lot of different things you have to be aware of when building out these solutions. So I’m curious, Raymond, what advice would you give developers when building these types of solutions? How—what should they watch out for, or what should they be aware of?

Raymond Lo: This is actually a very good question, because as I speak more with young developers, some of the customers, I listen, right? Like, what do you need to make something run the way that you need it, right? So, let’s say, hypothetically speaking, if the person is trying to put it in a shopping mall, I don’t think they need, like, FDA approval, or any sort of approval to get a camera set up. They need to think about privacy, they need to think about heat maybe, because they want to hide it. They don’t want to have a camera with the rig sticking out. Like, it will happen all the time because the device could be very hot, right? If you’re running it on a very power-hungry device. The more I talk to the people listening, I find out there’s no perfect answer.

But we think about portfolio, and that’s what Intel has. When I first joined, I was like, “Hmm.” My boss was saying, “We have a lot of hardware that runs OpenVINO.” And I was like, “How many?” He was like, “Look at all the x86.” “Like, what do you mean, x86?” “Every x86, yeah, today, right?” I was like, “Yeah, I use it.” Well, I was 18 or younger, right? So that gave me that insight into oh—I don’t need ultra-expensive supercomputer to do inference.

So, as I listen to more—some use cases, like detecting diamonds, it’s real; it’s actually a real hackathon. To figure out if the diamond has defect in it, they don’t need a supercomputer. This is a need for a computer that reads it very well with a very good algorithm. And then I think, everyone loves diamonds; who doesn’t, let me know. But if they look at the diamonds, right? It’s so shiny and pretty, but they can use that to find the defect inside, and then what they require is a very unique system, right? They want it to be in a factory; they want it to be on the edge. They don’t want to upload this data—nope, not happening. They have a factory there; they want to make sure everything happened on site.

So that’s how I felt, like the more we work with our customer, I think we are trying to collect these kinds of use cases together and create these kinds of packages of solution for them. And that’s what I like about my work because it’s—anyone play puzzle games? Every day is a puzzle. It’s good and bad, okay. And you may have something to add to it.

Yury Gorbachev: Yeah, I think you’re totally right. So I think it’s like, the most undervalued platform, I would say, is something that you have on your desk, right? So quite, if not most, developers actually use laptops, use desktops that are powered by Intel, and OpenVINO is capable of running on them and actually capable of delivering quite good, if not the best AI performance for the scenarios that we are talking about. You can do live video processing, you can do quite a lot of background processing for documents, audio processing—all of those things are actually possible on the laptop. So you don’t need to have a data center.

So that’s something we’ve been trying to show for years, and that’s something that Raymond is helping to show to our customers, to developers. We are making resources to showcase that. We are making resources to showcase how you can generate new images on this. How you can process your video to perform style transfer, to detect vehicles, to detect people, and maybe do a few more things, right? So I think this is spot on.

So, from the business standpoint the exact same platform runs in the cameras and the video-processing devices and things like that. But it all starts with the very basic laptops that each and every developer has. And the OpenVINO started there; OpenVINO started to run on those, on those devices. And we continue to do it. We continue to showcase power and performance that you can reach by running on those devices.

Christina Cardoza: That’s a great point. Intel has followed along with this evolution. You know, like Raymond said, you don’t need such advanced hardware or all of these computers anymore to do some of these things. Intel keeps making advancements year over year to make sure that this is easier to build, this is more accessible to developers or to the business side. Like I mentioned with Paula—that being able to train an AI model in minutes—that was a new software tool that Intel just released—the Intel® Geti™, that AI platform just within the last year or so. So it’s really exciting to see the advances that Intel is actually making.

And you’ve mentioned OpenVINO a couple times, Yury. I know you work very closely with that AI toolkit, and that Intel is celebrating the fifth year of OpenVINO this year. It seems like it’s been around for a lot longer, with all of the advancements and use cases that have come out, but, wow—five years. And I know along with this release we have the 2023.0 release coming. So, Yury, I’d love to hear a little bit more about how you’ve seen OpenVINO advance over the last couple of years, over the last five years, and what we can expect from this new release coming out.

Yury Gorbachev: Yeah, so, like Raymond mentioned in the very beginning that he was starting with OpenCV, so I have to say originally, most of the team, that we have actually started by working on the OpenCV by developing OpenCV, and then eventually we started to develop this open-source toolkit to deploy AI models as well. So we borrowed a lot from OpenCV paradigms, and we borrowed a lot from OpenCV philosophy. And we started to develop this tool. And I have to say, since we’re working on OpenCV we were dealing a lot with computer vision. So that’s why initially we were dealing with computer-vision use cases with OpenVINO.

Then, as years passed, and we have seen also simultaneously tools that were evolving, like TensorFlow, PyTorch. We even started—initially, when we started, Caffe was the most widespread framework. Nobody even remembers that now, but that was the most widespread framework, that people were attending webinars, attending conferences, just to develop models in Caffe. Nobody remembers that. But we started with this.

We’ve seen the growth of TensorFlow, we’ve seen the explosiveness of PyTorch, all of that. So we had to follow this trend. We’ve seen the evolution of the scenarios like computer vision initially, close image classification. Then, oh man, object detection became possible, segmentation, all of those things. So new models appeared, more efficient models, methodologies for optimizations of the models. So we had to follow all of those.

We initially made just runtime, then we started working on the optimization tools and optimized models for community. Then eventually we added training-time optimization tools. We added more capabilities for training models and things like that. So we had to adapt our framework. And I have to say, initially we started with computer vision, but then huge explosiveness happened in the NLP space, text-processing space. So we had to change quite a lot in how we processed, how we processed the inferences in our APIs. So we changed that; we changed a lot in our ecosystem to support those use cases.

So, like about a year ago we have released OpenVINO 22.0, 22.1, actually that time with the new API. And that was because we wanted to unlock the NLP audio space and all of those scenarios that we were supporting not very efficiently. So now we are seeing the evolution of, as I mentioned, generative AI, image generation, video generation. So we adapt to those as well.

And I would say in parallel, so this is the functional track, right? But then in parallel, as you mentioned, Intel evolves a lot. Intel produces new generation after generation. As we introduced discrete GPU we are evolving our integrated GPU family and client space, that data center space—all of those families were evolving. So we had to keep up with the performance that were—that is capable of providing—that those platforms can provide. So all those technologies like VNNI; now there is a Sapphire Rapids with AMX, discrete GPU with systolic arrays—all of those things, we had to enable them through our software.

So we worked a lot with the partners; we worked a lot across the teams to power those technologies to always stay best performing framework on Intel. So we—if you take a platform that you had in the laptop when we started, and if you look at this now, I would say probably the clients now in terms of AI could be compared somehow to the data center processors when we were starting. So, huge evolution in terms of AI, huge evolution in terms of performance, in terms of optimization methods, and things like quantization—all of those things.

So we were looking how regularly we measure how we evolve generation over generation, and it’s not like 5%, it’s not like 10%; sometimes it’s twice better, three times better than generations before. So it’s very challenging to stay up to date with the latest AI achievements, the technologies that AI is capable of solving, as well as powering all these generations of the hardware that we are dealing with.

Christina Cardoza: Yeah, and one piece of this puzzle that I think is also important to talk about is you’re on the developer side; you’re making things just so much smoother, being able to build these applications, being able to have them run—these high-compute solutions run very easily for businesses. And I think at the same time—like I mentioned the Intel Geti. You have this solution that came out that now makes it easier for developers to work with the business person. Or like Raymond mentioned earlier in the conversation, people that don’t really have programming skills are now able to get into AI and build these models, and then OpenVINO can really carry it through to make it a reality.

Raymond, can you talk a little bit about those two solutions—how they work together, and how you’ve seen developers and business teams improving solutions?

Raymond Lo: So, the way I see development today is more about the—well when I say end-to-end, right? We hear that term a lot. It’s really about, you have a problem statement that you want to solve. Remember my first example? I just want to figure out what my finger’s doing in space just to track the fingertips? That requires some training, right? So, it requires having data about, okay—this is pointing up, or this is doing a camera shot. I was trying to do something fancy, right? So, same for that; we noticed that’s the gap.

So that’s what Geti fills in, where you can provide a set of data that you wanted the algorithm to recognize something, let’s say, it can be defect detection, can be sort of like a classification of a model, of an object. Then that process often, as we said before, that took me many years, right? To understand how it works. But today it’s more like, okay—we provided interface, we provide people the tool, and then also the tool is not just like click-and-drop, but they have those fine tuning parameters. You can really figure out, let’s say, how you want to train it. You can even put it, let’s say, with the dataset, so that every time you train it you can annotate it and also get that—we call it an active-learning approach.

So back then when I do the dataset, I, like, label every one of them by hand. But today we can have—let’s say you start three of them, then the AI will figure out, okay—I’ll give you 10 more we think are most likely the one that you want to highlight. And then after you give them enough example, the AI will figure out the rest of it for you, right? Just think about the whole training effort, right? We take away a lot of those burdens that, seriously, don’t make a lot of sense for me to do that when you can have an algorithm that does it better than me and then we can do it more effectively.

So that’s what Geti is really about, right? Bringing that journey from an idea. Now you have ways and ways to tackle this problem—to getting a model that is deployable on OpenVINO. And that to me is a very new thing that we are putting together. And again, when we look at Geti, the team had the experience building this too. So I really recommended next time you find us and ask them, what’s the journey? Because they spent a lot of years behind this. So we just launched, it doesn’t mean we just started last year, right? So we have, almost like, many years ago they started doing machine learning training. And that, I think, is what the Geti is about, is bringing the people, the group of people today, having that difficulty, getting a model running, to get it running today, and, more importantly, bring it to the ecosystem.

Christina Cardoza: We’re already seeing so many innovative use cases, but I think we’ve only really scratched the surface. You mentioned generative AI is coming out now, and there’s still so much to look forward to. So, I’m curious to hear from both of you, and Yury I’ll start with you—what do you envision for the future of AI? What’s still to come, and how will OpenVINO 2023.0 and beyond continue to evolve along with the needs of the AI community and landscape?

Yury Gorbachev: So, it’s hard to really predict what will happen in a year, what potential scenarios will be possible through the AI, through the models and so forth. But one thing I can say for sure, I think we can be fully confident that all of those scenarios, all of those use cases that we are seeing now with generative AI—this is the image generation, video, text, chatbots,  personal assistants, things like that—those things will all be running on the edge at some point, mostly because there is a desire to have those on the edge.

Like, there is a desire to analyze documents locally. There is a desire to edit documents locally. There is a desire to have a conversation with your own personal assistant without sending your request to the cloud, and having a little bit of a privacy. At the same time do this fast, because doing things on the edge is usually faster than doing them on the cloud—at least more responsive. The way, like, assistant for the voice operating right now, home assistant—most of them are operating on the edge. So we will be seeing that all of those scenarios will be moving to the edge.

And this is where OpenVINO, I think, will play huge role, because we will be try—we will be trying to power them on the regular laptop. We are already doing that. We’ll be trying—we will be continuing to do it. We will be working with our customers as we did before. We’ll be working on those use cases to enable them on the laptop. So you will be able to download Chrome extension, or your favorite browser extension, and it will be running some of those scenarios in a very—with a very good performance.

And initially there might be a situation that performance on the laptops will not be enough. Obviously there will be some trade-offs in terms of what optimizations you will do versus what performance you will reach. But eventually the desire would be so high that laptops will have to adapt. You will see more powerful accelerators integrated right in the clients. And then this would be more or less the same challenge as we went through for the past years. We will need to enable all of this, and we will need to enable them in a manner that it’ll be fast, responsive on the edge. So that’s my prediction, so to say.

Raymond Lo: Yeah. And the way I predict the world, I often try to model it, although, like what Yury says, it’s very hard to model something today because of the speed. But there’s something I can always model; is always like, any time there’s a successful technology that happens in this world, it’s always the adoption curve, right? It just—it’s a simple number, like how many people use it every day. It’s obvious that it’s called a bound-to-happen trend. Bound to happen means everyone will understand what this is. We understand the value; they know how to get there, and then they are scaled to it.

And that’s—I think in this release, 2023, marks the part where I see scale, right? We hit a million downloads—thank you again, Yury, one million downloads. That is a very important number. Adoption, right? Then we hit, let’s say, this number of units sold with like all these things. It represented that the market is adopting this, rather than something that is great to have and then no one revisiting, right? The revisiting rate, the adoption rate.

I can tell you, from a year from today, I got to put it in there, we will have better AI. Is it almost sure, but it’s bound to happen, right? We’ll not have a degraded AI; we’ll have a better AI. Some tool may just degrade because, if you look at some of the phones, it’s like, it’s a square box now, what way can you make the phone better, right? It’s a square, right? It’s a physical, it’s a square, right? What can you make right? Can you make it foldable? Yes, that’s what you can do. But for AI the possibility is, we did it from the software, the thinking. So that’s what we think is quite exciting.

Christina Cardoza: Yeah, absolutely. And this has been a great conversation, guys. Unfortunately we are running out of time, but before we go, I just want to throw it back to you guys. We’ve been talking about OpenVINO and this 2023.0 release and the five year anniversary. Yury, I’d love some final thoughts from you about what this five-year anniversary of the toolkit really means. And you touched upon it a little bit earlier, but if you can expand a little bit on what should developers be excited about in this new release, and what they have to look forward to.

Yury Gorbachev: Yeah. So, the release that we are making, there are continuous improvements in terms of performance. As I mentioned, we are working on generative AI, we’re improving generative-AI performance on multiple platforms. But most noticeably we are starting to support dynamic shapes on GPU. This is the huge work on the compilator that we have done. Huge work on the background. And there will be—it’ll be possible to run quite a lot of text-processing scenarios on the GPU, which includes integrated GPU and discrete GPU.

There is still some work that we need to do in terms of improving performance and things like that, but in general those things were not entirely possible before. So now it’ll be possible. We’re looking at capabilities like chats, and they will be running on even integrated GPU, I think.

Second major thing I would like to highlight is we are moving—we are streamlining a little bit our quantization and our model-optimization experience. We are making one tool that does everything, and it does this through the Python API, which is more like a data science–person friendly, a little bit, regular environment for working with the models. So those things are really important, and obviously we are continuing to add new platforms; we’re continuing to add improvements in terms of performance, things like that.

So, and then one feature I would probably say a little bit as a preview or as experimental because we like to get some feedback is we are starting to support PyTorch models. We are starting to be able to convert PyTorch models directly. So there is still some work that we are doing on this, but we are very excited on the work that we have done already. So, it’s not probably to all degrees production-ready functionality, but the team is very excited about this.

And what I would say is that we would be very happy to hear feedback from our developers to hear what they think, to hear what we can improve. Because we did a lot of work, we did a lot of notebooks, we did a lot of coding to make those things happen. And I know Raymond is talking about those notebooks all the time. So he will probably be the best person to talk about it.

Raymond Lo: Yeah. Just, like, don’t trust me, trust the notebooks, trust the examples, right? Because 70-plus examples, you can try from Stable Diffusion, GPT—all of those example that will run on your laptop. Again, high-end laptops for the high-end models, okay? So if you want to upgrade your laptop, it’s the best time now. So, those notebooks will give you the hands-on experience, and that’s where we can communicate. Again, to look for it. It’s called OpenVINO, O-P-E-N-V-I-N-O. Notebooks, okay, notebooks. When you Google that, you’ll find it, and then you find all the great examples that our engineers develop, and that’s where we can, you can get help.

Christina Cardoza: Great. Well I just want to thank you both again for joining us today and for the insightful conversation. I know there’s so many different areas of AI that we could continue to talk about, but I look forward to seeing how else OpenVINO continues to evolve, and we’ll make sure to link in this episode access to learn more about OpenVINO 2023.0, and how you guys can make the switch. But, until then, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

IoT Technologies Power Sustainable Operations

Environmental sustainability is one of the most urgent topics today. If we want to stabilize the earth’s climate, protect our ecosystems, and preserve natural resources for the future, society must take the move to sustainable operations seriously.

There’s pressure on all fronts. Regulatory compliance, financial markets, company brand, cost control, and even acquisition of talent are creating this sense of urgency. That’s why you see aggressive goals for businesses, cities, educational institutions, and others to reach net-zero operations in the coming years. Organizations of all types and sizes are building eco-conscious initiatives at the core of their operating model.

But it’s a steep path in getting there, and IoT technology offers a step forward. Organizations are transforming their operations through innovations in AI, machine learning, and computer vision platforms—increasing agility, productivity, and profitability. And by doing so, they are also paving the way to carbon-neutral operations.

The Growing Business Case for Sustainable Operations

When businesses move to more sustainable operations, it’s not just the environment that benefits but the economic payback and new business opportunities that come along with these changes. The manufacturing segment is one case in point. In addition to addressing the climate change challenge, optimized use of resources like energy and raw materials can lead to cost savings.

Optimizing operations with industrial #IoT solutions can improve machine performance and #PredictiveMaintenance, which not only cuts costs but reduces their environmental impact. @Inteliot via @insightdottech

To be competitive, manufacturers must adopt edge AI and CV for use cases like real-time QC and asset maintenance on the factory floor. Optimizing operations with industrial IoT solutions can improve machine performance and predictive maintenance, which not only cuts costs but reduces their environmental impact—enabling manufacturers to meet their sustainability goals.

For example, working with industrial AI company BirminD, a cement furnace company optimized temperature control—reducing the company’s coal usage by 7%—a figure that equates to 500,000 fewer parts per million of CO2 pollutants in the atmosphere. It achieved these results by installing AI software in the factory machines, an implementation that demonstrates the positive global impact technology can provide.

Another illustration is global manufacturer Foxconn, which set an aggressive goal to reduce carbon emissions and comply with local environmental regulations. The company worked with Advantech, a manufacturer of edge computing solutions, to manage energy optimization through smart sensors, power meters, and an always-on data collection system throughout one of its facilities. With new visibility into energy use, Foxconn not only developed capacity forecasting plans but also saw immediate improvements in energy efficiency, with average cost savings of up to 13%.

Modernizing the Electric Grid Improves Environmental Sustainability

But the problem goes beyond businesses, reaching households across the globe. The impact of climate change and the growing demand for electricity is increasing the challenges for utilities to keep the lights on while also focusing on sustainability, energy efficiency, and decarbonization. Companies that modernize the grid and their delivery of energy services will be at the forefront of revenue generation. But they face multiple obstacles that require electric utilities to rethink how they design, manage, and maintain the power grid.

To start, carbon-neutral energy sources like solar, wind, and battery storage are driving change in the electricity distribution model. We’re moving from a top-down, one-way flow of power to a highly distributed network, with generation on the customer side of the grid. This two-way distributed power requires a level of edge compute that supports making real-time decisions—enabled by AI and machine learning tools.

Building an intelligent edge—starting at the substation where the data lives—and normalizing that data enables greater visibility and insights for faster decision-making. Ultimately, building a data-driven grid allows utilities to maximize the use of renewable energy. The route to achieving this is a software-oriented approach—shifting from hardware- to software-centric, and going from model-based toward data-based, from fixed systems to more agile, scalable, and reliable systems.

Adding more intelligence and more operational capabilities can turn that data into insight, and ultimately improve the reliability and resiliency of the grid.

The Potential of Sustainable Buildings

Beyond changing how the power grid is designed, managed, and maintained, building owners and managers can do their own part to reduce their energy usage.

For instance, the World Economic Forum reports that buildings use 40% of global energy and emit 33% of greenhouse gases. And with more people working from home than ever, some of these buildings still use as much power as before even though they may be at half capacity. As such, businesses are transforming their buildings to meet net-zero goals, reduce operating costs, increase efficiencies, and create the optimal environment for a hybrid workforce. Advanced data analytics and AI-powered insights can help achieve these goals.

“Building technology of course has been around for a while, and buildings are already well instrumented. What’s needed is to consolidate all those different workloads onto a common platform, then to look for those insights to drive energy efficiency, lower carbon footprint, and be more energy resilient,” says Michael Bates, Intel Global Sales GM, Energy and Sustainability.

To start, building operators must collect and preprocess data from a diverse set of incompatible systems—from HVAC to lighting, water to air conditioning. Analyzing this data shows how best to run the building given the equipment and systems in place. Pulling it all together reveals usage patterns and provides the basis for an optimization plan.
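
Consolidating those workloads typically starts with normalizing each system’s readings into a common schema so they can be analyzed side by side. A minimal sketch of that step follows; the payload formats and field names are hypothetical.

```python
# Illustrative normalization of heterogeneous building telemetry into one schema.
# The raw payload formats below are hypothetical examples.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Reading:
    source: str        # "hvac", "lighting", "water", ...
    metric: str        # "power_kw", "flow_lpm", ...
    value: float
    timestamp: datetime

def from_hvac(raw: dict) -> Reading:
    # e.g., {"ts": 1690000000, "kw": 42.5}
    return Reading("hvac", "power_kw", raw["kw"],
                   datetime.fromtimestamp(raw["ts"], tz=timezone.utc))

def from_lighting(raw: dict) -> Reading:
    # e.g., {"time": "2023-07-01T12:00:00+00:00", "watts": 1200}
    return Reading("lighting", "power_kw", raw["watts"] / 1000.0,
                   datetime.fromisoformat(raw["time"]))
```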

Supermarkets are a great example of the smart building payback. Grocery stores are filled with energy-hungry systems such as refrigerators, generators, bakery ovens, and heating systems. One example is Sainsbury’s—the second-largest supermarket chain in the UK—which has a goal to reach net zero by 2040.

Working with Hark Systems, a provider of energy analytics and IoT solutions, the grocer implemented the Hark Platform on 20,000 assets, including lighting and refrigeration. The system retrieves more than 2 million readings per day, detecting anomalies and sending out alerts of potential equipment issues—resulting in energy savings and lower costs. And when energy prices spike in the winter, a preset notification from the utility provider comes into the system and automatically orchestrates a profile change, reducing the building’s load.
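
At that volume, the core mechanic is straightforward: compare each reading against its recent statistical band and react when it drifts. The sketch below is a generic illustration of that idea, not the Hark Platform’s API; the alert and profile hooks are hypothetical.

```python
# Generic anomaly check over meter readings (illustrative thresholds and hooks).
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, k: float = 3.0) -> bool:
    """Flag a reading more than k standard deviations from the recent mean."""
    if len(history) < 30:
        return False                   # not enough data to judge yet
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > k * sigma

def send_alert(asset_id: str, value: float) -> None:
    print(f"ALERT {asset_id}: reading {value} outside expected band")

def apply_profile(name: str) -> None:
    print(f"Switching building load profile to '{name}'")  # e.g., on a price spike

def on_reading(asset_id: str, history: list[float], latest: float) -> None:
    if is_anomalous(history, latest):
        send_alert(asset_id, latest)
```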

In the future you can see how businesses like Sainsbury’s will be able to become microgrids. Generating their own power and selling it back to the grid effectively means carbon-free power. Much of the technology needed for this to happen—solar panels, energy storage units, and platforms like Hark—already exists. This is one path to sustainability where smart buildings can pave the way.

A Vision for Today and the Future

Sustainability is a global imperative, and the adoption of innovative technologies is an essential component to navigating the ascent to a net-zero world. There’s a real opportunity to do something here. It’s not theoretical; it’s not prohibitive; it’s needed. And the appetite is high for IoT solutions that help make it possible.

 

Edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Radiology Assistant Helps Underserved Communities

Statistics indicate there’s only one radiologist for every 100,000 people in developing countries, which makes capturing and analyzing X-ray images a major bottleneck in diagnostic healthcare. And it’s not just a shortage of professionals at issue: The gap in infrastructure is also a problem. In a well-equipped US hospital, a radiologist might analyze 200 or more X-rays a day; in an underfunded rural Indian hospital with less sophisticated equipment, analyzing 100 X-rays a day is difficult.

And it’s not just X-rays that radiologists must look at. They constantly have to make tough decisions on how to split their time between modalities such as X-ray, CT, and MRI, especially when they are asked to prioritize CT and MRI scans. All of this combined can create quite the backlog of unexamined X-rays.

To address these issues, radiology advancements such as AI-based clinical decision support (CDS) tools are emerging to help radiologists diagnose X-rays more quickly without compromising quality.

The Benefits of Clinical Decision Support

As the name indicates, a clinical decision support tool is designed to help clinicians analyze images and make decisions. These tools can take many forms, such as rule-based systems, mapping-based systems, and productivity or automation systems.
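
A rule-based system, the simplest of these forms, encodes clinician-defined logic directly over structured findings. The toy example below illustrates the idea only; the thresholds and field names are invented for illustration, not clinically validated rules.

```python
# Toy rule-based CDS check over structured findings (illustrative rules only).
def flag_for_review(findings: dict) -> list[str]:
    flags = []
    if findings.get("opacity_area_pct", 0) > 25:
        flags.append("extensive opacity: prioritize radiologist review")
    if findings.get("cardiothoracic_ratio", 0) > 0.5:
        flags.append("possible cardiomegaly")
    return flags

# Example: flag_for_review({"opacity_area_pct": 30}) returns
# ["extensive opacity: prioritize radiologist review"]
```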

Over the past decade, AI-based CDS tools have risen to prominence in virtually every field of medicine that could benefit from the automated analysis of electronic health records (EHRs) and other clinical data. This rapid growth has been driven in part by the reduced cost of using AI to review patient data, as well as new regulatory guidelines from authorities like the FDA that are smoothing the path to adoption for CDS broadly and for AI in particular.

But although the costs of AI-assisted imaging have come down dramatically over the past decade, the technology has remained out of reach for poorer regions. Part of the problem is that AI radiology solutions have focused on specific diagnoses such as tuberculosis or cystic fibrosis. To have a full diagnostic suite, a clinic would need multiple AI solutions, driving up costs.

This focus on specific conditions also limits the tools’ ability to save radiologists time—particularly when it comes to X-rays. “When a patient takes a chest X-ray, you don’t know whether he has condition A or condition B,” explains Mukundakumar Chettiar, Head of the Digital Health Initiative within the medical business unit at L&T Technology Services (LTTS). “Chest X-rays are used as a screening tool, so you don’t necessarily know what you are looking for.”

Over the past decade, #AI-based CDS tools have risen to prominence in virtually every field of #medicine that could benefit from the automated analysis of electronic #health records (EHRs) and other clinical #data. @LnTTechservices via @insightdottech

The Need for General-Purpose Systems

LTTS developed Chest-rAI, a general-purpose X-ray CDS tool that aims to provide a more holistic approach to AI-assisted imaging. Rather than looking for a particular condition, Chest-rAI examines X-rays for a broad spectrum of abnormalities and potential biomarkers. The tool covers more than 85% of diagnoses encountered at a medical institution and has an accuracy rate of over 92%.

To reach these numbers, Chest-rAI leverages a novel deep learning architecture called Convolution Attention-based sentence Reconstruction and Scoring (CARES). CARES extracts features from radiological images and generates grammatically and clinically correct reports describing its findings, according to Chettiar. Chest-rAI also uses a unique scoring mechanism called the Radiological Finding Quality Index to evaluate the exact radiological findings, localize them, and determine the size/severity for each term present in the report.

In addition, Intel® AI Analytics and OpenVINO toolkits are used to optimize the inference pipeline and reduce analysis turnaround time from eight weeks in most cases to as little as two weeks—and radiologists can access the reports remotely using a web-based interface. The Intel® Extension for PyTorch (IPEX) is also used to optimize performance. This combination of automated reporting, quick turnaround, and remote access dramatically improves radiologists’ ability to meet the needs of underserved populations.

“Using the Intel toolkits helped our team speed up inference by 1.84 times and cut turnaround time by 75%,” says Nandish S., AI Engineer at LTTS. “And it helped reduce the model size by nearly 40%.”
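
To give a flavor of what that optimization step can look like, the following is a minimal sketch of preparing a PyTorch vision model for CPU inference with IPEX. The ResNet-50 stand-in, the untrained weights, and the input shape are illustrative assumptions, not details of LTTS’ CARES architecture:

import torch
import intel_extension_for_pytorch as ipex
from torchvision.models import resnet50

# Stand-in model; a real deployment would load trained weights.
model = resnet50(weights=None)
model.eval()

# IPEX applies operator fusion and memory-layout optimizations for Intel CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

# One inference pass on a dummy image-sized input under bfloat16 autocast.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    logits = model(x)
print(logits.shape)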

Because it is highly optimized, Chest-rAI can be deployed in many forms: in the cloud, in on-premises solutions, or at the edge as an embedded solution. This gives hospitals the flexibility to adopt the solution however it best suits their budget and existing infrastructure.

Chest-rAI CDS easily integrates with existing hospital systems and can be used as a standalone application or part of a larger system. The integration process is designed for simplicity, allowing the CDS to get up and running in a matter of days when being tied into existing hospital systems like Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS).

A Smarter, More Affordable Radiology Solution

Over the past decade, AI-based tools have transformed many fields, driving better outcomes in applications such as breast cancer screening, diabetic retinopathies, classification of skin lesions, prediction of septic shock, and more.

Despite these radiology advancements, radiologist workloads have become a bottleneck to the healthcare system, especially in developing countries. Existing tools have been too narrowly focused to meet the needs of broad screening modalities like X-rays. With the emergence of more general-purpose tools like LTTS’ Chest-rAI, radiologists now have a tool that not only saves them time but also allows them to serve a larger population—just what’s needed in many rural hospitals.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

How Smart Factories are Revolutionizing the Industrial Space

Get ready for the next phase of smart manufacturing, where a plethora of exciting advancements and transformative changes are expected to take center stage. We’ve already seen real-time analytics provide deep insight into operations and production, AI enhance worker safety and defect detection, and autonomous mobile robots introduced on the factory floor. But what about robots building robots, or collaborative robots working alongside human workers on the production line?

These are just a few demonstrations presented at the latest Hannover Messe conference in April, and there’s still so much more to look forward to.

Listen Here

[podcast player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

In this episode of the IoT Chat, we learn more about opportunities for industrial digitization, tools and technologies enabling manufacturing innovations, and obstacles the industrial space needs to overcome to achieve them.

Our Guest: Intel

Our guests this episode are Ricky Watts, Industrial Solutions Director at Intel, and Teo Pham, a digital trends expert. In his role, Ricky focuses on how he can apply Intel technologies and architectures in industrial environments and make them ready for customers.

Teo is the founder of an online business school for digital skills, Delta School, as well as host of a digital trends podcast.

Podcast Topics

Ricky and Teo answer our questions about:

  • (2:23) Where the manufacturing space is headed
  • (8:03) Upcoming industrial AI applications we can expect
  • (12:21) How manufacturers can take advantage of new opportunities
  • (14:58) Determining the need for CPUs or GPUs
  • (21:49) Benefits of moving manufacturing to the edge
  • (26:50) The role of cloud in smart manufacturing

Related Content

To learn more about smart manufacturing, read Hannover Messe 2023: The Next Phase of Smart Manufacturing. For the latest innovations from Intel, follow them on Twitter and LinkedIn, and follow Teo on Twitter at @teoAI_.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about IoT advancements in manufacturing with Ricky Watts from Intel, and Teo Pham, an expert in digital trends. But before we jump into it, let’s get to know our guests a bit more. Ricky, I’ll start with you. Tell us more about yourself and what you do at Intel.

Ricky Watts: Thanks, Christina, and welcome everybody. Yeah, Ricky Watts. I work for Intel. I am in the Federal and Industrial business unit. My role, to put it simply, is looking at Intel technologies and architectures and how they apply in industrial environments—the industrial market segments, whether that’s energy, manufacturing, or process industry. What are we doing with our silicon platforms, our software-enablement platforms, to bring new and exciting innovations that I think we’re all starting to see in the industrial environment we’re going to talk about today? How do we make them real for our customers? And obviously for those people that are producing goods in the factories today, that’s my role.

Christina Cardoza: Great. And Teo, I introduced you as an expert in digital trends, but for those of our listeners who haven’t been following you, tell us more about yourself and what that means.

Teo Pham: So, I used to be a startup founder and a professor of digital marketing. And in the past couple of years I’ve been hosting a German podcast about business and technology. And so I’m always very curious about the latest digital trends. And so in the past two years I was very curious about things like blockchain, about Web3, and also NFTs. But I think in the last six to twelve months I’ve been mainly giving talks and trainings about artificial intelligence. And I think this is also going to be one of the topics of today’s podcast.

Christina Cardoza: Yeah, absolutely. Like I said, we’re going to be talking about IoT advancements and overall technology advancements happening in the manufacturing space. And I think this is a good time to have both of you joining us today, because Ricky and Teo you guys were both just at the Hannover Messe Conference in April, and there were a lot of new things going on. Ricky, I saw you do a lot of interviews with a lot of different Intel partners out there, so I wanted to hear a little bit more about where you see this manufacturing space headed, based on what you saw at the event and just other things that you’re seeing in the industry.

Ricky Watts: Yeah, good question Christina. And, yeah, I think when I went to Hannover Messe this year, I was six foot two and I came back as six foot. So, I certainly covered a lot of area at the event; it was very exciting. We’re all, I suppose, forgetting it’s post-Covid, so manufacturing was back in force at the event. I think there were up to about 3,000 exhibits at the event.

What did I see that was exciting? So many things, starting with just the excitement of the people, the interaction between the various people that were there to visit the event and learn from each other about what they’re doing. In terms of technology, I think two areas that kind of absolutely really excite me, and I suppose to some extent concerned me a little bit, I will say, was the rise of AI, particularly around ChatGPT, and how AI and ChatGPT are being integrated into traditional manufacturing to some extent—what is the impact of that and how that’s going to be implemented in the manufacturing execution systems that we have out there today in new technologies that are coming out.

And I saw various booths with some of the larger companies in this space, how they’re using that technology already to integrate it and drive outcomes from the data and drive different things that they want to do in manufacturing to be more efficient, etc. So I thought that was particularly exciting and, to be honest with you, I was really surprised to see how much of it was out there and how advanced it was. I think that was a bit of a surprise to me, I’m going to be very honest with you.

I think—the other thing I think that interests me as well is a couple of things. This thing called 3D reality, omniverse, all this metaverse and things like that—I did manage to see a few examples of how immersive technology is going to be used in the future. Manufacturing industrial is a very complex environment, very complicated in terms of how you operate it around OT principles and IT principles and all the things that I’m sure me and Teo look at regularly. But using that type of immersive technology to really make it much more accessible, and how do I design and build a factory to change the outcomes—can I do that in a 3D virtual reality, which is very visual, and then use ChatGPT and AI to work in that environment to create digital twins that can create obviously what we call physical realities in the manufacturing as well? So that was pretty exciting.

And the last one I’ll mention before I hand it over to Teo, which made me smile, was that there were a lot of robotics at the show. As always, robotics is everywhere in manufacturing, for good reasons of course—the logistics and the repetitive tasks that we often see in manufacturing. But one thing particularly interesting to me was robots—building robots to drive outcomes. I thought that was really interesting that a robot is given a task, and another robot builds the robot to drive the outcome of that task. So, I don’t know—let’s say I’m working on a PCB, or the manufacturing of an electronic board: I design the robot, the robot builds the robot to do that task. What was particularly interesting then was using AI for the robot to learn what it needed to do to send off a command to build or design a new tool for the robot. So the robot builds the robot, which builds the robot, which is linked by AI.

So I thought that was fascinating to watch—effectively robots not only being used to do and drive a task, but even the way that they were doing that task using artificial intelligence to design a better robot. I’ll use the word “real time”—as the process is ongoing, it’s optimizing the robot. So there are three things that I thought were exciting. In addition, there were so many other things, but I’ll let Teo jump in. He’s probably got some of his own insights.

Teo Pham: So, it was my first time at Hannover Messe, and to be honest I was very surprised and also amazed at the variety of topics and also participants at this exhibition. Because when you go to a manufacturing trade show, you expect obviously robots, you expect hardware manufacturers, you expect semiconductor manufacturers. But then you also had all these software companies, you had consultancies like PCG, you had cloud-service providers like Amazon Web Services. And I think it just goes to show how varied this whole space is, and that it’s a lot more than just physical devices, but it’s fully integrated with software, with artificial intelligence, with the cloud. And I think this is also how you can really create these new exciting applications.

And, as Ricky said, there was lots of talk about artificial intelligence—we’ll get to that in a second—but also what you mentioned about the metaverse. So, I saw companies, let’s say like Siemens or Microsoft, that were promoting things like the industrial metaverse. So, creating new technologies that make production more immersive, but also a lot cheaper in the sense that you can create these digital twins that allow you to run these amazing simulations so you can really test out things in the digital space before even needing to create them as a physical unit. And so I thought that was pretty amazing. So, Ricky, you already mentioned that there was so much talk about AI at Hannover Messe, but which AI applications in particular got you really excited when it comes to manufacturing?

Ricky Watts: Yeah, Teo, I think I’ll talk about one in particular in the use of AI and ChatGPT. In the world of manufacturing we have these things called manufacturing execution systems—MES—and programmable logic controllers. Basically it’s a device or an appliance that basically runs the manufacturing. So if I’m building something there’s a bunch of machines that work together to create an outcome. They’re the builders, if you like, the building blocks.

Now these PLCs, they have a language that operates and runs with them, it’s called 61131. It’s a PLC code—a way that you actually design and run that PLC. And I think one demo that I saw was ChatGPT being used to build that code. That is typically a manufacturing engineer or somebody in that advanced systems integrator that’s writing that object code to create that outcome which they then apply into the manufacturing. It’s the bit that controls the machines, as I mentioned—this is now ChatGPT building that code. So something that might typically take an engineer to build that type of code and build it out could take weeks, months to do, ChatGPT was doing it in, I’m going to say minutes, seconds.

Now, I was excited to see that as an application for me, because that’s something in—when we look at manufacturing, one of the big issues that’s been as we’re moving towards this digital transformation world is the skills that are involved in the digital transformation of manufacturing are different to the ones that have been there in what we would call the mechanical, if you look at the previous industrial revolution. So you’ve got lots of very experienced engineers that are starting to come out of the workforce; you need to replace them with new people coming in who broadly come from a different type of perspective. They want to be much more involved in technology.

So when you see ChatGPT being used in this environment there’s a couple of benefits. One is obviously the speed and the rapidity of being able to do that, but it also helps them address some of the code issues. So I was excited to see that. I’m going to stress that it’s early, what it’s doing. But the potential of that technology in AI to be implemented—and link back to what you said, Teo, about omniverse and things like that—the reality is manufacturing is very much a structured environment driven around a set of standards, as we know. So, but as we’re starting to go into this kind of new world, the ability to be able to do that with AI and link that to some of the integrated engineers I thought was really exciting.

As I say, very early use cases: they were at pains to point out that there were some mistakes in the code, but I thought that given where AI is going and the use of that code, it will not be very long before the accuracy and the ability to deploy that directly to those machines is really going to become relevant. So that was one AI use case that I have to say completely, I was like, “Whoa, wow, this is fantastic.” In manufacturing, Teo, this will have a huge impact in the years to come. And I think when we see next year’s trade show, I think we’re going to even see more advancements in the use of AI in these models.

Christina Cardoza: Yeah, I love hearing all of these examples, especially with the ChatGPT use case, because, like you said, this is just something that’s coming out now, and I think a lot of the use cases there have been more writing articles or more content driven, but to see it actually writing code and that being applied to solutions on the manufacturing floor, that is something really interesting and exciting to look forward to over the next couple of years.

But, like you said, a lot of this stuff is still early days; these are just early examples or applications. So, Teo, I’m wondering, sort of as an outsider looking in, where do you see that the manufacturing opportunities really are now, from what you saw at the event? There was obviously so much going on, it could be very overwhelming. Where do you think manufacturers should be focusing their efforts now, or how well do you think they’re equipped to be taking advantage of some of these new opportunities available to them?

Teo Pham: So, again, coming back to artificial intelligence, I think it just allows you to speed up all of the processes to make them a lot faster, a lot cheaper. Obviously there’s always a lot of talk about ChatGPT—so this is text-based AI—but you also have AI tools that can help you generate images or blueprints, generate videos, even computer code that Ricky mentioned, complete websites or applications. And so I think this is super exciting. So I think the cost of generating an, let’s say, an 80% solution will go down to practically zero. But then obviously you will still need some very experienced people to go from 80% to 100%.

But I think oftentimes it just takes so much time and effort to go from 0 to 80. And so I think having artificial intelligence will speed up a lot of that. And I think there will be some very fancy applications—let’s say like AI 3D modeling and stuff like that. But I think even for fairly boring stuff, like documentation or translation, I think this will be so helpful because you can get all of that stuff within minutes.

And I also like the idea of being able to talk to the machine, to have some kind of dialogue. So, you don’t need to necessarily know everything from the start, but let’s say you want to understand something, or you want to understand some issue that’s going on, you can just start off with a fairly general question and then just dig deeper and have a real conversation, as if you were talking to a technical expert. And I think this is super exciting because it just allows you to go very deep into any type of subject matter, even if initially you’re not necessarily a top expert, but it just allows you to go deeper and deeper and really accelerate your learning.

Christina Cardoza: Yeah, absolutely. And sometimes those boring solutions are the most important solutions or aspects to manufacturing. But it sounds like we’re talking about all of these AIs doing all of these things too. It sounds like manufacturers are going to need to have a lot of hardware or a lot of power in their tool set to be able to make some of this happen. Would you agree?

Teo Pham: Definitely. And this is also something I wanted to ask Ricky. So, obviously there’s lots and lots of data, you need lots of processing power, and I think it would be really useful for our listeners to understand, okay, when do we need CPUs? When do we need GPUs? What’s the difference? What’s the difference between AI model training and inference? And also what solutions does Intel offer in those areas?

Ricky Watts: CPUs and GPUs both have roles in advancing the use of AI, but if we think of manufacturing and we think of AI, AI really relies on data, the abstraction of data, what they call data engineering effectively to get that out. And then effectively what you’re going to do is the learning part, and then you’re going to do the inference. So, and I think what you’ve got is different types of compute platforms and environments. So CPU, GPU or FPGA are always involved in those environments.

Manufacturing’s very interesting. A lot of the early use cases in AI have been really around visual use cases, video use cases. I put a camera into a manufacturing environment to analyze something, and then what I want to do is I want to train a model around those images that are coming up. Let’s say I’ve got a production line and I’ve got something that’s coming out of that production line. I don’t know—let’s say it’s a bottle with a label on it, and I’ve got a camera over that bottle and I want to know, “Hey, is the label on correctly? Is the label on the right way round? Where is it structured in there?” So we create images around that, and then what we would do is we would train models. That training is generally done in a GPU type of environment, because it requires a lot of intensive processing to do that parallel processing around those images.

And then what we do is this thing called inference, which is now I have something that knows what’s good, knows what’s bad, effectively. Okay, then I want to apply that in a manufacturing environment. I can’t keep learning all the time; it’s too difficult. So what I want to do is I want to do this thing called inference. I want to then take images as much as I can as I’m seeing that production line generate those goods and they’re coming out, I’m using the model and I want to apply that. And that’s really where you start to see things like CPUs come into account, because it’s very much tactical at the end. It’s about applying something very, very close to where the manufacturer’s coming in.
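
In code, the edge-inference step Ricky describes can be quite compact. The sketch below compiles a trained model for an Intel CPU with OpenVINO and scores a single camera frame; the model file, input size, and class labels are hypothetical stand-ins, not an Intel reference implementation:

import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
# "label_check.xml" is a placeholder for a defect-detection model trained
# elsewhere (for example, on a GPU) and converted to OpenVINO IR format.
compiled = core.compile_model(core.read_model("label_check.xml"), "CPU")

frame = cv2.imread("bottle.jpg")  # one frame from the production-line camera
blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)[None].astype(np.float32)
scores = compiled(blob)[compiled.output(0)]  # per-class scores
print("label OK" if scores.argmax() == 0 else "label defect")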

So you’ve got CPUs and GPUs, which both have an area where they have some expertise. But what we’re starting to see from an Intel perspective is starting to integrate some of these things. You’ve seen it with some of our new technologies, particularly around the latest Xeon® chip that came out recently, the Sapphire Rapids chip, where we’re starting to integrate packages and capabilities into the silicon. Now of course that chip is being used in the data center. A lot of the training, etc. is done where you’ve got a lot of compute power, which has typically been in cloud environments. The inference in most cases is done where the manufacturing is; you want it done where it’s deployed—so, at the edge.

So we’re now starting to see these compute platforms in these environments kind of go from edge to cloud. So you’ve got CPUs, you’ve got GPUs, you’ve got FPGAs involved in that. The last thing I would say is, from an Intel perspective, is if I start to think of these environments that I’ve got, there’s two sets of data: the video I mentioned, but the one that’s much more pervasive in manufacturing is what we call time series data. The world of manufacturing has what we would call fixed-function appliances: machines, a robot or a conveyor belt or a system that’s doing a press or something like that. That is generating data. It’s not vision data; it’s data that is coming off the machine. It could be heat, it could be pressure, it could be vibration, it could be performance—all of these things.

That type of data, from an Intel perspective and from a manufacturing perspective is much better and optimized to run on CPUs at the edge as well. So you can do the training and the inference at the CPU, at the edge, and where data integrity and data sovereignty is becoming very important. And I know it’s not an area necessarily that we’re talking about today, but a lot of our industrial manufacturers, they have a high consideration on the value of the data that they’re generating and where it gets analyzed and what they’re doing with it as well. So they want to bring that inference and that training right to the edge as well.
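
As a concrete illustration of that time-series case, the sketch below trains and runs a lightweight anomaly detector on machine telemetry entirely on a CPU. The sensor values are synthetic and the model choice is arbitrary; it illustrates the pattern rather than any specific Intel offering:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "healthy" telemetry: vibration (g) and temperature (deg C).
normal = rng.normal(loc=[0.5, 60.0], scale=[0.05, 2.0], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([[0.52, 61.0],   # typical reading
                         [1.40, 95.0]])  # likely fault
print(detector.predict(new_readings))    # 1 = normal, -1 = anomaly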

So, we’re integrating our offerings of CPUs and GPUs. We’ve got a new portfolio of GPUs coming out—I’m sure you’ve read a lot about that too. It’s early days for Intel in that space. But we are learning fast, and we’ve got more products coming out over the next few years. On our CPU side I mentioned Sapphire Rapids. We’ve got some new products coming out; they’re going to integrate even more AI capability down there at the edge. So I think, for us, it’s integrating the hardware solutions, and then on top of that providing a uniform architecture for people and developers in the AI space to access those technologies.

And you may have heard of this thing called oneAPI. Basically it’s an ability for developers using whatever code—Caffe, Python, all of those things that they are doing—how do they get access into that? How do they work within that data set? That’s where something like the OpenVINO toolkit comes in. So, we’ve built a number of toolkits and optimizations on those integrated packages. So whether you are a developer who’s working in the environment from an OEM or systems integrator or indeed even with some of the large manufacturers who may have data scientists, we want to make our silicon accessible and easy for those developers.

So, irrespective of being CPU, GPU, or FPGA, we optimize underneath; you tell us how you want to run what the workloads are, and then we’ll deploy those workloads into the right silicon platform at the edge and then provide a uniform capability to take that to the cloud as well. So that’s kind of what we’re up to. Sorry—long-winded answer, Christina and Teo. Because it’s a deep question.

Christina Cardoza: No, absolutely. And there’s a lot going on. I think that ecosystem was certainly on display at Hannover Messe. You gave that, “If you had a bottle and a label on it,” example, but I saw that exact demonstration from Dell Technologies at the event. Those robots that we were talking about, collaborative robots working with robots, VMware was showcasing that too. And NexCOBOT was actually showcasing how you’re controlling robots and using AI and all of these things, all powered by Intel processors and Intel technology.

And, to one of the earlier points you made, this was one of the first conferences really out of Covid. And so talking about all of these edge use cases, this is one of the first times the industry was able to get together and start talking about, not only talking about edge, but showcasing how this is really possible in an industrial environment.

So I’m just curious, going further with this edge conversation, what are the benefits that you’re seeing the manufacturers really gain from now moving to the edge? And you sort of talked about this, but expand a little bit on how they play on some of the trends that you’re seeing where the space is going.

Ricky Watts: Again, Christina, it’s a good question. Look, manufacturing is a very competitive business. There are manufacturers all over the world that are creating goods, and there’s a lot of competition out there. So the use of data and the use of these things in these environments can often give a very competitive advantage to an end manufacturer.

And I meet manufacturers of all different sizes. We are very aware of some very large manufacturers. Typically, in the markets that I cover, the auto manufacturers are discrete—in which “discrete” is about making things, as I call it, so, physical items, very much so. Whereas process manufacturing is about mixing things—chemical manufacturers such as BASF, etc. So, between those two environments what they want to do is they’re in a competitive environment. So if they can implement technology to give them a competitive advantage, to change an outcome, to improve a process, that in itself will give them an advantage that they can then pass on to their customer, or they can improve their margins, or they can do many different things in that.

So when I look at some of the things that manufacturers are doing with this technology today, it’s really can they apply that technology in a business-driven outcome? It’s very easy in our environment, particularly with what I do, to forget at the end of the day it’s not about the technology, it’s about what the outcome is. What is the benefit to a manufacturer? And I’ll give you some statistics that I’ve seen recently, which are really interesting, which is over 80% of use cases that are using data fail, because it’s very hard to deploy these things and deploy with an outcome that a manufacturer says, “Well I’m doing this thing, but I’ve not got the benefit that I expected.”

And so what we need to do in the technology industry is make it easier for those people to consume. You mentioned VMware and Dell, some of the things that we showed actively at the event—VMware, the robotics. So it goes back to what I said earlier: manufacturers want to use this technology. They’re not experts in AI, they don’t have data scientists, they don’t have people like me and Teo to kind of help them out. They’re really—they want something that’s simple, that they can use.

So what we are trying to do within that ecosystem is give them something that really is—to coin a phrase as much as possible—the easy button. And, again, that’s difficult in a sense, and still there’s the complexity, but our partnership is about bringing that technology to them, making sure that we’ve built the relationship with Dell and VMware. If you have these architectures you can deploy this. It’s relatively simple to go in. And then making sure that those optimizations for the use of AI—whether it’s video data or time series data—we’ve made that much more seamless, so that when somebody implements that technology they can access the benefits of that much quicker.

And that’s really where we’ve been spending a lot of our time as well, with that ecosystem approach, which is, I think everybody knows the advantages of AI. I don’t think you need to convince anybody. What we really need to get to is how can somebody implement it and truly get a benefit from what you’re trying to do? Do they improve the outcome?

You go back to my bottle example, Christina. If I’m putting through a hundred thousand bottles a day, and let’s say as an example 5% of it is inaccurate, okay? So I might be throwing 5,000 bottles away a day. That’s a sustainability issue, that’s a profitability issue. If I can reduce that failure rate to 1%, that has a massive impact on the performance of that bottling factory. He has less goods being returned, he’s improved his sustainability, and of course his profitability comes up. So we’ve got to make that as easy as we can to consume, and we’ve got to make sure that that technology is accessible for everybody in manufacturing, not just large-scale manufacturers that have huge departments of engineers and data scientists. So that’s kind of where we’re at. And I think you are starting to see a lot of that progress coming up.

Teo Pham: When we talk about the implementation of AI, I think one of the decisions we have to make is about whether to do it with edge computing or cloud computing. And I wanted to get your thoughts on that, because obviously there are some advantages to edge computing: it reduces latency. Also in terms of data privacy—we don’t have to send it to a cloud. So these are very obvious advantages. On the other hand, we need to invest more in hardware. This hardware could be costly, it takes up some space, it generates some heat. So what are your thoughts on edge versus cloud, and what is Intel’s approach on this topic?

Ricky Watts: Let’s take it holistically. I think it’s both. There are distinct advantages with both scenarios. The cloud has what I would call cloud-scale compute, elastic compute, but getting the data into that cloud is very expensive. These systems that are generating data in manufacturing can be extensive. The volumes of data are massive. The cost of transporting that data is massive to get it into the cloud, let alone then, of course, some considerations around regulation, data sovereignty and privacy, security, etc.

So as a manufacturer looks at what he’s trying to do, he’s considering a lot of things, he’s looking at what is the use case that I’m doing? What is the outcome that I’m trying to generate? What are some of the considerations that I need to do, and what’s the benefit? I think it’s use-case driven, to your question. There’s a lot of advantages for doing training in the cloud, doing inference at the edge. And then as more and more powerful compute comes down to the edge, not only the inference but also the training can be done at the edge as well, and you’re going to see more and more compute move to the edge, and then that in itself will then connect back through digital twins—things that we talked about earlier—into what we call the manufacturing execution system.

So I see a problem or I want to do something, I need to act on that, and I need to act on that in a very low-latency environment. So in my mind more processing goes to the edge. But I do believe that there is absolutely going to be a very integrated offering between the edge and the cloud for abstractions and things that you might want to do when you get this massive cloud-scale compute.

Christina Cardoza: Yeah, absolutely. And this has been a great conversation, guys. We’ve talked about AI, robotics, edge, and still so much more we could talk about, so many more different areas that we can get into. But unfortunately we are running out of time. So, before we go, I just want to quickly throw it back to you both one last time. Any final thoughts, any key takeaways you want to leave our attendees with today? Where the industry is headed, what needs to happen to get there, and what’s still possible for the future? So, Teo, I’m going to throw it to you first.

Teo Pham: So, some people are saying that currently we are witnessing the iPhone moment of artificial intelligence. What does that mean? Even before the iPhone came out in 2007, obviously we had regular phones, but we also had mobile phones, right? But still the iPhone changed everything. And today we can’t even imagine a world without the iPhone, without smartphones, without mobile apps.

And, similarly, artificial intelligence has been around for 50 or even 60 years. There had been research in AI in the 1950s, but I think currently we are in this kind of virtuous cycle, where a lot of things are just creating this kind of perfect storm. We have lots and lots of data, we have the necessary compute, we have the models, and we have very easy-to-use interfaces like ChatGPT.

And I think each player in those different areas, they’re making so much progress. OpenAI is making so much progress with their models and their user interfaces. Intel is making so much progress with their compute, and I think each of them is contributing so much to this virtuous cycle that maybe even in six to twelve months the whole space might be unrecognizable because there’s just so much progress. I mean, just six months ago no one could even spell out “ChatGPT,” no one knew about it. And today hundreds of millions of people are using it. And so I imagine that even in the fast-moving space of technology, we’re in for a pretty fun ride over the next few months.

Ricky Watts: Well said.

Christina Cardoza: Yeah, absolutely. And I love that example, even beyond the iPhone. You think about when phones first arose, we’d never think that we would be using phones the way we are today, that there are these tiny computers in our pockets taking pictures, things like that. And I think in a couple of years from now, things that we see today, we are saying, “Oh, AI—that’s not really going to take off,” or “This is not really possible.” And in the next couple of years we’ll be laughing at ourselves, saying, “I can’t believe that we were doing it this way.” So, really love the way that you phrased that in that point. Ricky, before we go, any final key thoughts or takeaways from you?

Ricky Watts: Manufacturing has a duty to produce its goods, so technology is moving on. I love the way that Teo said it. So I’m going to say technology’s coming. It is changing. Teo is right: in twelve months’ time we could be talking about something we’ve not even heard of today.  And I think that that is something, but ultimately manufacturing has to continue to produce goods. So what I see is manufacturers, they are focused on the new technologies, but they’re also making sure that the manufacturing environments that they’ve got today are going to be there for the next few years. There’s a lot of operational standards and things that we need to do.

So, here at Intel, we’ll work with our partners, our ecosystem partners, and our customers and the industry as we go through that transformation. But what we want to do is make sure that we keep the lights on, if it’s energy, and that manufacturers keep generating the goods that we need. So we want to make sure that that transition is smooth and integrated, with as little disruption as possible as we go through this industrial transformation.

Christina Cardoza: Absolutely, and to your point, none of these advancements would be possible or will be possible in the future if we don’t have the technology out there that makes it simple for us and easy and smooth, like you mentioned; if it’s complicated, no one’s going to really want to do it or be able to do it.

So, it’s great to hear about all of the technology coming out from Intel, and I know we have a lot more to look forward to. So with that, I just want to thank you both again for the insightful and informative conversation, and thanks to our listeners for tuning in today. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Unified Data Infrastructure = Smart Factory Solutions

Industry 4.0 solutions need to be accessible to manufacturers of all sizes and budgets—but software that’s scalable, cost-effective, and incorporates many functionalities in a single platform can be difficult to find. To get a unified digital system, manufacturers might have to integrate multiple software services and platforms to enable data flow across key areas such as predictive maintenance and process management. But developing integration among unrelated software applications can be a lengthy and costly process—and the results are often imperfect.

The iProd Manufacturing Optimization Platform was designed to solve these challenges and give manufacturers access not only to a unified digital platform but also a business ecosystem where the company shares real-time information with clients and suppliers. The cloud-based, pay-as-you-go solution provides actionable insights from edge devices to support organizations as they automate and manage key business processes. Nine different cloud-based software applications enable optimal systems performance, and with an intuitive, user-friendly interface, manufacturers can:

  • Plan production execution and monitoring
  • Manage assets
  • View and manage KPIs
  • Make purchases and manage supply chain
  • Handle administrative tasks
  • Share drawings and comments in real time

Cloud-Based Software Increases Operational Efficiency

One example of the iProd platform in action is with VHIT Spa, part of the Diesel System Division of the Bosch Group. VHIT Spa operates in the field of hydraulics, developing vacuum pumps for the automotive sector, as well as hydrostatic controls and positive displacement pumps for tractors and earthmoving machinery.

Located in northern Italy, the Bosch VHIT plant is a leading supplier of critical components to major automotive manufacturers globally, including Peugeot, Citroën, Volkswagen, Audi, Porsche, Daimler, Fiat, Iveco, CNH, Jeep, Chrysler, and VM Motor.

Bosch VHIT is a company devoted to innovation. In recent years it has started a process of efficiency by removing non-value-added activities and enhancing work optimization. Corrado La Forgia, Bosch VHIT CEO, explains: “For this reason, the company is on the path to ‘make the machines talk and make them intelligent.’ It was necessary to receive data from machines to get accurate and valuable information—increasing productivity and having visibility into how many pieces are produced in the day, month, or year—providing the right tools to production managers so they can make the right choices to increase production quality and efficiency.”

The company saw an ROI within just a few months of using the iProd platform. Within the first year, the overall equipment effectiveness (OEE) gains on its machines translated into a 10% increase in productivity and a 15% reduction in costs. The platform’s flexibility and ease of use allow integration with other management software and business processes already in use by the company, which limits the costs of installation and data integration management.
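
For readers unfamiliar with the metric, OEE is conventionally the product of availability, performance, and quality. The short calculation below uses made-up numbers for illustration; they are not VHIT’s actual figures:

# Standard OEE calculation with illustrative inputs.
availability = 0.90  # run time / planned production time
performance = 0.95   # actual throughput / ideal throughput
quality = 0.98       # good pieces / total pieces

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")  # 83.8%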

Industry 4.0 Solution with Ease of Deployment

The Manufacturing Optimization Platform consists of a rugged industrial tablet, mobile app, and Intel-powered IoT edge gateway for device interconnection. Potential users of the platform include a wide variety of roles, from property and production managers to engineers, administrators, project managers, and even CEOs.

Using the iProd solution, manufacturers can easily interconnect existing machines, IoT sensors, and modules from different brands in any part of the world. “It’s very simple to connect and get up and running,” says Pier Luigi Zenevre, iProd Co-founder and Chief Marketing Officer. “All our data architecture is unified, making it simple to feed in data coming from different machines.” Information is pre-analyzed and filtered at the edge, then sent securely on to the cloud.

Partnership with Intel plays a key role in the solution’s success. “Connecting with Intel was one of the most important decisions we made,” Zenevre says. “It’s a reference technology for us, and a lot of companies approach us because of that partnership.”

“We’ve created an integrated solution that lets #manufacturers develop their talent pool rather than needing to keep many workers close to machines on the floor.” – Pier Luigi Zenevre, @iProd40 via @insightdottech

Preparing for a Machine-Based Marketplace

The Manufacturing Optimization Platform paves the way for a future where industrial machines will become customers. “We anticipate the arrival of automatic factories, where machines do not wait for human inputs but manage their own tasks,” Zenevre says.

One way they can do that is through the solution’s built-in IoT marketplace, which contains algorithms capable of interacting with machines as well as humans. Purchases can be made in multiple ways: by an employee; by the machine, with confirmation from a person; or by the machine autonomously within a given budget.

For example, a connected sensor might signal that a machine needs a predetermined spare part to continue its production tasks, avoiding machine downtime. The system can reorder raw materials according to future production plans pre-set on the iProd Manufacturing Optimization Platform. And a machine can place an order for those items through the IoT marketplace—all without human intervention.
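
At its core, a machine-initiated purchase like this reduces to a rule with a stock check and a budget check. The sketch below is purely illustrative; the names and logic are hypothetical and do not reflect iProd’s actual API:

from dataclasses import dataclass

@dataclass
class Part:
    sku: str
    price: float

MONTHLY_BUDGET = 500.0  # spending cap set by the manufacturer
spent = 0.0

def machine_reorder(part: Part, stock: int, min_stock: int) -> bool:
    """Reorder autonomously when stock runs low, if the budget allows."""
    global spent
    if stock >= min_stock:
        return False  # nothing to do
    if spent + part.price > MONTHLY_BUDGET:
        return False  # over budget: escalate to a human for confirmation
    spent += part.price
    print(f"Ordered {part.sku} for {part.price:.2f}")
    return True

machine_reorder(Part("filter-42", 79.0), stock=1, min_stock=3)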

“When the machine is triggered according to predetermined rules chosen by the manufacturer, it’s able to make purchases within a given budget,” says Zenevre. “We’ve been called by Gartner the first industrial platform that will be able to implement this ‘Machine Customer’ technology.”

Ultimately, a system such as this one allows manufacturers to maximize and invest in human resource development. “We’ve created an integrated solution that lets manufacturers develop their talent pool rather than needing to keep many workers close to machines on the floor,” Zenevre says. “Our goal is to see a culture of innovation emerge in manufacturing—one where the operator is a key stakeholder and one of the most important sources of information, because they bring their past experiences and can help define the most efficient ways to run the factory floor.”

 

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI Video Analytics Deliver Safety and Security

You can’t be everything everywhere all at once. But an extra pair of eyes helps in a whole range of industry settings—from senior care services to asset management in the utilities sector.

Picture the many ways that analysis of video imagery can proactively prevent brewing trouble: It can alert airport authorities about unattended and unidentified objects in high-traffic areas. It can detect intrusions in sites that host critical infrastructure like nuclear power plants and electric power generators. It can also find unwanted objects in the path of high-speed railcars.

Cameras acting as sensors at the edge capture such abnormal patterns. Artificial intelligence (AI) helps them understand what they see by issuing alerts on a variety of preconfigured events. This enables enterprises to proactively put out fires, save time and money, and even save lives.

AI Video Analytics from Edge to Cloud

Irisity, a software video analytics provider, delivers precisely such solutions to customers in various market segments through its IRIS+ platform, an AI-powered video analytics solution, says Zvika Ashani, Chief Technology Officer of Irisity.

IRIS+ is an IoT solution, where smart devices are deployed at the edge. A small form factor PC serves as the edge device and routes data to a centralized back end. The server runs the IRIS+ software, which is based on AI in the computer vision and video processing domain.

#ComputerVision and #video processing are exactly what a large U.S. electric utilities company was looking for when it deployed the IRIS+ solution. @irisitycorp via @insightdottech

Computer vision and video processing are exactly what a large U.S. electric utilities company was looking for when it deployed the IRIS+ solution. The utility has hundreds of unattended sites across the country, which makes intrusion detection a key concern.

IRIS+ was not the first video analytics solution the company had field-tested but was the one that best suited its requirements. “Our architecture with just a small PC in every remote site is just what they need, and they have a network going back to their data center,” Ashani says. Because of the large number of cameras used, the AI solution had to be accurate so it wouldn’t cry wolf too often, issuing false alerts.

The edge solution also needed to work in wide-open outdoor environments. Implementing IRIS+ has helped the utilities company proactively detect intrusions and decrease deployment of expensive truck rolls. Occasional manual patrols had not helped, as they missed incidents that happened between inspection intervals.

Smart city customers like the utility company can choose to run the IRIS+ solution on-prem as they did, or through a SaaS portal after connecting their cameras. In either case, Ashani says, Irisity ensures privacy of video imagery. “We encrypt any images we store and harden our own cloud solution as well as conduct regular penetration tests to ensure that the system is not vulnerable,” says Ashani. As privacy laws and compliance regulations vary from country to country, Irisity’s distributors and systems integrators work with their customers to ensure that the solution deployment conforms to local data privacy regulations.

Test-Driving Solutions

Irisity test-drives its AI solutions in Intel’s AI Solution Validation Lab, which enables developers to virtually tinker with the hardware and chipset combinations that work best for the problem at hand. “We needed to test various configurations in terms of processor models and memory configurations,” Ashani says. “It would be really expensive to set up in our own lab.” Using Intel’s resources helps cut costs, save time, and deliver the right solution faster. Irisity uses 3rd Gen Intel® Xeon® Scalable processors in cases where the AI solution will handle a high density of camera sensors.

The company also uses the Intel® OpenVINO toolkit, which “helps us get the highest performance from an Intel® processor-based platform with minimal effort from our algorithm developers,” says Ashani. “They focus mostly on collecting the data, training our models, and then getting that model to run in an optimized way on the CPU. OpenVINO decreases time to market and reduces our development costs.”
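
The train-then-optimize workflow Ashani describes maps to a short conversion step in recent OpenVINO releases. In the sketch below, the ONNX file name is a hypothetical stand-in for a model exported by the algorithm team:

import openvino as ov

# Convert the trained model to OpenVINO IR, then compile it for the CPU.
ov_model = ov.convert_model("detector.onnx")
ov.save_model(ov_model, "detector.xml")

compiled = ov.Core().compile_model("detector.xml", "CPU")
print(compiled.inputs)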

The Future of Video Analytics

Expect video analytics software to become increasingly accurate in object detection and in analyzing and understanding behavior and attributes. “All of these things require a lot of computational power and as processor generations move forward and we get more power at the edge, they will directly impact the types of applications we can run,” says Ashani.

The field is also evolving to where data collected from multiple cameras can cover larger swaths of wide-open areas. Smoke and fire detection outdoors is a potential application of such capabilities. A series of unmanned gas stations was seeing fires break out and incurring a lot of damage, Ashani points out. But adding the IRIS+ solution as a software layer on top of existing cameras helped deliver alerts proactively and avert disaster, he says.

Enterprises and businesses of all sizes might not be able to be everywhere all at once. But by leveraging the feeds from a network of cameras and layering intelligence on top of that video data, they can be at the right place at the right time.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Opening the AI Floodgates with No-Code Computer Vision

Computer vision is expected to significantly increase organizations’ ROI while improving workplace safety over the next couple of years. This is because of its ability to boost business productivity, streamline processes, track downtime, lower operational expenses, and monitor compliance with safety regulations across the enterprise.

But bringing these solutions to life is not an easy task. To operate successfully, a computer vision system needs three things: enormous amounts of image data, algorithm selection appropriate to a cost-effective production pipeline, and repeated layers of optimization. To do this work successfully, organizations need in-house expertise, which many lack, and hiring help can be expensive, according to Harshal Trivedi, founder and CEO of no-code computer vision and video analytics platform Tusker AI.

And despite a majority of business leaders believing computer vision can save them time and money, once a skilled team is assembled, it often takes them months to train algorithms for a single use case. Even after all that work, many projects die in proof-of-concept trials, foiled by unforeseen data problems or implementation issues.

Thankfully, companies like Tusker AI are evolving technology to simplify the development process, enabling even people with no coding or AI skills to quickly create, produce, test, and scale effective computer vision solutions. And as smart vision systems become more widespread, faster development could give organizations a significant edge.

DeepTech No-Code Computer Vision

Many organizations already collect thousands of images from CCTV security cameras. To simplify application development, Tusker AI created a tool that uploads these images onto its platform, where companies can click to choose their own data-driven solution model—or several models—and immediately start training algorithms on the images and videos. Then they click another button to test the models and review their accuracy. If a solution looks appealing, they click Deploy.

“Upload, train, deploy—three clicks and you’re good to go,” Trivedi says.

No coding or AI expertise is necessary, which helps businesses save time and money. The Tusker AI video intelligence platform simplifies computer vision by removing overhead and providing an automated engine with a user interface, which helps design, deploy, and understand data sources to meet business needs. In addition to automating and standardizing model design tasks, Tusker AI supports complex queries to improve business alignment, ensure data integrity, and simplify integration.

“The result is faster insights and business impact, fully automated business processes, and the ability to rapidly upskill,” says Trivedi.

Completed AI models can be available in hours, instead of months, freeing developers to focus on other projects. Efficiency is enhanced by the platform’s use of the Intel® Distribution of OpenVINO Toolkit, which contains many building blocks for assembling image recognition and deep learning models.

“Intel OpenVINO speeds development, optimizes performance, and reduces costs,” Trivedi says.

With Tusker AI, companies can also create additional models or add new images without extra charge. Tusker AI’s deep learning models can identify a range of concepts, including objects, incidents, emotions, and predictions for industrial-grade deployments.

“Once the pipeline is there, you can automatically scale. Whether you have one image or millions, the cost per camera is the same,” Trivedi says.

A Kaleidoscope of AI Models

AI models can also be tweaked to accomplish a nearly infinite variety of specific tasks more efficiently. Tusker AI’s vision intelligence platform has made possible the automation of quality control, visual inspection, defect identification, and assembly line optimization.

As #SmartVision systems become more widespread, faster #development could give organizations a significant edge. Tusker AI via @insightdottech

For example, at a large, monthlong religious festival in India, organizers used Tusker AI to help volunteer staff manage the event’s 14 million visitors.

“In the past, there were many management problems. No one knew when to expect the crowds,” Trivedi explains.

Organizers used smart video cameras and developed an AI model for them to notify parking attendants ahead of approaching cars, helping them reroute traffic and guide more than 50,000 vehicles a day into spaces in each lot. Cameras were also installed at entry gates and pavilion entrances, where an AI model counted visitors, helping staff manage crowd flow. Other cameras were placed at food stands, where an algorithm tracked food sales and correlated the information to inventory, ensuring that supplies never fell short.
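
Frame-level people counting of the kind used at the entry gates can be approximated with off-the-shelf tools. The sketch below uses OpenCV’s built-in HOG person detector as a stand-in; it is not Tusker AI’s model, and the image path is hypothetical:

import cv2

# OpenCV ships a pretrained HOG + linear SVM pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("gate_camera.jpg")  # one frame from an entry-gate camera
boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
print(f"People in frame: {len(boxes)}")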

In industrial settings, computer vision models play a key role in workplace safety. For example, at a heavy-machinery manufacturing plant, one or two workers were injured every year after stepping too close to a machine containing a powerful, air-sucking fan—despite a bright yellow line painted on the floor warning them not to cross it when the machine was running.

The plant created an alerting system with Tusker AI, using smart cameras to notify the floor manager whenever a worker steps close to the line. If anyone crosses it while the machine is operating, an audible alarm is sounded. This simple system is preventing serious, costly accidents and work stoppages at the plant, Trivedi explains.

In warehouses and factories, AI models can also be created to warn managers of incipient problems, saving money and improving safety even further. The computer vision algorithms may be added to—or replace—the third-party video monitoring services many companies currently use. These services record events, aiding forensic investigations, but many don’t provide alerts to help companies prevent incidents as they occur, according to Trivedi.

Preventing problems was a high priority for an ecommerce delivery firm bearing the cost of 100 to 300 damaged packages every month. The company used Tusker AI to create monitoring algorithms for its warehouse video cameras to make sure workers were following proper procedures. The system alerts a manager if goods are thrown, loaded improperly, or otherwise mishandled. Since the system was installed, monetary losses and complaints about packaging have decreased by over 50%.

Because video cameras collect large amounts of personal information, Tusker AI uses multiple controls to ensure data privacy. When images are selected for analysis, only their relevant metadata is transferred, processed, and stored. The metadata is encrypted in transit and at rest, and is subject to access controls and regular security audits. Compliance with regulations such as Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) is built into the platform.

The Future: Computer Vision Models Everywhere

The spectrum for secure, easy-to-use computer vision models is broad, but it’s nothing compared to the smart video tsunami to come, Trivedi believes.

“We will be seeing self-driving vehicles, robots, drones, and many new metaverse, virtual reality, and augmented-reality applications. These systems cannot operate without visual analytics,” he says.

Though advanced solutions have major hurdles to clear before going mainstream, many companies are plowing time and money into AI vision research. When those products launch, Tusker AI hopes to help customers customize them without having to code.

“We are planning to build our platform to take automation to the next level,” he says.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

All-in-One Kits Power Machine Vision in Manufacturing

Manufacturers are eager to implement machine vision solutions in the factory. The technology can improve product quality, enhance operational efficiency, lower costs, and help companies scale. But machine vision can be a double-edged sword for the solutions providers and systems integrators (SIs) that serve the industrial sector.

“On the one hand, you have this powerful, sophisticated technology that solves enormously complex problems,” says Patrick Ye, Vice President at Shenzhen Seavo Technology, a specialist in hardware platforms for smart manufacturing. “But it requires development expertise and computing power to implement. And machine vision solutions should be adaptable to multiple use cases and configurable post-deployment.”

That means solutions providers need to source unusually flexible and scalable solutions to remain profitable. Fortunately, all-in-one, ready-to-deploy machine vision kits are now available. Built on the latest hardware and software, these kits deliver powerful, reliable AI processing on a flexible, scalable platform.

The rise of #MachineVision kits is a #technological trend—and like so many developments in #AI, it is driven by stakeholders’ business and operational needs. Shenzhen Seavo Technology Co., Ltd. via @insightdottech

Overcoming Barriers to Machine Vision in Manufacturing

The rise of machine vision kits is a technological trend—and like so many developments in AI, it is driven by stakeholders’ business and operational needs.

Manufacturers, for their part, are attempting to solve thorny technical problems: optical character recognition (OCR) in physically demanding factory environments; complex defect recognition that requires far greater precision than in the past. But the kinds of machine vision solutions that can meet these needs rely on sophisticated AI technologies like deep learning—and that requires unprecedented processing power and extensive development work to implement.

In addition, says Ye, the market is looking for machine vision platforms that are highly flexible and scalable:

“These are solutions for innovative businesses that want to grow. But because of this, these solutions need to be extremely flexible—in the sense of reconfiguration after implementation—if a manufacturer needs to change a process, open a new product line, or scale up. Modular design is also preferred in order to avoid the problem of vendor lock-in.”

The need for sophistication, power, and flexibility also presents a challenge to solutions providers and SIs. Namely, their buyers are hungry for machine vision solutions—but the solutions they want are costly and time-intensive to develop.

Enter the machine vision kit: a modular, scalable, industrial-grade solution that offers a range of configuration options to meet varying requirements for computing power, I/O, form factor, and functionality.

These kits are a win-win for everyone involved. They shorten time to market and lower development costs for solutions providers and SIs—and offer manufacturers a powerful, flexible, and easy-to-deploy machine vision solution.

A Synergy of Software and Hardware Technology

The key to building a platform that addresses multiple needs at the same time is combining best-of-breed hardware and software into a comprehensive solution. To accomplish this, Seavo partnered with Intel to develop its machine vision kit—using Seavo hardware designs along with Intel processors and software development tools.

The kit’s different components and architectural features deliver outsized benefits to users:

  • Intel processors provide reliable, high-performance computer vision processing, from low-power settings to compute-intensive applications like deep learning and edge AI model training.
  • Modular design shortens the development cycle, reduces costs, and simplifies maintenance and upgrades for the end users. Extensive I/O options support high-definition IP cameras and a range of display options for human-computer interaction.
  • Scalable design via a range of Seavo’s kit models, which include various expansion slots to handle data storage modules, networking, motion capture and control, and communication.
  • Intel® Edge Insights for Industrial (Intel® EII) delivers accurate image recognition and enables real-time control and management through rapid image acquisition, processing, and analysis at the edge. Intel EII includes support for edge image and video processing, pre-trained models for analysis, edge video inferencing, AI acceleration, and development toolkits designed for deep learning applications.
  • The Intel® OpenVINO™ toolkit speeds up the development of machine vision applications and enables AI optimization (see the inference sketch after this list).

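To give a sense of how little code basic OpenVINO inference requires, here is a minimal sketch of running a model in OpenVINO IR format. The model file, input shape, and defect-classification framing are placeholders, not part of Seavo’s kit; the API calls follow the standard `openvino` Python package (2023.x and later).

```python
# Minimal sketch of OpenVINO inference for a hypothetical defect classifier.
# Any model converted to OpenVINO IR format follows the same pattern.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("defect_classifier.xml")  # placeholder IR model
compiled = core.compile_model(model, "CPU")       # or "GPU", etc.

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # stand-in camera frame
scores = compiled([frame])[compiled.output(0)]        # run inference
print("predicted class:", int(np.argmax(scores)))
```

The same script can target different Intel accelerators simply by changing the device string passed to compile_model, which is part of what makes a modular kit practical to reconfigure after deployment.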
Ye says that Intel technology is instrumental in bringing Seavo’s solution to market: “Intel’s processors are ideal for machine vision applications, and its AI software development tools are a huge help in speeding development.”

New Business Opportunities for SIs and Solutions Providers

In the coming years, machine vision kits will offer attractive business opportunities to SIs and solutions providers and help more manufacturers take advantage of the benefits of computer vision technology.

The power and simplicity of these kits will enable solutions that address common manufacturer pain points—and their flexibility will hedge against the constant change that is the hallmark of Industry 4.0. The fact that they offer rapid development and easy customization will also be attractive to SIs and solutions developers eager to take advantage of opportunities in the burgeoning market for machine vision solutions.

“All-in-one kits are going to be a massive driver of machine vision adoption in the industrial sector,” says Ye. “They offer enterprises a way to deploy machine vision-powered solutions quickly, and they give SIs and solutions providers a fast route to market. By uniting the best in machine vision hardware and software, these kits have become something greater than the sum of their parts.”

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.