HMI 2024: Bringing Manufacturing to a Digital, Edge AI World

Manufacturers looking to achieve Industry 4.0 must prioritize the digitization, AI integration, and sustainability of their operations and infrastructure. These three trends were evident at Hannover Messe (HMI) 2024, which took place April 22 to April 26 in Hannover, Germany. HMI is known for bringing together thought leaders, exhibitors, and attendees from all over the world to exchange ideas about the latest industrial technologies.

At the event, Intel® Partner Alliance members demoed their new industrial transformation solutions, powered by Intel, that advance manufacturing to the next level.

“The key themes we are seeing: We’ve got sustainability going all over the place, we have software-defined manufacturing, AI is everywhere—it’s predominantly everywhere—and you see all kinds of robotics demos. Lots of really great things are going on,” says Jonathan Luse, General Manager of Industrial Solutions Management at Intel.

Empowering Industrial Innovation with Edge AI

At HMI 2024, technology solution provider Dell introduced the latest Dell NativeEdge Blueprints for Intel Edge AI Services, leveraging the Intel® OpenVINO toolkit (Video 1). The latest release is designed to increase flexibility and options when deploying and managing AI and machine learning applications at the edge.

Video 1. Pierluca Chiodelli from Dell Technologies and Muneyb Minhazuddin from Intel demo the role of edge AI in manufacturing at HMI 2024. (Source: insight.tech)

“By integrating the OpenVINO toolkit into NativeEdge Blueprints, we are empowering businesses to unlock the full potential of AI at the edge, optimizing operations and paving the way for new, intelligent applications,” says Muneyb Minhazuddin, Vice President and General Manager for NEX Compute and Edge AI Software at Intel.

Dell NativeEdge Blueprints also run on manufacturers’ legacy brownfield embedded systems, providing greater opportunities for innovation.

The use of generative AI was also on display at the event, with Siemens, an industrial manufacturing solution provider, debuting its Industrial Copilot. The generative AI-powered assistant can generate complex automation code so engineers can reduce development time and decrease complexity.

Driving Toward Sustainable Manufacturing

Siemens showcased other products at the event with a focus on sustainability. The company highlighted Siemens EcoTech, an environmental product performance label designed to provide more transparency about the environmental impact of products. EcoTech assesses product lifecycle performance based on an Environmental Product Declaration, showing how a product compares in areas such as materials, design, use, and end of life (Video 2).

Video 2. Siemens details how it leverages Intel technology to provide high-performance industrial-grade hardware to customers at HMI 2024. (Source: insight.tech)

Elsewhere on the show floor, global edge-to-cloud company HPE showcased how it uses Intel technology to provide an AI-enabled platform that brings together IoT, manufacturing, and enterprise data to provide more transparency, predictability, and control for various partners.

To highlight the event’s theme of “Energizing a Sustainable Industry,” the company partnered with Iceotope, a provider of liquid cooling solutions, to display a liquid-cooled edge server. Powered by HPE ProLiant DL380 servers and Iceotope’s Precision Liquid Cooling, the solution is built to provide energy-efficient, zero-touch computing in harsh industrial environments and extreme temperatures (Video 3).

Video 3. HPE highlighted how, with Intel technology, it’s empowering manufacturers to be more sustainable, performant, and predictive at HMI 2024. (Source: insight.tech)

In addition, HPE demoed how manufacturers can stay on top of their industrial operations with better insights and predictability through an AI-powered digital twin with its OEM partner Bosch. Together the two companies demonstrated how manufacturers could create virtual simulations to view operations and test changes before going to production.

Other partners at the event included NexAIoT and NexCOBOT, subsidiaries of global IoT solutions provider NEXCOM International, which focused on next-generation functional safety robotics solutions. NexCOBOT also received the TÜV Rheinland Functional Safety certificate at the event for an x86 safety control platform it co-developed with Intel. Beckhoff, an automation technology provider, displayed how its TwinCAT Core Boost delivers greater real-time computing performance with Intel® Core processors. And digital automation company Schneider Electric had a software-defined manufacturing solution on display, designed to help manufacturers improve their efficiency and resiliency.

Discover how Intel and its ecosystem partners help manufacturers reach Industry 4.0 and their sustainability goals with on-demand content available on the Hannover Messe 2024 website!

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Thermal Imaging + Vision Software Detect Defects Inline

For too long, manufacturers have been throwing good money after bad. Having a defective product roll down the production line and pass routine visual inspection, only to be returned later for rework or refund, is expensive and wasteful. And destructive testing of one component out of 100 hardly inspires trust that the entire lot is defect-free.

Instead of a patchwork postmortem inspection process, manufacturing is turning to computer vision solutions that can catch problems the human eye might miss. Despite the increasing popularity of these methods, they are not without their own challenges, says Jonathan Weiss, Chief Revenue Officer at Eigen Innovations Inc., developer of industrial machine vision solutions.

For one thing, many machine vision solutions don’t play well with in-house software, so they are siloed. And often the algorithms address only one specific use case in a closed system, which limits their implementation for other applications.

To address these inspection problems, manufacturers need adaptable, vendor-agnostic computer vision solutions. Deployed inline, they can detect defects at production time, before problems snowball into larger headaches. Eigen’s centrally managed OneView Machine Vision Software makes this possible.

Such inline inspection forms the backbone of the Eigen OneView Quality Inspection for Metals solution. Managed by OneView software, it leans primarily on thermal cameras and machine learning models trained to recognize what heat signatures look like for correctly executed processes. That knowledge base helps OneView detect problems in real time as metals are welded, plastics are extruded, or materials pass through a range of other manufacturing steps.

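To make the approach concrete, here is a minimal sketch of baseline-style thermal anomaly detection, assuming a radiometric camera feed arriving as NumPy arrays. The threshold values and array shapes are illustrative assumptions, not details of Eigen’s OneView implementation.

```python
import numpy as np

def learn_baseline(good_frames: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Learn a per-pixel heat-signature baseline from known-good process runs.

    good_frames: shape (n_frames, height, width), e.g. temperatures in kelvin
    captured while welds were executed correctly.
    """
    return good_frames.mean(axis=0), good_frames.std(axis=0) + 1e-6

def is_anomalous(frame: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 z_thresh: float = 4.0, min_pixels: int = 50) -> bool:
    """Flag a frame if enough pixels deviate from the learned heat signature."""
    z_scores = np.abs(frame - mean) / std
    return int((z_scores > z_thresh).sum()) >= min_pixels

# Synthetic stand-in for a thermal camera feed (~300 K nominal weld pool)
rng = np.random.default_rng(0)
good = rng.normal(300.0, 2.0, size=(100, 120, 160))
mean, std = learn_baseline(good)

cold_weld = good[0].copy()
cold_weld[40:60, 50:90] -= 25.0                 # simulate a cold-spot defect
print(is_anomalous(good[1], mean, std))         # False: matches the baseline
print(is_anomalous(cold_weld, mean, std))       # True: heat signature deviates
```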

Industrial Applications for AI-based Inline Inspections

Eigen customers can adapt the solution to a number of related applications like injection molding, welding, or adhesive manufacturing processes. In each case, factory teams use OneView to build AI and ML models that learn different types of inspection paradigms.

Case in point: Henderson Stamping, a Tennessee-based manufacturer, struggled with accurate defect detection in components it produced for Whirlpool. The thin, shiny surface film that safeguarded parts from scratches and dents also prevented thorough manual inspection. As a result, a small but significant percentage of shipped components turned out to be defective. “This can become very problematic for manufacturers who have agreements with their customers, where fines can be imposed for shipping defective goods,” Weiss says.

Eigen helped the company develop a custom inspection solution that leverages the principle of deflectometry. The procedure involves shining light on the surface of the metal and looking for surface defects by evaluating the resulting light patterns. Henderson now inspects all of its components using the OneView managed solution and has significantly reduced OEM recall rates.

Similarly, a manufacturer of large metal grates wanted to ensure its welds were strong enough. Post-production testing involved putting the grates through a torque machine that applied pressure to find weak points. Using OneView software and multiple thermal cameras, the manufacturer can conduct inline testing of all weld points at every single cross section. The software stitches multiple camera images together to create one composite image and pinpoint problems.
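As an illustration of the compositing step, the sketch below uses OpenCV’s stitcher in SCANS mode, which suits flat scenes such as a grate passing under a row of fixed cameras. The file names are placeholders, and Eigen’s actual stitching pipeline is not published, so treat this as one plausible way to produce a single inspectable image.

```python
import cv2

# Overlapping views from adjacent cameras (placeholder file names)
images = [cv2.imread(p) for p in ("cam0.png", "cam1.png", "cam2.png")]

# SCANS mode assumes roughly planar motion, a good fit for parts moving
# under fixed cameras on a production line
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, composite = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("composite.png", composite)  # one image covering every weld point
else:
    print(f"Stitching failed: status {status}")  # e.g. too little overlap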

The size of the defect that can be detected depends on the sensitivity of the cameras used, but in most cases, those that are a millimeter or larger are a slam dunk, Weiss says.

Computer Vision Leads to Operational Efficiencies

OneView is about more than just detecting defects. “We take it a step further and also show process data. So we help manufacturers not just see that there’s a defect visually, but we also ultimately help them understand the root cause. Not only are we telling them they have a bad product, but we’re also showing them exactly what shifted or drifted in the process that an engineer now needs to fine-tune,” Weiss says.

OneView provides complete traceability, helping manufacturers mitigate warranty claims and unlocking a variety of applications for cost savings, improved efficiency, and customer satisfaction.

There’s also a sustainability advantage in detecting defects inline. Shipping faulty products only to have customers return them increases the associated carbon footprint. Catching problems early on in the manufacturing cycle leads to less carbon waste, too. “We’ve developed complete case studies just on the CO2 footprint reduction that we’ve helped companies with, and it actually extends well beyond the footprint of the factory,” says Weiss. “You’re talking about hundreds of thousands of tons of CO2 essentially that can be saved depending on the production footprint.”

Open Technology and Tools Enable Flexible Deployments

Inline defect detection comes down to work that must be done in a matter of seconds, which is why Intel technology is especially important to vision solutions designed and managed with OneView. Those time constraints can be a significant challenge, and the Eigen team found that the Intel® OpenVINO toolkit helped it achieve the speed needed to operate. The performance that OpenVINO unlocks and the speed at which it can run inference on images are among the reasons Eigen includes Intel hardware and software as a “core part” of its technology.
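For readers who want a feel for what that looks like in practice, here is a minimal sketch of a synchronous OpenVINO inference loop using the current Python API. The model file, input shape, and latency hint are assumptions for illustration, not Eigen’s production code.

```python
import time
import numpy as np
import openvino as ov

core = ov.Core()
# "defect_net.xml" is a placeholder OpenVINO IR model; "AUTO" lets the runtime
# pick the best available device (CPU, integrated GPU, NPU) on the target box
model = core.read_model("defect_net.xml")
compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in image

start = time.perf_counter()
scores = compiled(frame)[compiled.output(0)]               # synchronous infer
elapsed_ms = (time.perf_counter() - start) * 1e3
print(f"inference: {elapsed_ms:.1f} ms, defect score: {float(scores.max()):.3f}")
```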

In addition, Intel helps Eigen hit its differentiation metric in being able to deliver flexible deployment options. “We want to be as hardware-agnostic as possible when we provide our solutions, and so OpenVINO became a key part of our architecture because it allows us to support a very wide range of hardware options,” Weiss says.

Eigen has an internal engineering services group that sometimes functions as a systems integrator, but it also works with a network of preferred systems integrators (SIs) who implement the solution blueprints the company draws up for clients. Collaborating with SIs is a key part of the company’s strategy, as it helps unlock deployment scale—especially for larger customers.

A Must-Have for the Future of Industrial Automation

In the future, expect these machine learning models to get more accurate and deliver better results with fewer training images.

“Our sweet spot is helping folks use thermal applications to see what they otherwise can’t see,” Weiss says. A whole range of processes in a whole range of specialty industries qualify.

The future, Weiss forecasts, will see AI and computer vision-powered inline inspection move from nice-to-have to must-have. Using such inspection tools also helps manufacturers decrease workforce turnover rates, as employees now interpret machine readings instead of visually inspecting products.

The decrease in waste and the cost savings delivered make such solutions a no-brainer, and no longer will manufacturers throw good money after bad.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI for All: The Power of Democratization and Collaboration

Gone are the days when artificial intelligence and computer vision were exclusive to tech giants. These powerful tools hold immense potential for businesses of all sizes. What remains scarce are the skill sets required to bring some of these AI solutions to market.

In this webinar, we uncover the significance of democratizing artificial intelligence and computer vision, as well as explore how partnering with the right allies can propel AI initiatives forward.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guests: Bravent and TD SYNNEX

Our guests this episode are Mario Lopez, Chief Innovation Officer at Bravent, an IT services and consulting company, and Michael Nelson, Field Solutions Architect for TD SYNNEX, an IoT solution aggregator.

Mario has more than 10 years of experience as a computer engineer, having first joined Bravent in 2013 as a Junior Developer. In his current role, he focuses on artificial intelligence, machine learning, IoT, and mixed reality projects.

Michael has been with TD SYNNEX for more than 27 years, supporting both internal customers as well as resellers and their customers—assisting with hardware and software to solve technology challenges.

Podcast Topics

Mario and Michael answer our questions about:

  • 2:58 – The transformative power of AI and computer vision
  • 7:03 – Challenges and limitations to building AI solutions
  • 11:37 – Bringing partners and technologies together
  • 14:19 – What to consider and evaluate in a partner
  • 15:34 – Real-world examples of democratizing AI
  • 22:24 – Making AI development more accessible
  • 26:58 – A look at future AI innovations

Related Content

To learn more about democratizing AI, read Supercharge Your Computer Vision Journey with the Intel® Geti Platform  and The Power of IoT Partnership: Work Better, Together. For the latest innovations from Bravent, follow them on LinkedIn. For the latest innovations from TD SYNNEX, follow them on Twitter at @TDSYNNEX and on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest technology advancements and trends. Today we’re talking about the power of AI collaboration and democratization with Mario Lopez from Bravent and Michael Nelson from TD SYNNEX. So before we get into the conversation, as always, let’s get to know our guests. Mario, I’ll start with you. What can you tell us about yourself and Bravent?

Mario Lopez: Hello, Christina, I’m very good, thank you. Thank you for this opportunity and thank you for participating in this podcast. I’m Mario Lopez. I’m working as Chief Innovation Officer at Bravent. I am normally leading the most innovative projects where we are using the latest technologies such as AI, mixed reality, digital twins, and those things.

My colleagues told me that I have probably the best job in my company because I’m working always with all the toys that we have in our company, also with the most exciting customers. And I’m very proud of that, because sometimes I have very good opportunities to meet those customers, also working directly with them in their locations. And that’s something very, very good. So, thank you. It’s a pleasure to be here.

Christina Cardoza: Absolutely. And I would agree with your coworkers: technology is always changing and it’s always advancing. There’s always so many new cool things that you can do with it. So, very excited to get into a little bit about what you guys are doing and how you are working with partners like Intel and TD SYNNEX to make some of this happen.

But before we get into that, Michael—want to welcome you to the show also. What can you tell us about yourself and TD SYNNEX?

Michael Nelson: Well, I’m Michael Nelson, as you said. I’m a Field Solutions Architect here at TD SYNNEX. I’ve been covering Intel and supporting all their initiatives now for 26 years here. I support both our internal customers as well as the resellers and their customers. So, I cover the gamut from end user all the way up to our salespeople and anywhere in that area.

As far as TD SYNNEX: I mean, they’re a leading channel supplier and have been for over 40 years. They’re a behemoth worldwide distributor. We provide products and services for every level of the IT supply chain.

Christina Cardoza: Great. Look forward to getting into it. And from that technical aspect, getting a little bit deeper into what you guys are doing and how, with Mario, how you guys are making some of these solutions happen. So, thanks both for joining.

Mario, I want to start the conversation with you. You mentioned how you get to work with AI and all of these transformative technologies. So, wanted to start off the question, because obviously we are talking about the power of AI collaboration and making it accessible to more people and to more businesses. So why do you think AI specifically and computer vision—they’ve become so transformative and so impactful across all of these different kinds of industries?

Mario Lopez: That’s a very interesting question, to be honest, because AI, in my humble opinion, I think that is transforming all the things that we have around us, all the companies, the process, whatever we have around our lives. It’s true that AI—I think that in the last year in my opinion—is starting to be the main topic in every conversation. Because probably with appearance of the generative AI, we are democratizing even more AI in all the companies. And also—I always talk about my family—for example, my parents never used AI, and now with the appearance of the generative AI with GPT and those things, for example, I was explaining to them how they were able to use ChatGPT, asking some questions and getting answers in just few seconds. And the right answer—that’s probably the most important thing.

So I think that the AI is starting to be very important, because it’s able to give us the right answer and probably the right predictions or the right anticipation to the errors that we can face in our company. And that’s something very important.

And with computer vision it’s more or less the same, because computer vision is allowing us to interpret it and understand what is happening around us in our city, in our life, in our company, in our jobs—in whatever we have around us. With computer vision and with just a camera we are able to understand everything that is happening and obviously do the right task that we need after we see that it’s happening, something, through the cameras.

I remember when I was starting to study my degree, I was investigating some technologies about computer vision. And I remember that it was very complicated to use it, because just to do something very simple we needed to program a very long software with functions that were very complex. And now everything has changed; now with just a normal camera, even with my laptop or whatever I have, and with just a software and two clicks I can get a model running and understand, for example, what is happening in my room or, as I said, in every space.

So that’s something that in my opinion is changing everything, because now it’s very easy to use AI or computer vision in my company and also very cheap to implement it.

Christina Cardoza: Absolutely. I’m sure it’s a little frustrating seeing all the new tools out today, that if you had that just a couple of years ago would’ve made your job and your life a little easier. But it is exciting to see how much we can adopt this, now that it is becoming easier to implement. And you make some great points about the generative AI and how it’s becoming more accessible to everybody in their everyday lives. Consumers can start playing around with it and experimenting with it.

And I think even before generative AI we’ve been having AI and computer vision impact our lives without even noticing. I’m just thinking about my car, for example, like backing out of the driveway or merging. There’s always those computer vision alerts saying you’re getting too close to a car or your car is going to be hitting something on the way out. Things like that, and I rely on it a lot.

But I’m sure even with these new technologies coming out, making it easier for us to implement and play around with it, there’s still some challenges to building these AI and advanced computer vision solutions. Can you talk about some of the complexities of the limitations and challenges that businesses still face today?

Mario Lopez: Well, one of the main problems that we sometimes have with customers or with projects that we are working on is probably the data. Always when we are talking with a customer, we explain that the most important thing to start working with AI is the data. If you don’t have data, you cannot do AI. And that’s the first thing that you must know, because you need to prepare everything: you need to prepare your company, your process and everything. And when you have the data ready, you can start working on AI.

It’s true that now with some of the services that we have available on the market, and even more from Intel, we can start working on AI, but you need to adapt your company, your process, and those things to be able to use it. So this is one, in my opinion, one of the main challenges that we face.

The other one is how we need to change the process with the technology and the hardware that is needed to do the AI. Again, that’s something that is changing, and it’s much easier than I remember from two years ago, for example, because we have new services that are making our life easier. But some of the clients, when they need to install or they need to put some new hardware in their factory, for example, in their company—sometimes it’s a bit complicated.

And the last one that I think it improved a lot in the last years, in my opinion, I think that is the cloud computing. Some of our customers, when we start talking with them, they don’t want to use the cloud. They don’t want to send any data to the cloud; they want to process everything on the edge. And that’s something that is normal, because I understand that their data is part of them and they don’t trust how the data moving around to the cloud or moving through the data centers. And that’s something that has changed a lot in the last years, because now with the hardware that we have available to do edge computing it’s enough, and probably it’s even better than what we have on the cloud.

For example, we have in our company a solution to do computer vision. And when we started with this solution, when we started making some tests with our customers, we were doing everything on the cloud, and it was impossible to use the solution on the cloud because of the delay sending information to the cloud, getting the answers, and those things. And when we started working with edge computing, everything changed. And after that we started to have good results, real-time results. And that’s something that in my opinion is very important. And that’s how we are now implementing those solutions—using AI on the customers with edge computing.

Christina Cardoza: Yeah, I’ve seen edge computing be closely coupled with AI now for that low latency you mentioned—that performance at real-time analysis so that businesses can be more informed about the decisions that they make and then also be able to make these decisions in real time. So it’s really exciting to see all of these things happening.

You mentioned a couple of things that you need or that businesses need to implement AI and computer vision solutions. We’re talking about the edge, cloud; there’s hardware, there’s software that goes into it. And then not to mention the security aspects of all of this. So there’s a lot that I think it can feel overwhelming or that it looks like it’s too complex or complicated to make some of these initiatives and efforts towards AI and computer vision in your business.

But I think maybe what we forget sometimes is that we don’t have to go about it alone. The right partnership can help empower some of our innovations and help deploy some of these solutions. So, Michael, coming from a TD SYNNEX standpoint, I’m curious how you guys work together with other companies, how that partnership can really bring some of these technologies together and support businesses’ and clients’ AI initiatives.

Michael Nelson: Well, that’s kind of the specialty of distribution these days, is we have a large breadth of product specialists that our resellers can use to fill in the knowledge gaps that they may have. So when they go to deploy these solutions, they’re very rarely one vendor all the way through, it’s typically multi-vendor.

And that’s where we come in as the shining white knight, is we can, again, fill in those knowledge gaps and guide them to an actual solution—as opposed to: here’s a bunch of products that kind of work together and then you figure it out, we will get you to your end result on the first try. That’s really the strength of what we bring to the table.

As far as with Intel specifically—again, Intel has a very far reaching goal here of democratizing AI and putting it everywhere. And part of that approach using their oneAPI and then some of their other software platforms that are coming out very soon—you’re going to be able to write your model, your platform; you’re going to write it once and you’re going to run it everywhere. So it won’t matter which hardware you decide to use. If you develop with the Intel ecosystem of software, it’s going to work everywhere.

Christina Cardoza: I love that relationship that TD SYNNEX provides. You mentioned there’s a lot of products and solutions that go into this, and it can feel a little piecemeal or overwhelming to manage. And some of these things can become siloed when you are working with AI and data is really important. You can’t have these things be siloed or not talk to each other.

Michael Nelson: Yeah. In the past, if you’re going to deploy some sort of visual-AI solution, it would’ve been one vendor, usually a camera manufacturer. And it was going to do whatever one job it was designed to do, and it would be siloed. That’s all it did. Now you’re going to use off-the-shelf cameras, the servers you already own, the infrastructure, networking infrastructure you already own; and you’re going to bring all that together and do multiple solutions. So it’s powerful. It really democratizes AI.

Christina Cardoza: Absolutely. And I think you mentioned that in the past it’d be one vendor, and now we have multiple different vendors. I think sometimes businesses, they get worried about who they choose to work with or what solutions they choose to bring in, because now that it’s such this open ecosystem they don’t want to be stuck with a vendor that they can’t innovate or they can’t move forward.

So I’m curious, when companies and businesses are looking for these partners or looking to bring in these solutions, what should they be considering? Like, what should they be evaluating, especially when it comes from that collaboration aspect?

Michael Nelson: When evaluating a potential partner for collaboration, there’s a lot of critical factors that come into play. I’d say some of the top ones are shared vision and goals. You also need to have complementary strengths. There needs to be a level of trust and reliability, open communication. And probably in my field the most important is measurable outcomes and success metrics. Because if you don’t have that, you can’t guarantee the result. You need to remember that successful partnerships are built on mutual respect and shared goals, effective communications. Choose your partners wisely, and together you can achieve remarkable results.

Christina Cardoza: That’s all great, and I think it highlights the value of TD SYNNEX, the expertise that you bring, that you can help businesses select these products or help businesses deploy and bring these together and make the right decisions for them. So I think that’s really powerful and important.

Mario, I’m curious, because we were talking about some of the ways that you guys are helping businesses bring this to market, so if you had any customer examples or case studies that you can share of how partnerships like TD SYNNEX and Bravent can really support AI initiatives and efforts for businesses.

Mario Lopez: Yes, absolutely. As I was saying before, one of the solutions that we created and also we are offering to our customers is a solution where we are using computer vision to do quality control, quality inspection, and ensure that the quality of the products is fine. That’s a solution that we created around two years ago, more or less. And we started working with John Deere.

It’s important, because we started with them just working on a POC, on a pilot, just to see if with the technology that we had in that moment we were able to solve the quality controls or to ensure the quality of the products at the factory in the production line. That’s something that, before using this solution, it was taking a lot of time for the customer before starting with the manufacturing process, because they needed to prepare all the parts, all the different parts, that they were using in the production line. And also after putting together all the different parts, they were spending or using a lot of time just to ensure the quality of the product.

So we created a solution that—only using a camera in the production line and also Intel software and Intel hardware—we are running a model that is able to analyze what is happening in real time and is able to provide feedback in real time to the operator to ensure that every part and every step was done in the right way. So that’s something very important for them—and obviously very important for most of the customers that we are offering this kind of solution to—because it’s saving a lot of money and a lot of time. But at the end it’s improving the customer success, the customer experience. Because if you don’t need to use your warranty or if you don’t have any problem with the product at the end, you will be very happy with the brand, with the product that you are getting.

So that’s, in my opinion, I think a perfect example of how we are applying AI and how we are improving a real process just using computer vision and just using AI with a very simple solution. Because, as I said, this is just a camera with a PC, with a computer, a normal computer that is running an AI model and just getting the results and, at the end, improving the quality of the products.

Christina Cardoza: Great. And, Michael, since you are an engineer, I assume you’re a little bit closer to the ground, so to speak, of getting these systems implemented or making sure everything is running smoothly. So I’m curious if you have any examples, or if you can walk us through the process a little bit of what it takes to bring AI initiatives to market, and if there’s anything that goes wrong in the process or anything that you can highlight.

Michael Nelson: Yeah. I think we mentioned earlier the Geti software platform that Intel has just launched. I was shown a demonstration of that at one point, and then two months later they sent me a license so I could install it and give it a try myself. I worked with their sample data for about an hour and a half, trying to recreate the demo that they showed me where they built a model in about 10 minutes. Like I said, I was working an hour and a half and it still wasn’t happening.

And what it taught me was that, even though the tool was very easy to use, you still have to really understand the data, make sure you have the right data, right pictures. Basically, I picked the wrong task. So, I chose, with the images they gave me, to do identification, when I really should have been just doing anomaly detection. And so I had the wrong sample data.

Once I changed the type of model I was building, I was up and running in 15 minutes. I was literally able to take the exact same model I was already working on, make a couple adjustments, and all of a sudden it just worked. So the Geti platform is really powerful in that it allows you to take your work and then to optimize it. So even though I completely started out wrong because I didn’t know what I was doing because I’m not a data scientist—I’m literally just reading all the menu options and the little help tips trying to figure out how to build my first AI model—but it allowed me to do that once I got past my knowledge gap of what I’m actually trying to accomplish; it took me 15 minutes to have a model that I could deploy.

And the reason why that’s so important is over 40% of models never get that far. And 70% of AI projects die at this stage. And I was past it in 15 minutes—an hour and 45, technically. But again, most people jumping into AI aren’t going as cold as I was. They typically hire people that have done it at least one time before they start. So that’s my personal experience with AI, as far as getting started, that having the right tools makes a huge difference.

Christina Cardoza: Yeah. And I think your personal experience, that’s a great example of we’re democratizing AI; you’re not going to get it on the first try. And that’s why it’s important that we have these agile environments that give you an opportunity to experiment, to fail, and then learn from those mistakes and really implement this.

So I think that’s a powerful point, is that it’s not going to happen on the first time and you need to work out some kinks and it’s not just the tools—and that’s the power of partnerships too. You can have all the best tools in the world, but if you don’t know how to work them or you don’t have a partner like TD SYNNEX or Bravent helping you along the way, the tools aren’t really going to do that much for you. So I think that’s a great example, Michael, so thanks for sharing that.

And obviously we talked about Intel Geti—you have an Intel polo on right now, and this is an Intel-sponsored podcast, as well as insight.tech publication. We’re owned and sponsored by Intel. But I think, in spite of that, they do provide those tools and that partnership and that expertise to really work with the tools.

And so I’m wondering, from the Bravent side, what has been the value for your team leveraging technologies like Intel Geti and being able to have some of your use cases like the John Deere example become a reality?

Mario Lopez: Well, to be honest, for us it was key to do a partnership with Intel, and also in this case with TD SYNNEX. But regarding the solution that I was explaining before, one of the problems that I was mentioning that we were facing is the real-time process. And that’s something that it changed everything when we started to use Intel technology. Because, as I was saying, we were using before the cloud computing, but changing to the edge what we started to use is OpenVINO. And OpenVINO is what allowed us to do the inference on CPU as Michael was explaining. And that’s something that, as I said, changed completely our solution, because with just a normal computer or just a normal laptop with an Intel CPU, we were able to do the inference and run the models and get the resource in real time in a very easy way.

And the other thing is with Intel Geti—you were explaining and Michael was explaining his own story with using Geti—and for us was completely the same, because one of the problems that we were facing with our customers is that if you want to train your own AI models you need, as I said at the beginning, you need a lot of data, but also you need some experts doing the labeling of the images and also some data scientists to prepare the AI models.

And that’s something that could be very complex. It will require time to do it; it will require some employees in your company with the right knowledge. And that’s something that sometimes is not very easy to get. Before using Geti, that’s something that we were offering to our customers: to do the labeling, to do the training of the models, and doing everything. But with Intel Geti it changed everything, because now with just a very simple knowledge of using Intel Geti and just doing some clicks—to me, it is something very similar to PowerPoint, because it’s just: move your mouse, select the images, put the label, and that’s everything.

And when you know how to use Intel Geti, you can prepare your own AI models and in just a few minutes or just a few hours you can get all the AI models ready to use. And that’s something that it changed completely also in our solution, because now we can provide the tools to our customers. We can offer training, for example, in how to use Intel Geti, and they will be able on their own to just prepare the AI models and integrate in the solution.

Because the other thing that we have done is that we prepare our solution. We created a pipeline that allows us to prepare the AI models in a very easy way and deploy it. And the customer, in just few minutes, as I said, they will have available the model to use it in their company. So, to conclude, for us the partnership between Bravent and Intel was perfect, because we had the right solution, but Intel provided us the right software and hardware to do a perfect solution.

Christina Cardoza: Yeah. It’s amazing to me to see how, with Intel Geti, somebody like Michael—who’s an engineer and is not a data scientist and hasn’t worked with AI—can start building AI models or start training different things. And even though it took almost two hours, some people in the past have taken years of training to be able to do this. And now with a couple of clicks and a little bit of experimenting it can be really easy to do. So I can’t wait to see how else democratizing AI—and with Intel Geti—how that’s going to change the way that the businesses deploy and build AI solutions.

So, Mario, is there anything else that you can talk about, about the importance of democratizing AI and how it’s really going to—because AI is not going anywhere—so how it’s really going to help bring some of these solutions and advancements and innovations into the future?

Mario Lopez: Yeah. No, as I said, everything has changed—I think that in the last one or two years—because now AI is much more accessible, is very easy to use. As I was explaining, with Geti, now our customers that are not data scientists, they are not probably an engineer, they just have some knowledge about how to use the computer. And here—probably the most important thing is the knowledge about their own company and their own business scenarios—they are able to train the AI models. And it is the same as I was explaining at the beginning with generative AI—now just using the chat window or just using our voice we can access to the information.

Here I think that the most important thing is that the machine or the process are able to understand what we want. That’s, for me, the key of AI, and at the end we are able to get the right answer to the questions that we are doing. So I think that everything is changing, and I’m sure that in the next coming months and next coming years it will change even more, because now we are seeing that the software is evolving a lot, the technology is evolving. Now we have more intelligent machines around us. So I think that the AI will be even more accessible. As I was explaining, my parents now are able to access, to use, the AI, and I didn’t imagine anything like that just two years ago. So what we are expecting for the next coming months or years I think that it will blow our mind.

Christina Cardoza: Yeah, absolutely. My parents can’t even send an email, but now they can write their emails using AI. So it is amazing to see. I can’t wait to see what else Bravent and TD SYNNEX do in this space.

Before we go, I just want to throw it back to each of you, if you have any final thoughts or key takeaways of where you think this space is going, the importance of democratizing AI, and just collaborating with partners like yourself. So, Michael, I’ll start with you.

Michael Nelson: AI is definitely not going away. We have these little buzzwords that come through our industry: big data, and then IoT, and right now it’s definitely AI. I feel like AI is going to be here for more than just the usual sales cycle. It’s really in its infancy, and it’s very exciting to see how it’s going. Intel’s approach—again, with that “write once, run everywhere,” their AI-edge suite of software, it’s amazing.

Between the Geti to create the model, you deploy it to OpenVINO, and now they have another product we’re bringing to market called SceneScape that taps into cameras, uses that model, and then it can interact with—you know it can send messages, it can interact through MQTT protocols to other devices. It’s pretty amazing. It’s making it very actionable. So it’s not just the model running and giving us clever answers like you see with all these large language models; it’s real-world solutions to solve real problems.

Christina Cardoza: Yeah, absolutely. Totally agree. It’s another buzzword, but it’s not just another buzzword.

Michael Nelson: No, it’s just it’s so applicable in so many places. I mean, there’s not just one place we’re going to deploy AI; it’s going to be everywhere. And that’s not hyperbole; it’s literally going to be everywhere.

Christina Cardoza: It’s going to change the way we work and move for real. So, Mario, what about you? Is there anything else you want to leave our listeners with today before we go?

Mario Lopez: Yeah. No, Christina; I think that AI, in my opinion, is starting to be a commodity, because now everybody can access AI in every company. And as we were explaining, it’s more accessible. So I think that it will be changing in the next coming years even more. And I think that right now we have AI everywhere—probably we don’t know, but AI is everywhere, because in our cars, in our computers, in our TVs, in all the devices that probably we use every day, we have AI.

But for me, I think that it will be a commodity. All the companies are including AI in their process. And it’s going to be even easier, because now probably we are seeing that medium, medium-large companies are investing a lot of money in AI. But what they think from my point of view is that, in the next coming years, very small companies or even a self-employee is able to access AI, for example, just using Copilot or just using ChatGPT.

And that’s something that will change everything and will improve all the process that we have in our companies. To be honest, I think that that’s something that will be another revolution. We are seeing that everything is changing. We need to prepare our work, we need to prepare our life, because we are just seeing the beginning of a new era, a new revolution of the technology, and also in the companies.

Christina Cardoza: Absolutely. And the fact that we have AI everywhere and we may not even notice where it is or that we’re using it, I think is a testament to working with partners like TD SYNNEX and Bravent that businesses are able to seamlessly integrate this and transform our lives in meaningful and impactful ways, but also in ways that aren’t intrusive. So I think that that’s very important.

And just want to thank you guys again for the insightful conversation. I invite all of our listeners to visit the TD SYNNEX and Bravent websites, see how you can partner with them and how you can make some of your AI dreams a reality. So, thank you guys again, and thank you to our listeners for tuning in. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

EW 2024 Puts a Spotlight on Hybrid AI in Manufacturing

Software-defined manufacturing (SDM) is a product of Industry 4.0 concepts that have been talked about for more than a decade. The idea is to make factories more like data centers, allowing operational technology (OT) workloads to be orchestrated much like modern cloud workloads that can be routed to, and executed on, any node with the resources to perform the task at hand.

SDM was especially top of mind during the embedded world 2024 Exhibition & Conference, which kicked off with a Day 0 networking event featuring a conversation about AI and manufacturing with Christine Boles, Vice President of the Network & Edge Group and General Manager of Federal & Industrial Solutions at Intel. She reflected on the past couple of years, from when software-defined manufacturing was in its infancy to its current evolution, where it is now more of a reality thanks to ecosystem partners and industry groups like the Open Process Automation Forum (OPAF) and the PCI Industrial Computer Manufacturers Group (PICMG). These groups are working to lay the foundation for SDM, enabled by workload consolidation, virtual programmable logic controllers (vPLCs), time-sensitive networking (TSN), and “AI Everywhere.”

“AI is driving momentum and it’s helping enable software-defined manufacturing,” Boles said. “Because of how fast the technology moves, it’s important that we, as an industry, work to make adoption, deployment, and management as easy as possible. That way, more manufacturers will be able to take the first step.”

Hybrid AI Lays the Groundwork for Software-Defined Manufacturing

From computer vision to real-time networking, there was a particular focus on enabling the intelligent edge en route to an SDM paradigm, specifically around the use of edge AI in manufacturing, with exhibitors demonstrating how industrial operators can leverage it to:

  • Optimize productivity through real-time adjustments to production lines.
  • Control costs and increase efficiencies with predictive maintenance, automated defect inspection, and root cause analysis.
  • Secure increasingly connected automation infrastructure through extra layers of protection, such as intrusion detection and prevention systems.

Edge AI is particularly well suited to these tasks because the proximity to data sources permits faster decision-making and reduces the networking costs of sending information to the cloud. But for the most part, industrial edge AI will demand revamped compute and networking infrastructure, Boles explained.

She shared that she and her team at Intel have been working with manufacturers to better understand their requirements. What they found is legacy infrastructure that is “still very fixed function, with very little flexibility,” making it “a sticking point for new workers and a barrier to implementing technologies like AI.”

As a result, “a hybrid AI approach is necessary where manufacturing is concerned—one that combines cloud software with edge AI,” she said.

Hybrid AI is a distributed method of operating on AI workloads that combines the real-time advantages of edge AI with the deep performance and comprehensive insights of the cloud. It addresses current obstacles to industrial edge AI by allowing operators to make use of cloud-based AI services today while their operational technology infrastructure is incrementally upgraded with AI-enabled platforms. Once fully deployed, these architectures enable the type of layered intelligence required for true SDM.
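One way to picture the hybrid pattern is as a routing policy: answer in real time at the edge, and escalate only the hard cases to a larger cloud model. The sketch below assumes a hypothetical cloud endpoint and a placeholder model file; it illustrates the concept rather than any reference architecture from Intel.

```python
import numpy as np
import requests
import openvino as ov

CLOUD_URL = "https://example.com/v1/classify"   # hypothetical cloud AI service

core = ov.Core()
edge_model = core.compile_model(core.read_model("line_monitor.xml"), "CPU")

def classify(frame: np.ndarray, confidence_floor: float = 0.85) -> dict:
    """Hybrid AI policy: decide locally in real time, escalate hard cases."""
    scores = edge_model(frame[None, ...])[edge_model.output(0)].squeeze()
    label, conf = int(scores.argmax()), float(scores.max())
    if conf >= confidence_floor:
        return {"label": label, "confidence": conf, "source": "edge"}
    # Low-confidence frame: ask a deeper cloud model for a second opinion
    resp = requests.post(CLOUD_URL, json={"frame": frame.tolist()}, timeout=5)
    return {**resp.json(), "source": "cloud"}
```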

And “upgraded” is a relative term. “There’s a general assumption that if you use AI, you need a GPU, but you don’t,” Boles said. “What it really comes down to is knowing what kind of function you’re trying to do, and recognizing that when you start getting into heavier vision-based data streams, you’ll need higher-performance compute and potentially a different type of acceleration.”

Partner Collaboration Drives Innovation in a Software-Defined Industry

Boles concluded that ultimately partnerships enable technological advancements and innovation.

“We continue to work with our ecosystem of partners to bring solutions that really help change and transform the industry,” she explained.

For instance, Boles highlighted NEXCOM and its NexAIoT team, providers of industrial and manufacturing solutions, which deployed an autonomous mobile robot built on Intel® Core processors, Intel® RealSense cameras, and OpenVINO. Solomon Technology Corporation, which specializes in 3D vision systems, built an automated defect classification solution leveraging OpenVINO and the Intel® Edge AI Box. And Chieftek Precision, a manufacturer of high-quality linear motion robotics and robotic controllers, developed an AI-powered miniature robotic arm tailored to high-precision manufacturing applications, thanks to Intel processors, Intel® Edge Controls, OpenVINO, and Intel vPro® technologies.

Moving forward, Intel will continue to provide more opportunities to collaborate with ecosystem partners, accelerate progress within the manufacturing community, and support end users in this space. With programs like the recently announced Intel® Industry Solution Builders, Intel empowers industry-specific communities for both Intel® Partner Alliance partners and end users on their adoption journey, providing access to resources, training, and engagement opportunities across industries.

“Intel has always worked with our ecosystem partners to bring solutions and solve industry challenges,” Boles added. “For Intel, our partner ecosystem is both our main go-to-market and our primary path to innovation. That’s as true in manufacturing as in any other sector.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

AI-Assisted Cancer Detection Speeds Diagnosis

The most transformative innovations often emerge during times of crisis. That was the case for Javier García López and his cofounders at Sycai Technologies, a Barcelona-based digital health company.

The company launched in February 2020—one month before the pandemic—intending to create a marketplace where users would download AI-trained models created for their own relevant use case. But the pandemic created a more urgent need for health systems and providers to leverage technology to enable their work. As a result, the global health crisis unexpectedly gave Sycai a pressing medical need for its solution and opened a pathway for the company to explore partnerships with hospitals to test its application.

During this time, Sycai also discovered an even more urgent use for its technology related to pancreatic diseases. Today, its original application has evolved into Sycai Medical, an AI assistant that uses machine learning and neural networks to empower radiologists to more accurately detect and diagnose pancreatic cancer.

Confronting a Silent Pandemic

Pancreatic cancer is usually diagnosed at a late stage and has one of the lowest 5-year survival rates among cancers. But Sycai Medical is harnessing AI to address this challenge, with a solution that can detect precancerous lesions in the upper abdomen much earlier—and improve cancer care if the disease is diagnosed.

“It was something that everyone told us is really dangerous. It’s like a silent pandemic. Up to one fourth of the population has these kinds of lesions, but they’re never detected on time because they have no prior symptoms. So, we thought we could have a chance if we were to focus there,” says García López, Sycai’s chief technology officer and cofounder.

García López founded Sycai Technologies along with Sara Toledano, the company’s CEO. They then met their third cofounder, Júlia Rodríguez Comas, who now serves as Sycai’s chief scientific officer. Rodríguez Comas, a scientist and researcher, has a Ph.D. in biomedicine and specializes in the pancreas. Her clinical knowledge propelled the team to focus on the pancreas and address this longstanding challenge in the medical field.

Within radiology, AI typically is applied to brain, lung, and breast conditions, García López says. The pancreas largely has been uncharted territory, but Sycai Medical may change this. With the help of application programming interfaces (APIs), the solution easily integrates into a hospital’s existing medical imaging system.

Sycai Medical reprocesses and analyzes a patient’s scan and then normalizes the image, so all the organs are equally visible on the scan. Next, neural networks (AI models) trained on anonymized data from thousands of patients with lesions in the upper abdomen pinpoint the exact location of the pancreas within the abdomen. Once AI identifies the pancreas’ location, it determines whether a lesion is present, and if so, whether its composition and characteristics indicate it is cancerous, precancerous, or benign.
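In rough outline, that flow might look like the sketch below: normalize the scan, localize the pancreas, then classify any lesion found. The model files, soft-tissue window, and label set are illustrative assumptions; Sycai’s actual models and thresholds are proprietary.

```python
import numpy as np
import openvino as ov

core = ov.Core()
# Placeholder IR files standing in for the two trained stages described above
segment = core.compile_model(core.read_model("pancreas_seg.xml"), "CPU")
classify = core.compile_model(core.read_model("lesion_cls.xml"), "CPU")

LABELS = ("benign", "precancerous", "cancerous")

def normalize(ct_volume: np.ndarray) -> np.ndarray:
    """Window the CT scan so soft-tissue organs are equally visible."""
    lo, hi = -150.0, 250.0                        # illustrative HU window
    return np.clip((ct_volume - lo) / (hi - lo), 0.0, 1.0).astype(np.float32)

def analyze(ct_volume: np.ndarray) -> str:
    vol = normalize(ct_volume)[None, None, ...]   # add batch/channel dims
    mask = segment(vol)[segment.output(0)]        # locate the pancreas
    if mask.max() < 0.5:
        return "pancreas not found"
    roi = vol * (mask > 0.5)                      # restrict to the organ
    scores = classify(roi)[classify.output(0)].squeeze()
    return LABELS[int(scores.argmax())]           # malignancy potential
```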

“It extracts multiple parameters that if you match them with the clinical guidelines, it finally gives you what is the malignancy potential of these lesions,” García López says.


AI works quietly in the background to surface this valuable information—without interrupting radiologists’ typical workflow. Sycai Medical complements their work without making a final clinical judgment for them. García López says the tool acts as a diagnostic assistant, warning doctors that it has found something on the scan that could be dangerous to the patient. Doctors can choose to open the alert and investigate further at their own discretion. The GDPR-compliant solution also doesn’t capture any metadata that could identify patients and is designed to ensure there are no memory or data leaks once it integrates with a hospital’s IT system, even in the event of a server attack.

Bringing AI-Assisted Cancer Detection to More Hospitals

Sycai Medical accelerates AI-assisted cancer detection using a range of technologies, including the Intel® OpenVINO toolkit, open-source software for optimizing and deploying AI models.

With OpenVINO, the software’s AI models have been able to diagnose a lesion’s potential malignancy 70% faster with less than a 3% impact on diagnostic accuracy. “We were still over 90% accurate with 70% less inference time,” García López says.

Sycai Medical is a powerful tool for accurate early detection of pancreatic cancer, but it also could prevent unnecessary biopsies if a lesion is benign, and optimize care management if a patient is diagnosed with a disease.

The company conducted clinical pilots of Sycai Medical with hospitals in Spain and Germany. It is now going through the regulatory process, where regulators will audit its previous clinical trials. The company plans to launch in Europe this year, with a focus on detecting and diagnosing pancreatic cystic lesions. The solution also has future implications for the early detection of other pathologies, such as liver and kidney disease, with hospitals testing this use case as well, García López says.

Healthcare providers in the U.S. may soon have access to the tool. Sycai Medical is currently undergoing a 6-month pilot test at the University of Alabama in advance of potential FDA approval.

The Sycai solution showcases the transformative power of AI and its role in supporting better healthcare outcomes. By harnessing AI to improve cancer detection, Sycai Medical is delivering insights that could innovate cancer treatment and care, empowering healthcare providers to diagnose diseases faster and more accurately—and potentially save many more lives.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

embedded world 2024 Highlights Growing Demand for Edge AI

The embedded world Exhibition & Conference is known for showcasing the most impressive advances in embedded systems and technologies. And embedded world 2024, which took place April 9 to 11 in Nuremberg, Germany, was no exception. Intel and Intel® Partner Alliance members showcased the latest innovations in the embedded space, from edge AI to software-defined manufacturing, and everything in between.

Next-Generation Edge Processors Launch at EW 24

The event specifically highlighted a growing interest in edge AI, which demands a mix of performance, efficiency, and workload-optimized compute. Intel addressed all three dimensions at the conference with a series of processor launches that included the new Intel® Core and Intel® Core Ultra processors, Intel® Arc GPU, and Intel® Atom x7000RE processors.

The new Intel Core and Intel Core Ultra processors were built from the ground up for edge AI workloads. They feature a hybrid architecture consisting of AI Boost NPUs, integrated Intel Arc GPUs, and performance and efficiency CPU cores that help match the compute type to the workload at hand. Avnet Embedded, an embedded compute and software solutions provider, put these new processors to the test in a vision processing and motion control demo at embedded world.

The benchmarks speak for themselves. The new Intel Core processors deliver 2.57x faster graphics performance than the previous generation. Meanwhile, Intel Core Ultra processors for Edge deliver more than 5x image classification performance compared to previous-generation Core desktop processors. TQ-Systems, a technological service provider and electronics specialist, released an edge AI computing platform based on COM-HPC Mini and the new Intel Core Ultra processor at embedded world, demoing high performance for demanding AI workloads.

For more demanding AI and graphics use cases, the Intel Arc GPUs for the Edge deliver 2.4x ResNet-50 inference performance and 2.28x H.264 video decode performance compared to the leading competitive architecture. The new edge GPUs pair with a comprehensive development stack that includes the Intel® Distribution of OpenVINO toolkit to make programming and deploying software on the advanced Intel® Xe-core architecture a breeze.

But one of the biggest announcements from the event came in the smallest package. Intel Atom® x7000RE Series processors provide major upgrades to the embedded community, including 2x the cores, 2x the graphics base frequency, and a refreshed design for deep learning inferencing at the industrial edge.

To demonstrate the latest capabilities and benefits, congatec, a global leader in embedded systems, released the conga-SA8, an industrial-grade SMARC module in a credit card-sized form factor. It features native support for Wi-Fi 6E, enabling the module to handle TSN-over-Wi-Fi (Video 1).

Video 1. congatec discusses meeting customer demands for edge computing and real-time capabilities at EW24. (Source: insight.tech)

Supermicro, an IT solution provider specializing in AI, cloud, storage, 5G, and edge technologies, also revealed at the event that its edge compute portfolio now incorporates support for the Intel Atom x7000RE processors. This integration delivers the performance and power efficiency required for intelligent edge applications. For example, the company’s SYS-E100-14AM and SYS-E102-14AM IoT edge servers bring high-efficiency AI and edge computing in an ultra-compact, fanless form factor.

Elsewhere on the show floor, computer manufacturing company AAEON showcased the ‘E’ variant of the Intel Atom x7000 series released earlier this year. In a demonstration of performance and power efficiency, the PICO-ADN4 PICO-ITX board ran real-time traffic analysis on four CPU cores at just 12W TDP.

This is just the beginning for the Intel Atom x7000RE, which is expected to be deployed in similar edge AI applications that require time-critical connectivity in the future.

Unlocking Hybrid AI’s Potential

With much of the focus on achieving the benefits of edge AI, hybrid AI architectures stood out as another highlight of the show. Hybrid AI enables real-time AI processing on edge devices while also leveraging the cloud for deeper data analysis.

Key capabilities of this architecture include:

  • Leveraging AI for real-time use cases like product inspection and defect detection.
  • Identifying relevant data (like images of defective parts) for transmission to the cloud, thereby reducing networking costs compared to sending entire data sets from the edge (see the sketch after this list).
  • Adding layers of analytical depth with cloud-based inference on more powerful data center hardware.
  • Ability to train cloud-based models on more granular and controlled data sets, filtered by edge inferencing operations.
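The edge-side filtering pattern in the second item can be illustrated with a short Python sketch. The detector, threshold, and upload queue below are hypothetical placeholders for whatever inference engine and cloud client a real deployment would use.

```python
# Sketch of the edge-side filtering pattern: run inference locally, keep only
# frames flagged as defective, and queue just those for cloud upload. The
# detector and uploader are hypothetical placeholders.
import queue

upload_queue: "queue.Queue[dict]" = queue.Queue()

def detect_defect(frame: bytes) -> float:
    """Stand-in for an edge inference call returning a defect score."""
    return len(frame) % 100 / 100.0  # placeholder logic only

def process_frame(frame_id: int, frame: bytes, threshold: float = 0.8) -> None:
    score = detect_defect(frame)
    if score >= threshold:
        # Only anomalous frames cross the network, cutting bandwidth costs.
        upload_queue.put({"frame_id": frame_id, "score": score, "frame": frame})

for i, frame in enumerate([b"x" * n for n in (80, 180, 310)]):
    process_frame(i, frame)

print(f"{upload_queue.qsize()} of 3 frames queued for cloud analysis")
```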

At the show, Intel and its partners demonstrated their leadership in developing full-spectrum hybrid AI solutions. For example, Supermicro showcased how its platforms can support both edge and cloud functionalities, enabling seamless data flow and intelligent processing across both environments.

Additionally, partners such as graphics card market leader Sparkle Computer Co., Ltd. demoed at the show, and component-level solutions developer Matrox displayed how it uses the latest Intel Arc GPUs to enhance AI performance (Video 2). These latest GPUs, which were also announced at the event, are designed to handle large data sets and complex algorithms, making them ideal for scenarios where edge devices collect vast amounts of data that need quick processing before being sent to the cloud.

Video 2. Matrox displays the power of intelligent video processing with Intel Arc GPUs at embedded world 2024. (Source: insight.tech)

The Software-Defined Manufacturing Revolution

The show also emphasized the ongoing shift toward software-defined manufacturing. This paradigm focuses on using software to control and automate manufacturing processes, making them more flexible, efficient, and adaptable to changing market demands.

Significant contributions to this revolution have come from industry leaders such as ExxonMobil and Schneider Electric. These companies have played pivotal roles in the development and promotion of the Open Process Automation Standard (OPAS). OPAS is designed to create open and interoperable systems in manufacturing, breaking down traditional barriers and fostering innovation through greater flexibility and reduced reliance on proprietary solutions.

Furthermore, standards like OPAS and the newly introduced InterEdge are critical in shaping the future of manufacturing. InterEdge, for example, supports the integration of edge computing into industrial settings, enabling more localized data processing and immediate response capabilities.

Together, these standards lay the groundwork for a new era of manufacturing governed by software capabilities, leading to improved operational efficiency, reduced costs, and enhanced production quality. These developments mark a significant step forward in the evolution of industrial manufacturing, pointing toward a future where flexibility and efficiency are paramount.

An Exciting Year Ahead

Developments at embedded world 2024 suggest that this will be a year of growth and transformation in the industry. As AI becomes more deeply ingrained and industrial processes are increasingly directed by software, opportunities for innovation and progress are set to increase.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Video Intelligence Illuminates Path to Pedestrian Safety

Keeping everyone safe, whether on busy city streets or highways, is no easy task. Sudden movements, such as an animal darting out, a small child running ahead, or a pedestrian stepping into the road, can happen at any time. Bicycles and scooters present even more risks. Signs that warn of road hazards or crosswalks are helpful but limited. Today’s city and highway planners increasingly turn to intelligent transportation systems (ITS) and other smart city technologies to address these challenges.

Monitoring systems with IP cameras and AI analytics can capture real-time data showing movement on roadways to trigger actions for safety purposes. One solution—SecurOS® Soffit from Intelligent Security Systems (ISS), a developer of video intelligence and data awareness solutions—essentially escorts pedestrians by illuminating each crossing section and alerting motorists in plenty of time to prevent an accident.

“Cities continue to rapidly evolve and shape around a multimodal environment,” says Joe Harvey, Intelligent Transportation Systems Market Sector Leader for ISS. “There are so many modes of transportation on our roadways—pedestrians, cyclists, buses, trucks, cars, and more.”

The solution uses new or existing security cameras to capture images, video analytics to process the images, and dynamic LED modules to act on the images by casting light on crosswalks. Harvey calls it a high-tech improvement on road signage. Unlike signs, he notes, it doesn’t require drivers to take their eyes off the road.

The solution also has a long-term purpose: With its monitoring and AI analytics capabilities, SecurOS® Soffit allows cities to capture data to make long-term safety improvements such as optimizing traffic patterns and roadway designs. The company has a long history of analyzing video data and streams—developing experience and expertise in deploying video analytics since 1996. This expertise is what keeps ISS ahead of the competition by deploying the latest AI and computer vision technology.

Ultimately, the goal of deploying a system such as the Soffit is to reduce roadway incidents and eliminate traffic fatalities and severe injuries, while promoting safe mobility.

Monitoring systems with IP cameras and #AI analytics can capture real-time #data showing movement on roadways to trigger actions for safety purposes. @Isscctv via @insightdottech

The Technology Behind Intelligent Transportation Systems

Soffit leverages IP cameras strategically positioned at crosswalks. The cameras transmit video data to an analytics controller, which activates an LED lighting module when pedestrians enter a crosswalk. “The driver is alerted to where the pedestrians are, and as they travel throughout that entire crossing,” Harvey says. When pedestrians clear the crosswalk, the lights return to a regular static mode.

The solution captures different types of vehicles moving at different speeds. This is important because a bicycle at a stoplight takes more time to clear an intersection than a car. Soffit can make the necessary adjustment, allowing the cyclist more time to cross, much the same way that pedestrian buttons at conventional stoplights extend crossing times.
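The speed-sensitive timing Harvey describes might look something like the sketch below, which keeps the crosswalk illuminated long enough for each detected road user to clear it. The crossing width, typical speeds, and safety margin are hypothetical values, not ISS parameters.

```python
# Sketch of crossing-time logic like that described above: estimate how long
# a detected road user needs to clear the crossing and keep the LED guidance
# active for that duration. Speeds and crossing width are hypothetical.
from dataclasses import dataclass

CROSSING_WIDTH_M = 15.0
TYPICAL_SPEED_MPS = {"pedestrian": 1.2, "cyclist": 4.0, "scooter": 3.0}

@dataclass
class Detection:
    kind: str          # classifier label from the video analytics
    speed_mps: float   # measured speed; may be 0 while waiting at the curb

def illumination_seconds(d: Detection, margin: float = 1.5) -> float:
    """Time to keep the crosswalk lit, with a safety margin."""
    speed = d.speed_mps or TYPICAL_SPEED_MPS.get(d.kind, 1.2)
    return margin * CROSSING_WIDTH_M / speed

for det in (Detection("pedestrian", 1.1), Detection("cyclist", 0.0)):
    print(det.kind, round(illumination_seconds(det), 1), "s")
```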

In time, as connected vehicle systems gain functionality, automobiles will start receiving data from systems such as Soffit so they can autonomously adjust to intersection and road conditions, Harvey says, adding, “This will continually push the industry into instantaneous real-time decision-making.”

ISS leverages Intel technology for the monitoring and analytics solution. As Intel launches next-generation processor technology, companies like ISS gain advantage with enhanced functionality and scalability. For instance, the company’s access to the latest Intel® Core Ultra processors provided the ability to test and build the latest technology into SecurOS solutions—providing customers the benefits of next-generation technology right away with more power and performance. “With the Intel Core Ultra processor and its built-in AI acceleration, we saw enhancements of 75% and 100% in video analytics workloads over the previous generation,” Harvey says.

ITS Offers Safety for Multiple Use Cases

ISS has deployed Soffit in various settings. One of the largest is in Mexico City, where the solution processes images from 65,000 smart city cameras. City authorities use the images for a range of use cases, making on-the-spot decisions such as rerouting traffic when needed and deploying assets to assist disabled vehicles.

At a university in Florida, Soffit helps keep students safe as they move around the campus. Every hour or two, thousands of people may be leaving classrooms, heading to cars, walking back to dorms, and moving to other destinations. The system collects and analyzes data from these areas where cars and pedestrians share busy roadways to provide insights on how to adjust traffic patterns and crossings to prevent accidents.

Soffit is also in use at an auto manufacturer’s campus that changed pedestrian crossings for workers exiting a building. That change created a hazard because drivers weren’t used to the new pattern. Now, with dynamic illumination in place, “you can see across a large area where these people are coming from and where they’re headed to,” Harvey says.

Looking ahead, Harvey envisions many more uses for the technology. As camera deployments continue to increase, no human can possibly keep an eye on all captured video, so AI will play an essential role, providing insights that trigger immediate action when needed and long-term wisdom to ultimately enhance and protect life in a connected world.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

MWC 2024: The Rise of AI and the Future of Networks

Mobile World Congress 2024 unveiled game-changing innovations that will redefine how we connect. Think private mobile networks and AI that learns to optimize performance!

In this podcast, we unpack these exciting trends and explore the future of intelligent networks. Join us to unlock the potential of tomorrow’s technology and embrace a world of possibilities.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guests: CCS Insight and Intel

Our guests this episode are Ian Fogg, Director of Networks at CCS Insight, and Wei Yeang Toh, General Manager of the Ecosystem Development Organization at Intel.

At CCS Insight, Ian focuses on network innovations, examining virtualization, Open RAN developments, and private mobile networks.

At Intel, Wei oversees ecosystem developments for 5G and edge computing, collaborating with the ecosystem to adopt new technologies and overcome challenges.

Podcast Topics

Ian and Wei answer our questions about:

  • 3:03 – Key network trends from Mobile World Congress
  • 8:25 – Advances and benefits AI brings to the network space
  • 15:06 – How these key trends and themes impact the industry
  • 21:23 – Importance of the hybrid model for intelligent networks
  • 24:14 – What’s on the horizon for 6G networks
  • 26:11 – Ecosystem partnerships for network modernization

Related Content

To learn more about key network trends, read Private Mobile Networks: Options for Scaling the Market and MWC 2024: Private 5G Networks Take Center Stage. For the latest innovations from CCS Insight, follow them on X/Twitter at @ccsinsight and on LinkedIn. For the latest innovations from Intel, follow them on X/Twitter at @Intel and on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And today we’re going to be looking at the network landscape with Ian Fogg from CCS Insight and Wei Yeang Toh from Intel. But, as always, before we get started, let’s get to know our guests a bit more. Wei, I’ll start with you. What can you tell us about what you do at Intel?

Wei Yeang Toh: Hey, glad to be here. So, yeah, thanks for all the intros. Hey, my name is Wei Yeang. I run the ecosystem developments for 5G and edge computing for Intel. So, this function resides within the network and edge solution groups. The challenge is really working with the ecosystem, developing the market, making sure that the long tiers of the value chain will come together, working together to accomplish a common goal to address the challenges that the end user is looking for. And this cuts across a pretty broad market, right? So, we do need a broad set of ecosystem partnerships to cultivate these solutions driving towards a form of maturity and, at the end of the day, to solve our customer problem.

Christina Cardoza: Awesome. Looking forward to getting into that a little bit. At insight.tech, the articles that we’re writing, there’s been this ongoing theme we’re seeing, “better together.” So, excited to get into that ecosystem a little bit more.

But before we get there, Ian, welcome back to the podcast. For anybody who hasn’t seen the recent episode that Ian did on predictions for the next year and beyond for the network landscape, which is—some of them we’ll get into probably today. But, Ian, what can you tell us about yourself?

Ian Fogg: So, I’m a Research Director at a company called CCS Insight. We are an advisory and research company. We track global trends in networks and a whole lot of other areas, including the circular economy on handsets—in terms of media, in terms of enterprise research. My coverage area is around what we call network innovation—that’s what the practice is that I lead.

So, a lot of focus at the moment around things like virtualization, the Open RAN developments, private mobile networks. There’s obviously still interest in things like non-terrestrial networks. And one of the big trends at the moment is around how AI is transforming different parts of the network and driving greater need to use cloud services within the telecom operator.

Christina Cardoza: Yeah. “AI everywhere” seems to be the big theme of 2024, especially in the network space. And since we’re talking about the network landscape and the different trends and predictions that we have, the Mobile World Congress event recently just ended, and there were a lot of themes going around private 5G—a lot of network things happening in that space.

So, Wei, I want to start the conversation there, since both you and Ian were at the event. What were you hearing on the show floor? What were the trends and the themes that you’ve observed? And how do you think those are going to push the industry or this space forward over the next couple of years?

Wei Yeang Toh: Yeah, certainly. I’m pleased to see how things evolve, in fact, every year. Having attended Mobile World Congress for close to about a decade by now, things have been progressing. And there are a few pretty clear, noticeable key themes from this year that we have to discuss.

Number one, of course, AI is everywhere, right? Not only on the show floor, but as well in any of the customer meetings. Towards the end there’s always a question from the customer, “Tell me more about your AI strategy.” It’s kind of funny, getting ready for that question: “Oh, you’re not going to ask for AI?” So, yeah, AI is everywhere, and we see this is the beginning, right? The ecosystem is exploring how to leverage AI for different types of usage in the telco domain—because we’re talking about Mobile World Congress over here, right? So we’re talking about things like leveraging AI in the areas like network optimization, predictive maintenance, customer service, right? And many others, the possible use cases. And, again, it is at the beginning of looking at the form of AI-adoption journey for the telco community.

But then there are a few more topics that with dimensions, that surface up during these shows. And, again, it’s a progressive update, and telco API is another key topic that surfaces up—quite, quite big highlights during the show as well, right? And this is all about—hey, we all, as telco communities here, managed to accomplish quite a lot in 5G deployment, and then what next? And it’s really about—hey, CapEx is already invested, how do we speed up authority and monetization through edge application? And therefore telco API brings a set of standardizations, if you will, right? And it’s helping the community to look at how to capitalize any form of the emerging opportunity across multiple industries.

We really talk about cross-vertical industry over here, and telco API will open up a new era of how connectivity, how edge computing come together and are able to create a form of revenue generation—not only for telco, but at the same time for the edge and vertical-ecosystem partners to come together to utilize the API for a better services creation, better customer experience, and so on.

And then the third thing I want to mention real quick is the entire software-defined network transition is carrying on, right? And we see vRAN (virtual RAN), Open RAN as a continuous—as the next major milestone to go and accomplish. We have done network-function virtualization for core networks, for OSS/BSS, and so on—the journey continues. And vRAN continues to make progress, and we see how much that we manage to accomplish together, staying up to date right now with our partners and telco operators, and the commitments remain there to modernize the infrastructure, to software-define the network infra. And through that we’re able to unlock the infrastructure constraint and get into truly cloud-native in the future. And AI will be there meeting at the junction to unlock the future’s capability.

And then lastly I’m excited about how the partners are talking more about collaboration. It’s all about synergizing the ecosystem as a catalyst to drive innovation and grow, right? And this is a very clear path always to accomplish success, because in this world right now—when we bring 5G, edge computing, AI—it is, like, across three domains, right? And we lived through the first—I shouldn’t say first—we lived through the past AI transformation for NFV, and that involved cloud layer, IT layer, and so on. And now we’re getting into more complex IT, CT, OT, AI—all the cross-domains. And this will require very strong ecosystem collaboration to get that.

So we are excited. We see the company reaching out. They would like to formulate a strategy collaboration towards a common goal. And it’s a very healthy sign, showing that the ecosystem partnership—how it should look like, towards a very clear, common goal.

Christina Cardoza: All exciting stuff. One thing that I love that you mentioned is this isn’t only happening in the telco space. This is impacting across domains, across verticals, across industries, and also bringing improvements to edge computing. I think that’s really important that a lot of these advancements, they’re not happening in silos. It’s really having a huge impact in all areas.

I want to go back to what you were talking about in the beginning of your response about AI. We opened up that AI is a theme going on everywhere. So I’m curious, how is AI coming to the networking space? We’ve written about on insight.tech using AI in manufacturing for predictive maintenance, like you mentioned. But what are the advances that AI can bring to networking? And what did you see at the show within the Intel ecosystem? How were they showcasing some of their AI advancements?

Wei Yeang Toh: Yeah, yeah. The whole show is a big highlight, again, for sure about AI. And telco is at a different stage of the AI adoption journey. And what I mean is depending at which stage of this network modernization and monetization as well, the different telcos, they’re really at different stages right now. But regardless, what we are going to see across various of the objectives and intentions, it is still within the same objective of how do we harvest the best outcome of the AI, right?

And we are talking about whether it is to orchestrate the intelligent network, or do we look at gaining more insight about the network? Or creating a new business opportunity? So most of the conversations that we run into, I could structure it down to probably threefold, to make it slightly simple to follow. Telco, they’re planning the AI-adoption journey based on maturity, based on the KPI they intend to accomplish, because any form of investment, it costs CapEx; it costs an OpEx. And there isn’t a lot of extra, excess, CapEx to spend with a lot of CapEx already invested in 5G. So therefore every single step will have to be well planned with a KPI in mind and the maturity of the certain use cases to get into production. All those in mind, right?

So we see three major areas that telco is looking at. One: across the board we see a lot of the discussion and showcasing around inserting AI into the network layer, right? And we’re talking about use cases like vRAN with AI coming in to help things like power management, beamforming, antenna selections, channel estimation, and so on. There are tons of other opportunities you could look at, but it is about making vRAN TCO look better compared with traditional RAN, right?

And Intel itself, we do take this opportunity to introduce the vRAN toolkits, AI toolkits that we’ve been working on for really a while. It’s great timing that we release it, announce it. It is, again, all about helping partners who already have the strong Intel install base with Intel vRAN platform, and helping partners to unlock the AI capability within the same platforms that they’re already using. Again, it’s a journey, right? So adopting AI, it will be a journey, don’t rush it. Look at the KPI and adopt based on the maturity. So we are doing that. That’s one—inserting the AI into the network layer.

And then we see the second thing is GenAI. Because of ChatGPT and so on, GenAI has become a big topic. But for a player that’s working on AI for a long time, AI is not all about GenAI, right? GenAI is one form. Yes, it’s very much needed technology; it could do good. But GenAI, it’s not the only part about AI. GenAI will bring benefit for different parts of the telco network, and if there’s a set of KPIs defined, the implementation could be done in more a cost effective way by enabling the GenAI capability at the right location and in a more sustainable way, because it will require their power consumption as well, right?

So we have a lot of conversations with the customer around, “How are you going to activate GenAI?” Is it all about consolidating, concentrating the centralized computing for GenAI? Or taking advantage of what we have learned through OpenAI right now, and being able to select the right language models and fit that into the location to do the job that’s required for what you need, rather than get into a big centralized AI, and get to know what to get out of it. It’s going to cost investment. So GenAI, we have a lot of discussions—how to get there, don’t rush. How to get there—we’re here to help you to unlock GenAI by phases.

And then the third piece of it is something not too new but it gets blended into cross-domain discussion, which is this telco-edge use case that talks about API and so on early on, as the deployment of 5G will need each use case to come along, we see telco is embracing itself as a channel and as a platform, telco as a platform to deliver the services to the vertical customer, enterprise customer, and all forms of service will require AI capability as well, right? And, again, AI capability will involve things like computer vision. We’re talking about a lot of cameras, installation out there, and all the video feed coming back. It will require a form of computer vision to do video analytics and so on. And telco is, they’re the data-network pipe.

In fact, video is some of the biggest traffic occupying the pipe, so telco has the right infrastructure to go and not only help the deployment but be able to look at a way to monetize it through the data analytics and so on. And then, plus GenAI. All in all, it becomes a more complete edge solution that require it. So, yeah, all those are three big areas: AI, inserting the networks, introducing GenAI within the telco different locations—new use cases, with different forms of AI, computer vision, machine learning, data analytics.

Christina Cardoza: Yeah. One thing that I love that you said was AI is a journey and not to rush and that this is still an early adoption for the telco space. I feel like we’ve been talking about AI so much and the benefits it can bring in all of these different areas, but it’s still early on in some instances, and not to rush that application of it, to be really strategic about how we want to involve it.

Ian, last time we spoke we were talking about how AI is coming to the network space, maybe bringing some self-healing capabilities to the networks. And you had some predictions there. You mentioned in the beginning that AI was actually one of the big themes you were seeing too. So, since you were at the event—curious from a research perspective and from CCS Insight—what themes and trends did you see? Do you want to touch on anything that Wei mentioned or add anything to that?

Ian Fogg: So, yeah. I mean I think Open Gateway was clearly one of the massive initiatives of the show, as Wei mentioned, particularly driven by the operator side. I think there was vendor support too, but really the operators were really the main drivers of that side of things.

AI was everywhere at the show. What struck me about many of the AI demos and stands was that not everything was new. A lot of the stuff I’d seen last year when there were demos on the stand, but AI wasn’t such a big thing. They didn’t have massive AI labels on it. So, for example, I saw a demo of RAN optimization orchestrating cells together to reduce energy use in the RAN but still maintaining a good enough level of performance, and that demo I saw last year. This year I think it was actually a launch, and it had AI plastered all over it in big letters, but it was there last year, too.

So one of the things about AI is although it’s really high profile at the show this year, it’s built on kind of a long runway of foundations. This hasn’t happened overnight, it’s just that this year—because of what’s happened with ChatGPT and Anthropic and Cohere and Gemini and all the rest of it—it’s a lot more high profile than it was a year ago or two years ago.

There was also AI in different parts of the network. We toured the RAN-optimization piece. You know, you talk with the BSS/OSS people, and there are people there using AI tools for revenue optimization and maximizing revenue generation. There’s stuff happening in the operations domain; there’s stuff on the security domain.

One of the things that was new this year was GenAI, as Wei mentioned. And what is Generative AI? Well, descriptive AI categorizing information like categorizing photos has been around for years. Generative AI doing things like large language models, creating photos, creating videos, creating fluid text interactive interfaces—that is a much newer trend. But where I saw the GenAI models being used was often to democratize knowledge. So it wasn’t doing ChatGPT and training on the whole of the internet; it was vendors taking GenAI models, training them on very defined data sets about, say, vendor tools or regulatory requirements, or whatever, and basically democratizing the information and making it more accessible to more people within an organization.

So that particularly, that was happening in the security space, but not just that. It was also happening in the operations domain. It was happening in a whole load of different areas. And I think that was one of the really interesting things, was seeing that use of GenAI tools to democratize information.

I chaired a panel at the SecCon event at MWC this year. So, SecCon is a security event filled with CISOs. And AI was the key theme of that event-within-an-event. So it’s an event within MWC, and all the sessions were AI focused, and what was becoming very apparent was, from a security point of view, GenAI increases the velocity, the sophistication, and the quality of those security threats. Why does that matter on what we’re seeing on private mobile networks and IoT? Well, if you think about what are the main benefits of private mobile networks, it is that security element around it: that if you have a dedicated network, you have total control over how that behaves. If you go to a hybrid model, where you’re using the macro network as well, and you are tying that back into the enterprise security, again, a lot of the advantages—it’s not just about the performance and the reliability and the predictability; it’s about a security element too.

And you can see with those AI-based threats, security is becoming higher profile in the market. It’s also one of the other things we’re seeing in the private mobile network space; it isn’t just the growth of the hybrid model. It’s also the increasing use of 5G over older technologies for private mobile networks. And that’s important, because 5G is a more modern standard and it has more robust security than legacy mobile technology. So there’s a benefit there too, I think.

The other benefit of 5G is you can tap into things like RedCap, so, reduced-capability, lower-cost 5G devices. They’re still able to tap into the benefits of that 5G core network and are still able to use 5G-specific spectrum that isn’t available for 4G or 3G, but they’re much cheaper devices. And that’s something we can see coming down the pipe based on those 5G standalone rollouts, which, again, was one of the other things we saw momentum around at MWC this year, was this shift to a second wave of 5G—5G advanced, which requires a standalone network, not a non-standalone network. So it’s purely 5G. It’s not using the old 4G core network, what you have in a non-standalone world. You’re moving onto real 5G, complete 5G.

And I think that was one of the other trends we’re seeing. And that enables things like RedCap. And that will give us a greater momentum in having 5G IoT devices. 5G devices suitable for private 5G networks in all kinds of areas. You know, different form factors, different cost elements, different performance profiles. And that will cause an acceleration, I think, in the private 5G space.

Christina Cardoza: Great. You know, I’m not surprised that you mentioned at the event AI was obviously very prominent all over the show floor, but a lot of things that you were seeing were demos that you saw last year, or weren’t necessarily new things. I think this industry—you guys probably both have experienced it throughout your years in technology—we love our buzzwords, and AI is one of the biggest buzzwords right now. But I think what makes it different from all the other buzzwords we may have seen is just all the benefits you were just mentioning. It’s real, it’s not going anywhere, it’s more than just a buzzword. So, excited to see how it continues to progress. Like you said, it’s not happening overnight, so we’re going to continue to see more of these advancements and changes.

But talking a little bit outside of AI, you mentioned a second wave of 5G and other things. I’m curious, because I know at the event, CCS Insight was also talking about the private mobile networks report that you guys recently put out, which is available on insight.tech. I’ll make sure to provide a link for any of our listeners who want to dig deeper into that. But what were some of the findings that came out of that report, and were you seeing any of those on the show floor actually in real life, in real time?

Ian Fogg: Yeah, sure. So, we were seeing that momentum around the hybrid model. That was very, very noticeable, I think, at the show. I think one of the other things that was striking at the show in the private mobile network space was we’ve still got a very large number of vendors in the private mobile network space. But there’s still consolidation happening, there’s pressures happening, and I think there’s a kind of momentum around bigger players in the space. I think that’s one of the other dynamics we’re seeing.

I don’t think it’s necessarily flowing through yet into the numbers behind the report, but I think it’s something that was very apparent at the show. And what’s happened just after the show is this shift of consolidation, this shift to greater scale in some of the vendors coming through. This hybrid model is very important, too, because historically, private networks were just dedicated. You put in your core, you’d put in your equipment, you’d have some spectrum, you’d have your devices connecting to it, and that’s what it would be. And it would be in a factory, on a port, on a logistics facility.

The hybrid model—what that does is potentially extend a lot of the benefits to private network onto a macro network. Now that could be dedicated spectrum, say 450 megahertz or something. Or it could be a network slice on the macro 5G network. Now, what a network slice is, is a way for an existing mobile operator or mobile carrier to have an end-to-end quality of service managed experience that’s segregated from other traffic on the network. So it has a security segregation as well as different quality-performance metrics. That’s a characteristic that’s possible with 5G once you have this 5G standalone network.

And as we’re seeing operators finally deploying standalone networks, finally having 5G calls, we see increasing opportunity for this hybrid model. Now, where that’s useful is, say, take that logistics situation for a second. You have your logistic hubs, you have maybe lorry drivers or couriers or something going outside of that—maybe you want them to stay connected to your network with many of the security benefits of that, but you can’t have your private dedicated network everywhere around the country.

So what you can do is have your dedicated network in the logistics hub in that facility, but when those transportation workers leave that they could be on a network slice on the macro network, on the main mobile-operators network infrastructure. And that’s something that we’re seeing that momentum around standalone. We’re seeing that momentum in our data already around the hybrid model alongside the dedicated model. It’s one of the big growth areas at the moment. And we can see these 5G technologies enabling that opportunity. So that will open up some different dynamics in the private mobile network space.

Christina Cardoza: So still no 6G yet out there.

Ian Fogg: Plenty of things happening on 6G at the moment. It’s just, you know, these things all happen in parallel. The 6G work is all happening behind the scenes. We had the WRC last autumn talking about spectrum usage of 6G. The R & D guys are all working on it. I think one of the things that’s interesting about the 6G discussion—which is relevant back to this—is a lot of the efforts are to have a 6G standard that is simpler than 5G. Because one of the things that I think everyone’s noticed on the vendor community around 5G is that there has been this non-standalone-access rollout of 5G, which is sort of—it’s kind of hybrid 5G and 4G, but it’s being called 5G by all the operators. And a lot of complexity around that, and it’s slowed down the availability of real 5G features where you need that standalone experience.

So one of the things that that’s hitting around when you talk to people about 6G is a general consensus to keep 6G simple, and to only have a standalone version of 6G. Because I think everyone in the industry in the technology space has been frustrated by how long non-standalone access hung around on 5G, and probably the damage it’s done to people’s perceptions of what 5G technology can do. So the focus on 6G is we’re just going to do standalone.

Christina Cardoza: Great, yeah, I agree. I think with all these technologies and advancements coming out everybody sees how it can make their lives a little bit more simple. So that is going to be interesting how it goes and impacts some of the technologies and standards that are going to come out to make it more simple, to make it easier for people to adopt or to access.

I want to change the conversation a little bit. We’ve been talking about all the benefits that we can get from these core technologies and from the network going forward, but I think it’s interesting to talk about how we actually get to those benefits. We talked about the collaboration aspects, we’ve alluded a little bit to the Intel ecosystem. So, Wei, I am curious, how can companies partner together and partner with Intel to take advantage of some of these latest innovations and to really be able to get to network modernization, edge monetization, and these AI advancements we keep talking about?

Wei Yeang Toh: We believe it’s better together in the journey of creating the connected world—we need the different players across industries to come together. And this has been at the heart of our ecosystem program, like the network builders program that’s been running for close to a decade, as well as the newly introduced Intel solution builder just this week in software, doing embedded work. And the idea is really to bring the cross-domain ecosystem partners to come together, right? So, I think a couple examples.

For a network space, it’s a perfect opportunity for the cross-industry to look at a way to upskill themselves by working with their partners, and at the same time through the collaboration help to upskill their workforce to understand the cross-domain knowledge, right? Bring the technology in, adopt the technology, and be able to hire and retrain the workforce so that it will blend in the technology into the respective domains. And it will look and feel and emit the right KPI as well within their own domain. You cannot just look at the solutions, the cross-domain, try to take it and stab into it and make sure that it works. It might be the beginning, but across different phases you have to blend into your own needs.

So we see a couple things happening, and Intel’s been cultivating pushing this forward, supporting the ecosystem, making sure that it happens, right? I point back to what Ian said—earlier example for private 5G. This is how we see a pretty strong maturity of private 5G software. Our partners from US, from Europe, from India, from AsiaPac—they’re different pockets of partners, right? They come with a pretty wide selection. So this no longer is just big equipment vendors like classic Ericsson, Nokia, Juniper, and so on. The playing field, right? It is a market that, because the technology entry barriers are lower, we see a lot of the innovators coming in, and over past few years we see this solution getting into a form of maturity.

So I’m amazed when I stopped by and with my partner we both looked at the out-of-the-box experience, the different capacity of the private 5G. It’s just getting better and better right now. You have this radio-distributed 5G call, network blending, right? And of course they were plus AI, as Ian mentioned. It looks the same as last year—now plus AI. But I was amazed by the maturity of a different set of solutions that will help the respective vertical market player to lower down their barriers to entry because it speaks the same language. Some of the players, they’re more tuned towards one vertical than the other, and it’s really to help to unlock the adoption rate of private mobility as a form of connectivity, just like Wi-Fi over the password into the system itself.

So we see that happen; it’s coming together, right? This year is way more mature than last, and every year we’re just getting better and better. So we see that happen, and we need this side of the ecosystem to continue to come up with a solution to help the industry move forward. One example: we see ecosystem partners stepping in as well, stepping up and stepping in as well to offer, like, telco API gateway. So we have a couple partners coming up from Europe, US, and India—particularly three pockets of the country. They’re offering an interesting solution, it’s a form of API gateway.

And from the edge perspective, we see interesting things happening as well. We see some of the edge verticals, their ecosystem—they participate at MWC. They’re not classic telco, but they come here because we see the trend of the edge or IoT verticals. They are enterprise verticals, they are at the middle of digitizing their solution, right? Software-defined. And by de-coupling the different layers of the solution, it is the same journey that we are going through with telco as well—how to put it back together, right? And by putting that together—when you take it out and put it back together—there’s opportunity to insert the right solution they want into the software-defined.

So we’re talking about, again, connectivity, right? Private 5G now has an opportunity to insert into the software-defined environment. We see security, right? Cross-domain collaboration—they’re reaching out to Intel, asking about the software-defined security solution that can insert into the stack, right? And then the players figure out how to integrate that all together, and it’s a better solution. It’s an evolved solution from the previous one. And we see those trends happening in retail, manufacturing, healthcare, media, entertainment, and so on. It’s exciting, right?

So, working with the right leader in the industry is important because the leader will bring you the rest of the ecosystem, right? And Intel is one of the leaders here. Yeah, we’re proud about it. And I would love to share more when partners, say, are getting into these journeys. And we always welcome more partners. Come and knock on the door: “Hey, we need help.”

Christina Cardoza: Yeah, it’s always so powerful to see that ecosystem and see how partners can work together and can work with Intel. We’ve been talking about a lot of different advancements and solutions going on in this space. And I think it’s clear that no one partner organization can be the expert in all of these different innovations. So it’s great to see them leverage on Intel and leverage other partners to really connect the dots and to bring a bigger solution together and to market and to help with some of these advancements that we see going.

I know we are running out of time, but before we go I just want to hear from each of you again if there’s any final thoughts or key takeaways you want to leave our attendees with, or where you guys anticipate the next significant focus of the network space or the next challenge to be over the next couple of years. So, Ian, I’ll start with you.

Ian Fogg: Sure. So I think there’s a whole lot of things here. I think many of these buzzwords that we hear, all these technologies, we’re still really at the start. So, Open RAN, virtualized RAN, is still really quite early. AT&T made that massive announcement in December. Vodafone in Europe has got some Open RAN stuff. In Japan it’s called Open RAN, but really it’s right at the early stages of that. There’s a lot of runway ahead of us, a lot of opportunity for growth in that. As you virtualize the RAN, you alter the hardware infrastructure, you alter the software play. The vendors can change. Lots of things happening there. AI is still very early in the RAN and in the core and everywhere else as well.

I think the takeaway I’d have on AI is that AI has fundamental benefits, which is why it’s been worked on for so long before this hype kind of rose. And I think the key thing I’d say there is that if and when—probably more like when—there is this collapse in perception of AI, don’t stop working on AI. It is something that’s going to be important, it’s going to stay important, it’s going to be fundamental to many different areas for the future. And I think that’s one of the big takeaways from this.

And then how do you choose to use AI in your networks or in your solutions is also important. It’s not always clear where the best way and how to best to apply it is. So it’s going to be around long term, even if sentiment moves against it—you know, the bubble collapses.

Christina Cardoza: I’m excited to see that evolution and how we’ll get there. And then what will be the next big thing we’ll be talking about, or how AI will sort of go behind the scenes. Wei, is there anything else that you wanted to add or any final thoughts or key takeaways you have for us?

Wei Yeang Toh: I’ll probably just hit three quick points. I said a lot just now, right? And to Ian’s point, vRAN is making progress. Continues to—it will continue to make progress. And it’s a part of the network modernization that will happen. But then execute in a more sustainable way as well, because of power consumption—the carbon footprint will be still top in mind, in everyone’s head that we have to collectively make it happen, right? So, vRAN continues to make progress, but executes the network modernization in a more sustainable way, right? And therefore, whenever every step, plan it ahead in terms of adoption rate.

Second. Starting to unlock the 5G business value. I have a couple conversations with the telcos that some of telcos, they’ve been telling me that, “Hey, if we can’t unlock the 5G, there will not be 6G. It’s going to run out of cash, right?” So it is in all the best interests among the telco community to unlock the business values for 5G faster. And telco API represents the opportunity to speed it up in terms of the path to monetization. So, start with that, right?

And then the third thing is the telco AI-adoption journey. The reason I use the words “telco AI-adoption journey” is because it is a journey, right? And it is a journey that is helping telco in becoming a techco, right? Telco has been talking about it for quite a while, becoming a techco by combining the network modernization, monetization, and AI—it will help the telcos that are transforming, becoming techco, and it’s going to be an adoption journey.

Christina Cardoza: Great. Well, I want to thank you both again for joining the conversation. I urge our listeners to get in contact with Intel. See how you can partner together. And also take a look at some of the reports out of CCS Insight, like I mentioned the recent private mobile networks report that just came out. Because we’ve talked a lot in this conversation, but I feel like we’ve barely scratched the surface of what’s going on and what’s still to come. So dive deeper into some of those reports to see what other innovations and trends are happening. And, again, thank you both for joining us. So, until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

AI-Powered Medical Imaging Solutions Advance Healthcare

The use of edge AI in medical imaging offers the possibility of enormous benefits to stakeholders throughout the healthcare sector.

On the provider side, edge AI imaging can improve diagnostic accuracy, boost physician efficiency, speed case processing timelines, and reduce the burden on overstretched medical personnel. Patients benefit from shorter wait times for their diagnostic test results and a better overall quality of care.

But it can be challenging to develop AI-powered solutions needed to make this promise a reality. The computing requirements to implement edge AI in medicine are high, which has historically made it both difficult and expensive to obtain adequate computing resources. It can also be hard to customize the underlying hardware components well enough to suit medical imaging use cases.

It’s a frustrating situation for anyone wanting to offer innovative AI-enabled imaging solutions to the medical sector—because while the market demand certainly exists, it’s not easy to build products that are effective, efficient, and profitable all at the same time.

But now independent software vendors (ISVs), original equipment manufacturers (OEMs), and system integrators (SIs) are better positioned to innovate edge AI-enabled medical imaging solutions. The prevalence of rich edge-capable hardware options and the increasing availability of flexible AI solution reference designs make this possible.

AI Bone Density Detection: A Case Study

The AI Reasoning Solution from HY Medical, a developer of computer vision medical imaging systems, is a case in point. The company wanted to offer clinicians an AI-enabled tool to proactively screen for possible bone density problems in patients so that timely preventive steps could be taken.

They needed an edge AI deployment that would put the computational work of AI inferencing closer to the imaging devices, thereby reducing network latency and bandwidth usage while ensuring better patient data privacy and system security. But there were challenges.

The edge computing power requirements for a medical imaging application are high due to the complexity of the AI models, need for fast processing times, and sheer amount of visual data to be processed.

In addition, developing an AI solution for use in medical settings involved special challenges: an unusually high demand for stability, the need for waterproof and antimicrobial design elements, and the requirement that medical professionals approve the solution before use.

The solution can automatically measure and analyze a patient’s bone density and tissue composition based on the #CT scan data, making it a valuable screening tool for #physicians. HY Medical (Huiyihuiying) via @insightdottech

HY Medical leveraged Intel’s medical imaging AI reference design and Intel® Arc graphics cards to develop a solution that takes image data from CT scans and then processes it using computer vision algorithms. The solution can automatically measure and analyze a patient’s bone density and tissue composition based on the CT scan data, making it a valuable screening tool for physicians.

The solution also meets the stringent performance requirements of the medical sector. In testing, HY Medical found that their system had an average AI inference calculation time of under 10 seconds.

Intel processors offer a powerful platform for medical edge computing, which allows the company to meet its performance goals with ease. Intel technology also provides tremendous flexibility and stability, enabling the wide-scale application of this technology in bone density screening scenarios.

Reference Designs Speed AI Solution Development

HY Medical’s experience with developing their bone density screening solution is a promising story—and one that will likely become more common thanks to the availability of AI reference designs. These reference architectures make it possible for ISVs, OEMs, and SIs to develop medical imaging solutions for a hungry market both quickly and efficiently.

Intel’s edge AI inferencing reference design for medical imaging applications supports this goal in several ways:

Tight integration with high-performance edge hardware: Ensures that solutions built with the reference design will be optimized for computer vision workloads at the edge. The result is improved real-world performance, better AI model optimization for the underlying hardware, and increased energy efficiency.

Flexible approach to AI algorithms: Because different software developers work with different tools, multiple AI model frameworks are supported. Models written in PyTorch, TensorFlow, ONNX, PaddlePaddle, and other frameworks can all be used without sacrificing compatibility or performance (see the sketch below).

AI inferencing optimization: The Intel® OpenVINO toolkit makes it possible to optimize edge AI models for faster and more efficient inferencing performance.

Customized hardware support: The reference design also factors in the special needs of the medical sector that require customized hardware configurations—for example, heat-dissipating architectures, low-noise hardware, and rich I/O ports to enable connection with other devices in clinical settings.
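To illustrate the framework-flexibility point above, the sketch below shows how a model exported from another framework could be ingested and compiled with OpenVINO. The file names and target device are hypothetical placeholders, not part of Intel’s reference design.

```python
# Sketch of the framework flexibility described above: OpenVINO can ingest a
# model exported from PyTorch, TensorFlow, or another framework (here, a
# hypothetical ONNX file) and compile it for the target edge device.
import openvino as ov

core = ov.Core()

# Convert an ONNX export into OpenVINO's intermediate representation (IR).
model = ov.convert_model("bone_density_net.onnx")  # placeholder model file

# Save the IR so the edge appliance can load it without the source framework.
ov.save_model(model, "bone_density_net.xml")

# Compile for whatever accelerator the device ships with, e.g. a GPU.
compiled = core.compile_model(model, device_name="GPU")
print("Inputs:", [inp.get_any_name() for inp in compiled.inputs])
```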

Reference architectures such as this one shorten time-to-market and reduce the inherent risks of the product development phase, giving innovators a clear path to rapid, performant, and profitable solution development. That’s a win for everyone involved—from solutions developers and hospital administrators to frontline medical professionals and their patients.

The Future of AI in Medical Imaging

The ability to develop innovative, tailored solutions quickly and cost-effectively makes it likely that far more AI-enabled medical imaging solutions will emerge in the coming years. The potential impact is huge, because medical imaging covers a lot of territory—from routine screenings, preventive care, and diagnosis to support for physicians treating diseases or involved in medical research.

Hospitals will be able to use this technology to improve their medical image reading capabilities significantly while reducing the burden on doctors and other medical staff. The application of edge AI to medical imaging represents a major step forward for the digital transformation of healthcare.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

5G Private Networks Close the Connectivity Gap

In almost every sector, there is a push for digital transformation—from manufacturing to healthcare, smart cities, and beyond. Fast, reliable data transfer at the network edge is essential to these efforts. But high-capacity networking is challenging in large, widespread environments or remote operating areas, where traditional wired and wireless network solutions fall short.

“Wi-Fi networks leave coverage gaps and cause latency issues,” says Raymond Pao, Senior VP of Business Solutions at HTC, a provider of connected technology, virtual reality, and 5G networking solutions. “And while commercial 5G networks undeniably offer excellent speed and bandwidth, in many cases they aren’t a viable alternative due to the need for dedicated connections, locality, or security concerns.”

The good news is that private 5G networks deliver high-bandwidth, low-latency connectivity in such scenarios. They offer dedicated, customizable, secure, and performant networks that enable a wide range of digital transformation applications in challenging edge environments. And now, private 5G solutions based on open software and networking standards can help companies deploy applications faster.


Private 5G Enables Factory AGV Solution

HTC's deployment at a factory in Taiwan is a case in point. A maker of high-end digital displays wanted to implement automated guided vehicles (AGVs) in its manufacturing facility. But the proposed solution required seamless network connectivity over a large working area.

The company explored the possibility of using multiple Wi-Fi routers to build a network large enough to cover the entire factory floor. But this approach was ruled out because latency issues would often cause AGVs to stop during handoff between access points. In addition, the Wi-Fi network was not always reliable, leading to concerns over downtime.

Working with the manufacturer, HTC set up a dedicated 5G network to deliver the high-capacity, high-performance connectivity needed to run the AGV solution. Post-deployment, the manufacturer found that the network more than met their needs—and led to significant cost savings as well.

“The integration of AGVs and private 5G networking provides the real-time data needed to improve decision-making and streamline the flow of materials within the factory,” says Pao. “Because of this, our client improved its operational efficiency and has substantially cut down on labor expenses.”

All-in-One Hardware and a Collaborative Approach

Setting up a 5G network is never a trivial undertaking, but all-in-one hardware offerings and the collaborative approach of providers like HTC help simplify the process.

For example, HTC’s Reign Core series, a portable networking system that the company describes as “5G in a box,” provides all the necessary physical infrastructure to implement a private 5G network in a compact, 20kg hand-carry case.

The company also offers extensive support to systems integrators (SIs) and enterprises looking to develop custom 5G-enabled applications. This includes an initial needs assessment, help building and testing a proof-of-concept system and its software applications, and optimization of the solution for scaled deployment.

HTC’s 5G Reign Core solution is also compliant with 3rd Generation Partnership Project (3GPP) mobile broadband and O-RAN ALLIANCE standards. This facilitates the incorporation of components from other vendors that build to the same standards, allowing for more flexible solution development and greater customization. For those developing virtual reality (VR) applications based on HTC’s VIVE VR headsets, the company also grants access to their proprietary VIVE Business Streaming (VBS) protocol for optimized data transfer.

The combination of flexible, self-contained infrastructure, extensive engineering support, open standards, and access to proprietary protocols enables businesses and SIs to create a wide range of 5G-powered use cases—from ICT in manufacturing to VR applications in training, design, and entertainment (Video 1).

Video 1. 5G private networks enable VR for manufacturing, training, design, and other use cases. (Source: HTC)

Partner Ecosystem Drives 5G Transformation

Private 5G solutions enable digital transformation across many industries. In large part, this is due to the mature ecosystem of technology partnerships that support them.

HTC’s partnership with Intel is a good example of this. “We use the Intel® FlexRAN reference implementation to handle processing in our baseband unit (BBU),” says Pao. “FlexRAN efficiently implements wireless access workloads powered by Intel® Xeon® Scalable processors, giving us flexible and programmable control of our wireless infrastructure.”

By building within the FlexRAN partner ecosystem, HTC also gains access to a wide network of potential hardware providers, including server and radio unit vendors. This makes it straightforward for the company’s engineers to develop customized solutions when working with SIs, regardless of the vertical they’re selling to.

This is one reason the company foresees potential 5G networking applications in sectors such as logistics, defense, and aerospace—and a far more connected world in the years ahead.

“Digitization is happening in every sector, so wireless communication will become much more important in the future,” says Pao. “For customized use cases that demand secure, high-bandwidth, low-latency connectivity, private 5G is going to be a powerful force for digital transformation.”

 

Edited by Georganne Benesch, Editorial Director for insight.tech.