Game-Changing Tech Takes Event Experience to the Next Level

Pulling off an event like the Olympic and Paralympic Games involves intricate behind-the-scenes work. From connecting people through private 5G platforms to creating virtual experiences and using AI and digital twins for planning and execution, expertise and reliable partnerships are crucial.

This podcast explores how advanced technology is leveraged to create interactive and immersive event experiences, the essential partnerships involved, and a forward-looking perspective on future innovations in event management.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guest: Intel

Our guest this episode is Sarah Vickers, Head of the Olympic and Paralympic Program at Intel. Sarah has been working on Intel’s Olympic and Paralympic program for about seven years. She’s responsible for all aspects of the games, including operations, guest experience, and ecosystem support.

Podcast Topics

Sarah answers our questions about:

  • 1:13 – Intel’s involvement in the Olympic and Paralympic Games
  • 2:58 – Event preparation before, during, and after the event
  • 7:01 – The process of launching a large-scale event experience
  • 8:32 – What happens with all the data after the event
  • 10:31 – New technology innovations for event experiences
  • 11:58 – The value of Intel’s ecosystem and partnerships
  • 12:48 – Applying Intel technology beyond the Olympics

Related Content

For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to insight.tech Talk, formerly known as “IoT Chat”, but with the same high-quality conversations around Internet of Things, technology trends, and the latest innovations you’ve come to know and love. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And today we’re going to be talking about how technology can uplevel event experiences with Sarah Vickers from Intel.

But as always before we get started, let’s get to know our guest. Hi, Sarah. Thanks for joining us.

Sarah Vickers: Hi, it’s great to be here.

Christina Cardoza: What can you tell us about yourself and what you do at Intel?

Sarah Vickers: So, I’ve been with Intel about nine years, but I’ve been working on our Olympic and Paralympic program for about seven. Currently I’m responsible for all aspects of our Olympic program, which includes games operations, our guest experience, and anything to support the ecosystem.

Christina Cardoza: Great. And of course, the Olympic and Paralympic Games are happening in Paris soon. So, very exciting. I wanted to start the conversation around there. We’re going to be talking about event experiences, but since the Olympics is such a timely event, I wanted to see if you could give us an overview of Intel’s involvement at the event. What motivated you guys to become a technology partner? You said you’ve been doing it for the last couple of years, so how has it evolved?

Sarah Vickers: Sure. One of the things that Intel loves about the Games is that it is really the largest and most complex sporting event on Earth, with billions of viewers around the world. So it’s a really exciting opportunity for us to demonstrate Intel’s technology leadership in a really scalable way.

We’re not doing this to have proofs of concept; we’re actually integrating our technology to help with the success of the Games. And we think about that in a variety of ways, because there are so many different aspects to calling the Games successful. You’ve got the really complex operations to deliver the Games—moving athletes and fans and volunteers around, getting people from A to B. That’s complex in itself, but do that across 17 days and across so many sports. It’s super complex.

You’ve got the broadcast experience—so, billions of people watching at home. That’s just evolved and become more complex when you think about all the different devices and how people consume media. So we do a lot of applications working with Olympic Broadcasting Services to deliver outstanding experiences based on Intel technology.

You’ve got the fan experience, whether that be, again, operationally, ease of getting around, versus actually how do you entertain during the Games. The sports themselves provide a great sense of entertainment, but there’s all that in-between time. What can we do to help them make that experience even better?

Christina Cardoza: And you talked about how this is such a large event over the course of a couple of days. I can imagine how complex it is during those days, but how is Intel technology being used behind the scenes—not only during the event itself, but how are you preparing before the event, making sure everything is up and running and it’s a smooth experience? And then what happens after the event? Because I’m sure it’s not just for the actual live sessions.

Sarah Vickers: We start working on the Games years before, with the International Olympic Committee (IOC), with the International Paralympic Committee, the organizing committee—in this case, Paris 2024—to really try to understand what are we trying to solve? How can we take what we’ve done in the past and make it better? Or, what are the new challenges that have evolved since the last Games?

So we really work it as a partnership and really think about what we are trying to do. We have taken solutions that we’ve done in Tokyo and made them better. A good example of that is what we’re calling digital twinning. Digital twinning is the opportunity to have a digital twin of all the venues and really understand what the venues are going to be like in a 3D way.

How this helps is, if you think about broadcasters, they really need to understand where camera placement’s going to be and how that’s impacted by different things. If you think about the transition from the Olympic Games to the Paralympic Games, you’ve got a lot of changes that you need to do for accessibility and things like that for the athletes. This makes it possible to do those things in advance, rather than doing it as it happens and figuring out, oh, this solution actually doesn’t work.

So there’s a lot of benefit to that, as well as just the opportunity to reduce travel. You can do it from anywhere, you can do it from your PC. So it makes it really easy. Another use case that we’re helping out with, from an operational perspective, is really just understanding the data. So there’s a lot of people behind the scenes, right? If you think about all the media that’s on the ground, all the workforce, we’re helping the International Olympic Committee and Paris 2024 understand that people movement to optimize facilities for them.

So that could be either making sure that we’ve got the right occupancy levels, making sure that people have the right exits and entries—really using that data to make real-time decisions based on that data. But what that also does is it helps inform the next Games because they’ve got a base set of data that they can use to help model and plan for those complicated situations.

A final example that I’ll give, just from an operational perspective, is on the athletes’ side. This is the athletes’ moment. For some of them it’s the highest moment in their career. And really what you want to do is make it as uncomplicated as possible. You want them to be able to focus on their performance and not think about the things that they have to think about to get to that performance. So whether that be food, whether that be transportation, whether that be accommodations—there’s so many different things while they’re there that they need to think about.

We’ve worked with the IOC, and we’re implementing a chatbot for them for these Games. So, really a chatbot based on our AI technology platforms. What that’s going to do is it’s going to enable athletes to ask questions, get conversational answers about day-to-day things. And that will continue to get smarter as we get more answers and understand what’s working. So that’s going to be used throughout the Games, which I think is going to be a game changer for athletes.

Christina Cardoza: Yeah, absolutely. And talking about things getting smarter, I think it’s probably so exciting to see how technology has evolved over the last couple of years, and now you’re able to leverage all of these new tools to help make the Games better.

You mentioned digital twins. I’m sure you are using some Intel® SceneScape behind the scenes. And then there’s all this AI and all the processors that Intel has to really make everything, like you mentioned, real time: make sense of the data, make sure that you can make informed decisions in real time. And all of this, I’m sure, is happening at the edge so it is low latency and we’re getting all this information as quick as possible.

You said you guys start preparing for the Games years in advance. I’m sure there’s a lot of planning and preparation that goes into this. Just looking at not only how are we going to integrate all this technology and make it make sense and eliminate some of the silos, but what it looks like during the event, what it looks like after the event. So, how do you start off years in advance? Walk me through the process of getting from: the Games are coming up, this is how we prepare, and then this is how we launch it.

Sarah Vickers: Really what we do is we sit down and say, “What are the things that need to be delivered?” Right? There’s a set of expectations for every Games, and then there’s that set of expectations of what do we want to do that’s different? And it’s really a process where we sit down and we ask those questions: both, what are you trying to solve, what are you worried about; and what are the things that need to happen?

And then we do an assessment and say, “How can Intel’s technology help?” And we work very closely with a number of partners to try to figure that out. Then we develop a roadmap of solutions. And for each roadmap of solutions it’s a typical technology integration, where we have a plan and a PM that works closely with those stakeholders to deliver it.

Some of those solutions are delivered in advance. So, digital twinning, for example—the benefit of that is not during the Games. The benefit is really before the Games, in the months before the Games. So that solution has been in use over a long period of time. And then you’ve got other solutions that are obviously for during the Games. So it really depends on the technology integration what that process looks like. And then hopefully during the Games everything goes smoothly, and we can just enjoy it and watch our technology shine. But we have staff on site to make sure that everything runs smoothly and goes off without a hitch.

Christina Cardoza: Is there anything that happens after the Games? Any more work that’s being done on Intel’s side to make sure that if there were recorded sessions or recorded games, or anything that we point to post-event?

Sarah Vickers: I think if you think about what happens at the Games, there’s so much data, right? So there’s so much data, and data means so many different things. So you’ve got content, right? You’ve got broadcast data, you’ve got all the highlights and all those things that are being done. You’ve got all the data that we’re helping the IOC collect to understand people movement and things like that. So that data is definitely being used to help plan the next set of Games. When you think about broadcast, that broadcast information is being used to create models and understanding for future Games as well, or future entertainment.

One of the really interesting use cases that we’re working on with Olympic Broadcasting Services is AI highlights. We are actually creating highlights using artificial intelligence platforms, and that’s going to help create highlights that just weren’t possible before, because they were all generated by people, and there were only a certain number of people that could do that over time.

But if you think about what we talked about earlier, where how people consume broadcast is changing, people are much more demanding in their expectations of broadcast and want things that are a little more personalized. And you’ve got 206 different countries participating in the Games, multiple languages, multiple sports. And there are countries where certain sports are really important that aren’t a priority for some of the bigger countries that usually dominate this space.

So what the AI highlights can do is generate highlights that are really customized based on certain things. This is really exciting, and we’re going to see this evolve over time, because what will happen is the models will learn over time and they’ll get smarter, and then you’re going to have even better and more awesome highlights for the fans.

Christina Cardoza: Yeah, I was going to ask if there were any lessons learned that you have experienced over the last couple of years that you’re bringing into this event, or if there’s any new technologies and innovations out there that you’re excited to use. It sounds like it’s AI and digital twinning this year. Is there anything you wanted to add to that?

Sarah Vickers: I mean, I think when you think about AI and Intel and the whole idea behind “AI Everywhere,” this is really excellent ground to demonstrate how Intel’s AI platforms will change a lot of aspects of the Games. So we’re really excited about a lot of our activations that are demonstrating what we can do with AI, and I think what’s happened over time is just that technology and AI have gotten smarter; it’s becoming more mainstream. So you’re just going to see more of that, because that’s what the expectations are. And we can use that data—the compute is possible now—to build those models. So we’re going to have a lot of different AI applications throughout the Games.

Christina Cardoza: It’s interesting looking at an event at such a global scale, because at insight.tech we write a lot about Intel® Partner Alliance members, how they partner together with Intel to make various different things happen: digital signage in stores, the data analytics, the cameras, the people occupancy. It sounds like all of this is happening at the event. So, all of these technologies that we’ve been talking about, that our partners are working with Intel to make happen, it’s such a scale that it’s this end-to-end solution. Everything is happening at the Olympics: the networking, the real-time analytics, everything at the edge.

So I’m curious, what is the value of Intel’s ecosystem and the partnership to make something like the Olympics and the Paralympics happen?

Sarah Vickers: Intel doesn’t do things alone, right? Like you said, we rely on strong partnerships to help deliver that. We really work and try to understand what solution is best and then work with that ecosystem to help deliver that. And that can be a variety of types of partners. So, we have the lucky opportunity to work with some other top Olympic partners, and then we work through some of our other partners at the local level, and we work across our ecosystem to help make this happen. We definitely cannot do it alone.

Christina Cardoza: And of course we’ve been talking about all of this technology in the context of the Olympic and Paralympic Games, but there are other events and other use cases I think some of this could be applied to. So I’m curious, how can Intel technology be used beyond the Olympics? What are some other industries or sectors where you see the things you’ve been doing before, during, and after the event being applied?

Sarah Vickers: Sure. I think there’s—almost every application that we have—there’s an application for that both at other events, but also beyond sport. So I think the way we think about it is, how does this demonstrate what we can do, and then how does that scale?

I’ll give another example of a use case that we’re doing that’s a really fun application of AI platforms, which is really what we’re calling AI Talent Identification. We are using AI to do biomechanical analysis to help fans that are going to be at athletics and rugby understand which Olympic sport they’re most aligned to. So they’re going to do a bunch of fun exercises, we’re going to mash up that data, and then tell them, “Okay, you are most likely to do this.” And that’s just a fun application of AI.

But if you think about what that biomechanical analysis can do, that can be used in a variety of ways. If you think about physiotherapy, if you think about occupational health, there are a lot of different ways this same application can help improve people’s lifestyles. You think about digital twinning—that application has gone beyond sport, and you’re seeing a lot of it in manufacturing, in cities, in all of these different areas where this type of technology has the opportunity to help benefit the outcome of whatever the goals may be.

Christina Cardoza: Yeah. That reminds me of the demo Pat Gelsinger did last year at Intel Innovation, where he was trying to—I believe it was being a soccer player—and learning how he could improve his skills using AI and some of these biometrics. So it’s great to see how it’s advancing from last year, how it can actually be used in the real world, and how it is actually being implemented in some of these areas. So, exciting to see this technology.

I’m curious—I know we’ve covered a lot about the Olympic Games, are there any key takeaways that you think our listeners should know about doing an event at such scale using Intel technology? Any final thoughts you want to leave us with today?

Sarah Vickers: The Games are going to be a massive event, and in this post-pandemic era I think we’re all excited to see the Games back to their glory, where there’ll be fans in the stands. It’s really exciting, but it’s obviously very complex. Paris is a giant, complicated city without an Olympic Games or Paralympic Games, and so bringing that on is going to be really hard. But by working with Intel and trusting your partners, we can help develop the solutions to deliver an amazing Games. And we’re really excited to be a partner of the International Olympic Committee and the International Paralympic Committee to help make these Games the best yet.

Christina Cardoza: Absolutely. Well, I can’t wait to see the Games in action and some of this Intel technology we’ve been talking about. I invite all of our listeners, if you have any questions, or are looking to partner with Intel and leverage some of this technology in your own event experiences, to visit the Intel website and to keep up to date on insight.tech where we’ll be continuing to cover some of Intel’s partners and what Intel is doing in this space.

So I want to thank you again, Sarah, for joining the podcast today, as well as our listeners. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Industrial Machine Vision for Manufacturing and Smart Cities

Machine vision applications promise greater efficiency, safety, and profitability—particularly for the industrial and smart city sectors—due to their ability to enhance inspection and quality control.

In factories, for example, automated optical inspection (AOI) can reduce manufacturing errors and increase productivity. And vision-based systems in smart cities can provide safer streets and better urban traffic control. But despite the broad range of potential use cases, these solutions can be difficult to implement.

“Industrial and outdoor urban environments are harsh, making it difficult to deploy industrial machine vision solutions in those settings,” says Kevin Lee, Senior Business Development Manager at Portwell, an industrial computing specialist that manufactures compact IPCs for machine vision applications. “In addition, there are tremendous demands for reliability and some strict space constraints in many industrial and smart city use cases.”

The good news is that modern embedded industrial PCs (IPCs), like Portwell’s WEBS-89I0, offer a computing platform that makes it possible to deploy machine vision solutions in even the most challenging scenarios. Rugged, flexible, and adaptable, these powerful edge computing platforms enable a range of new applications and already deliver value in multiple markets.

Embedded IPCs Unlock Machine Vision Benefits Worldwide

Portwell’s deployments in the EU and APAC regions are a good example of this.

In Japan, a large construction firm was looking for an automated solution to inspect and monitor building projects remotely. The company wanted to achieve technical oversight of field operations without the cost and inconvenience of sending an engineer or technician to the build site for manual supervision. But the environmental conditions were challenging, with temperatures on-site ranging from 5°C to 45°C.

Modern embedded industrial PCs offer a computing platform that makes it possible to deploy #MachineVision solutions in even the most challenging scenarios. @Portwell_US via @insightdottech

Portwell helped the company set up a remote monitoring solution based on WEBS-89I0, its fanless box PC, which could withstand the rigorous operating environment while ensuring the reliability of the system. Cameras installed on-site would help to supervise operations to ensure that proper procedures were being followed and that the project was proceeding as planned, with the IPC doing the preprocessing and then transmitting relevant data to the company’s Microsoft Azure cloud for further analysis.
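The preprocess-at-the-edge, analyze-in-the-cloud pattern Portwell describes is straightforward to sketch. The snippet below is a minimal illustration of that pattern only—not the firm’s actual deployment code—assuming a generic USB camera, the open-source OpenCV and azure-iot-device Python libraries, and a placeholder IoT Hub connection string.

```python
# Minimal edge preprocess-and-upload loop: capture a frame, shrink it, and
# forward compact telemetry to Azure IoT Hub for cloud-side analysis.
# CONN_STR and the capture source are placeholders, not deployment values.
import json
import time

import cv2
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder

def preprocess(frame):
    """Downscale and grayscale the frame to cut bandwidth before upload."""
    small = cv2.resize(frame, (320, 240))
    return cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

def main():
    client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
    cap = cv2.VideoCapture(0)  # on-site camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = preprocess(frame)
            # Send lightweight telemetry; full frames stay at the edge.
            payload = {"ts": time.time(), "mean_brightness": float(gray.mean())}
            client.send_message(Message(json.dumps(payload)))
            time.sleep(5)  # sample every few seconds
    finally:
        cap.release()
        client.shutdown()

if __name__ == "__main__":
    main()
```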

After implementation, the firm had achieved the level of oversight required, and no longer had to spend time and money sending skilled supervisors to the job site.

In a second Portwell deployment in the Netherlands, a system integrator was attempting to implement a smart city solution for a municipal government. The SI and local officials were concerned about safety and security on the city’s streets, and wanted to develop an automated surveillance system to detect dangerous situations and alert the authorities when necessary.

But due to the setting, the environmental constraints were extremely challenging. Reliability was also a concern, as the potential for equipment damage to an outdoor solution was high, and it would be inconvenient and costly for the SI to send an engineer to repair a computer in the field.

Portwell helped the system integrator develop a machine vision security system using its fanless embedded industrial PC as the edge computing platform. WEBS-89I0’s fanless design was chosen to reduce the probability of malfunction, since PC fans are the component that breaks most frequently when a computer is in constant operation. With this, a network of security cameras was set up around the city. Cameras were connected to the embedded IPC for edge analysis, with algorithms programmed to detect behavior that would raise an alert. The IPC’s built-in SIM card slot also made it possible to route data to a remote control center over local cellular networks.
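A generic version of such an alerting loop might look like the sketch below. Simple OpenCV background subtraction stands in for the SI’s proprietary detection algorithms, and the stream URL, alert endpoint, and area threshold are invented placeholders.

```python
# Generic edge-alert loop: flag unusually large motion regions in a camera
# stream and notify a remote control center over the network (here, the
# cellular link described above). All endpoints and thresholds are placeholders.
import cv2
import requests

ALERT_URL = "https://control-center.example/alerts"  # placeholder endpoint
MIN_AREA = 5000  # pixel-area threshold worth reporting (tune per scene)

subtractor = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder camera feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)  # foreground (moving) pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        requests.post(ALERT_URL, json={"event": "motion", "camera": "cam-01"}, timeout=5)

cap.release()
```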

Once deployed, city officials had their desired computer vision-based security solution—one that would require minimal maintenance and upkeep in the future.

Industrial Machine Vision: Flexibility and Reliability Speed Time to Market

Obviously, major differences exist between roadside traffic control systems, industrial AOI, and smart city safety solutions. The key to an embedded IPC platform that facilitates rapid development of diverse applications is flexible design and reliable, high-performance edge computing.

Portwell’s WEBS-89I0 embedded industrial computer, for example, offers a number of design features that make it easier for engineers and SIs to build for custom use cases.

Multiple USB and Gigabit Ethernet ports enable engineers to connect the WEBS-89I0 IPC to standard hardware devices like cameras; RS-232 and RS-485 ports offer extra connectivity for industrial equipment; and dual output ports provide a way to link the computer to displays. In addition, the computer’s compact footprint—a palm-sized 138mm x 102mm x 48mm—means it can be embedded into almost any solution without significantly increasing the overall size.
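As a rough illustration of how those serial ports get used in practice, the sketch below polls a hypothetical RS-485-connected sensor with the open-source pyserial library; the port name, baud rate, and command bytes are invented for the example.

```python
# Poll a hypothetical RS-485 sensor from one of the IPC's serial ports.
# Port name, baud rate, and the request/response framing are illustrative only.
import serial  # pyserial

with serial.Serial("/dev/ttyS1", baudrate=9600, timeout=1) as port:
    port.write(b"\x01\x03\x00\x00\x00\x01")  # hypothetical read command
    reply = port.read(8)  # fixed-length reply, per our made-up protocol
    print("raw sensor reply:", reply.hex())
```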

On the reliability front, Portwell’s technology partnership with Intel has been of great help in developing its embedded industrial PC. “Intel processors provide the balance of performance, stability, and energy efficiency needed to develop embedded applications,” says Lee. “Our partnership with Intel also gives us early access to next-generation processors, which helps us deliver market-leading solutions to our customers.”

For enterprises and SIs attempting to develop industrial machine vision solutions, this blend of powerful, reliable computing and flexible, adaptable design makes it easier to get to market faster—even when building highly customized solutions for buyers.

Collaboration Enables Wide Range of Industrial Machine Vision Apps

It seems likely that organizations in nearly every sector will look to incorporate computer vision technology into their operations in the years ahead.

In part, this is because implementation is now easier than ever, as modern AI technologies solve machine vision engineering problems more efficiently than older approaches.

“In the past, something like a factory AOI system for defect detection would have been tremendously complex to build using traditional programming methodologies,” says Lee. “But given the state of AI computer vision today, such a system can be designed and implemented far more quickly.”

In the smart city and industrial sectors in particular, availability of rugged, powerful embedded IPCs should help overcome adoption hurdles.

“Everyone in the smart city and industrial space wants machine vision applications because the business case is so clear,” says Lee. “But until recently, the biggest challenge was finding a suitable edge computing platform to implement those solutions. We believe we’ve overcome that obstacle.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Build AI Applications Faster with a Low-Code Platform

Whether the goal is to speed office tasks or impress customers with chatbots, today’s businesses are increasingly eager to deploy AI applications.

Once launched, AI applications can be a boon to productivity. But creating them can be a time sink, especially for generative AI solutions, which are powered by large language models and image recognition systems that require extensive fine-tuning and testing.

Now there’s a better way to bring AI solutions to fruition. Using a low-code platform, businesses can develop custom AI applications much faster. Low-code applications are more straightforward to maintain and customize to accommodate future use cases.

Simplifying AI Solutions Development

In the competitive world of AI applications, timing is a critical factor, says Brian Sathianathan, Co-Founder, Chief Digital Officer, and Chief Technology Officer at low-code AI platform developer Iterate.ai. “A lot of companies want to be the first to market with innovative solutions. But it’s hard to do because their IT and technology teams already have their hands full,” he says.

Sathianathan and his colleagues founded Iterate to simplify the AI application-building process, shortening development time from months to weeks. “On average, it’s eight or nine times faster to take an AI idea from concept to reality,” Sathianathan says. “Less complex AI solutions can be created up to 17 times faster.”

Iterate simplifies the #AI application-building process, shortening #development time from months to weeks. @IterateAI via @insightdottech

Iterate saves time by creating pre-written blocks of code for various AI capabilities, such as chatbots, payment systems, or image recognition. Using the company’s Interplay platform, developers can drag and drop the code blocks into their solutions.

“It’s like building a luxury home from parts delivered on a truck,” Sathianathan says. “We send you entire kitchens, bedrooms, and bathrooms, and you can put them together very quickly.” The code blocks are grouped into customized solutions for industries such as finance and insurance, retail, and automotive.

Saving Time with a Low-Code Platform

Interplay’s enterprise office solution, GenPilot, allows organizations to build their own generative AI large language models (LLMs) from internal data and documents. Many LLMs specialize in tasks such as financial planning or logistics management, and GenPilot allows companies to select the models they prefer. Though public LLM solutions such as ChatGPT and Microsoft Copilot can also be used for generative AI solutions, some companies hesitate to upload their information to them.

“Public models are shared in a multi-tenant cloud environment. We provide a secure private environment, and companies can run their models on-premises,” Sathianathan says. Banks, insurance companies, and other organizations can also build in compliance rules governing data in various regions.

For employees, GenPilot saves hours of time by gathering and interpreting documents across databases. For example, if an insurance customer emails a company representative with a question, but neglects to supply their policy number, GenPilot will not only find it but determine how the policy applies to the question, how much the customer pays for services, and whether a change would alter the fees. It then composes a reply to the customer’s email.

“It responds intelligently in plain English,” Sathianathan says. Companies can set rules governing tone of voice and level of technicality.
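Conceptually, the flow is: locate the customer’s records across internal data stores, reason over them, and compose a reply. The sketch below shows only that retrieve-then-respond shape, with invented data and a template standing in for the model’s generated answer; it is not GenPilot’s internals.

```python
# Retrieve-then-respond sketch: find a customer's policy record, then draft a
# reply. The data store and template are illustrative stand-ins; a production
# system would hand the record and question to an on-prem LLM instead.
POLICIES = [  # stand-in for documents spread across internal databases
    {"customer": "jane@example.com", "policy_no": "P-1043", "monthly_fee": 82.50},
]

def find_policy(email: str):
    """Locate the policy record even though the customer omitted the number."""
    return next((p for p in POLICIES if p["customer"] == email), None)

def draft_reply(email: str, question: str) -> str:
    policy = find_policy(email)
    if policy is None:
        return "We could not locate a policy for this address."
    return (f"Regarding policy {policy['policy_no']}: your current fee is "
            f"${policy['monthly_fee']:.2f}/month. We'll address: {question}")

print(draft_reply("jane@example.com", "Would adding a driver change my fee?"))
```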

For unstructured documents, such as PDFs, employees can use a different solution, the Interplay OCR Reader. This application translates images into machine-readable text and initiates workflows. For example, when bank employees upload customers’ scanned documents to the OCR Reader, it extracts relevant information and enters it onto a loan application form.
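That extract-a-field-then-start-a-workflow pattern can be sketched with generic open-source tools. The example below uses pytesseract as a stand-in OCR engine—Interplay’s OCR Reader is its own pipeline—and an invented regular expression for the field of interest.

```python
# OCR a scanned document and pull one field into a structured record that a
# downstream workflow (e.g., a loan application form) could consume.
# The field pattern is invented; real forms need per-template rules.
import re

import pytesseract
from PIL import Image

def extract_loan_fields(path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(path))  # image -> raw text
    match = re.search(r"Account\s*(?:No\.?|Number)[:\s]+(\w+)", text, re.I)
    return {"account_number": match.group(1) if match else None}

print(extract_loan_fields("scanned_statement.png"))  # placeholder file
```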

Streamlining Retail AI Management

One of Iterate’s latest solutions is Interplay Drive-Thru, which builds voice-enabled chatbots to take customer orders and make upselling recommendations at busy quick-serve restaurants (QSRs).

Chronic labor shortages often require QSR workers to perform multiple duties: packaging food, taking payments, and serving in-store customers as well as those at the drive-thru. “Chatbots give them a little more breathing room,” Sathianathan says. Orders are processed faster, shortening lines for customers and increasing throughput for restaurants.

Drive-thrus and other retailers can speed payments with Interplay’s LPR (license plate recognition) solution. Customers who opt in supply a photo of their license plate and credit card, and are recognized by computer vision cameras as soon as they arrive at a participating business. Interplay LPR, which complies with GDPR and other privacy regulations, is currently deployed at more than 1,000 gas stations and convenience stores in Europe.

“It will automatically open the pump for customers and charge them for gas. These actions happen within 30 milliseconds,” Sathianathan says.
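Stripped to its essentials, the opt-in flow is a lookup from recognized plate to stored payment token, followed by two actuation calls. The sketch below illustrates only that shape; the recognizer, pump controller, and payment call are hypothetical stubs, not Interplay APIs.

```python
# Opt-in LPR flow sketch: recognized plate -> payment token -> open pump and
# charge. Every function below is a hypothetical stub for illustration.
OPTED_IN = {"AB-123-CD": "payment-token-001"}  # plate -> tokenized card (invented)

def recognize_plate(frame) -> str:
    """Hypothetical stand-in for the computer vision plate recognizer."""
    ...

def open_pump(pump_id: int) -> None:
    """Hypothetical station-controller call."""
    ...

def authorize_payment(token: str) -> None:
    """Hypothetical payment-gateway call."""
    ...

def on_vehicle_arrival(frame) -> None:
    plate = recognize_plate(frame)
    token = OPTED_IN.get(plate)
    if token is None:
        return  # not opted in: take no action and keep no data
    open_pump(pump_id=3)
    authorize_payment(token)
```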

Interplay’s LLM solutions are deployed on Intel® processors. Applications that run on high-performance CPUs are more cost-effective for businesses than those that also require GPUs, as many LLM solutions do.

“A system using only CPUs costs $2,500 to $4,000 per machine. An equivalent GPU/CPU combination would be $8,000 to $12,000,” Sathianathan says. Retail IT teams are also more familiar with standard operating systems, reducing training time.

Once a low-code solution is deployed, developers can easily move the same Interplay code blocks into new solutions, instead of having to sort through millions of lines of code to make changes. In addition, Interplay’s code blocks use the Intel® OpenVINO toolkit, enabling developers to optimize their AI applications more efficiently. “You can use up to 350% less compute power with OpenVINO. That’s a huge benefit,” Sathianathan says.
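For a sense of what CPU-only deployment with OpenVINO looks like, the generic sketch below loads an IR model and runs one inference using the toolkit’s Python API; the model path and input shape are placeholders, and this is not Interplay’s code.

```python
# Load an OpenVINO IR model and run a single inference on an Intel CPU.
# "model.xml" and the input shape are placeholders for a real exported network.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder IR file
compiled = core.compile_model(model, "CPU")  # CPU-only target, as discussed above

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # example NCHW input
result = compiled([dummy])                   # run inference
print(result[compiled.output(0)].shape)      # first output tensor
```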

Bright Future for Low-Code AI Solutions

Today’s AI applications enable companies to automate processes in ways that would have been unthinkable just a few years ago, Sathianathan says. “AI solutions can do sales calls. They can generate legal documents, which are traditionally expensive to produce.”

Using low-code building blocks, small businesses as well as large enterprises can develop solutions like these quickly and affordably. That will help to expand the reach of AI applications and level the playing field, Sathianathan says: “Very soon you will see many new automation capabilities being developed. Startups will be able to punch above their weight, and costs will continue to come down for everyone.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Built-in Functional Safety Speeds Robot Development

In today’s factories and warehouses, robots are no longer fenced off from humans. The two often work side by side, with robots taking over arduous tasks like transporting heavy objects—or tedious ones, like spray-painting or palletizing goods.

These collaborative robots, or “cobots,” increase efficiency and reduce the risk that workers will develop muscle strain or injuries. But ensuring that they interact with people safely is no easy accomplishment. Robot developers can spend years building, testing, and retesting safety features, which must meet rigorous certification requirements. That delays the release of models with the latest and greatest capabilities and leads to longer time to revenue.

But now there’s a way for machine builders to bring their robots to market sooner. Building them with pre-certified processors, control boards, and electronics can spare them months or years of extra work.

Faster Robot Development

Critical systems like robots that work in cooperation with humans must include robust functional safety (FuSa) controls. FuSa is an international standard methodology for automatically detecting and mitigating electronic system malfunctions in critical systems—in this case, malfunctions that could harm people. For example, if a robot’s FuSa system indicates that it is veering off-course or traveling too fast, it will send a signal to stop all moving parts.
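In application-level pseudocode, that supervision step reduces to comparing measured state against certified limits and latching a safe stop on any violation. The Python sketch below is purely conceptual—real FuSa channels run on certified hardware and firmware, not application scripts—and its limits are invented for illustration.

```python
# Conceptual FuSa-style supervision check: measured state vs. certified limits.
# Limits and the stop callback are illustrative, not real certified values.
from dataclasses import dataclass

@dataclass
class SafetyLimits:
    max_speed_mps: float = 1.5      # invented velocity limit
    max_path_error_m: float = 0.05  # invented off-course tolerance

def check_and_stop(speed_mps: float, path_error_m: float,
                   limits: SafetyLimits, stop_all_motion) -> bool:
    """Trigger a safe stop if any measurement exceeds its limit."""
    if speed_mps > limits.max_speed_mps or path_error_m > limits.max_path_error_m:
        stop_all_motion()  # e.g., de-energize drives / assert the safety output
        return True
    return False

# Example: an over-speed reading halts all moving parts.
assert check_and_stop(2.0, 0.01, SafetyLimits(), stop_all_motion=lambda: None)
```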

To gain approval for their cobots, developers must build in FuSa controls for every action they perform that could affect people. Their traveling velocity, the amount of force they use to grasp an object from a human hand, the torque they exert in rotation—these and many other variables must meet exacting ISO standards, and sometimes country-specific ones as well. Both hardware and software related to functional safety must obtain certification.

For the hardware, each of the thousands of electronic components in a robot’s embedded computer must obtain certification from a qualified institute. If a developer builds robots from scratch, the process can take several years. “That’s why we created a safety controller that uses pre-certified CPUs,” says Weihan Wang, Manager of Robot Products at NexCOBOT, a NEXCOM International Co., Ltd. company and developer of intelligent robot control and motion control solutions.

The NexCOBOT SCB 100 safety control board contains pre-certified Intel Atom® x6000 series processors, saving time for both NexCOBOT and its developer customers. “We don’t need to prove the CPU is safe because Intel has already done that,” Wang says. In addition, the entire SCB 100 board itself is FuSa certified.

Along with its silicon and software, Intel provides technical documentation such as safety manuals, safety analysis, and user guides, which also makes the certification process faster and simpler.

With all the hardware pre-certified, robot builders using the SCB 100 board can develop their applications immediately, instead of waiting for hardware approval first. They can further speed software development using built-in Intel software libraries, which enable them to easily import existing applications and develop customized safety protocols for capabilities to fit specific customer needs.

As #robots learn to do more, their interactions with humans start to look less like command-and-control and more like teamwork. @NEXCOMUSA via @insightdottech

Ensuring FuSa for Critical Systems

The SCB 100 control board safeguards robot actions with Intel® Safety Island (Intel® SI), which is integrated into the Intel processor. Safety Island supports functional safety, orchestrates Intel on-chip diagnostics, reports errors, and monitors customer-safety applications. When a robot is in operation, Safety Island constantly checks its calculations for direction, speed, force, and other factors in real time to make sure it operates correctly. “There are over a hundred different issues that could cause a problem, including a power deviation or a memory failure,” Wang says.

If a safety error occurs, the system brings the robot to an immediate halt and sends feedback about the problem to the operator’s systems integrator.

The processors have the performance power to run multiple AI and computer vision workloads—combining non-safety motion control with safety applications. This allows developers to build in more functionality while saving space and money. The result is a lighter, more compact robot that is easier for customers to install and deploy in tight spaces.

The Future: Robots as Partners

As robots learn to do more, their interactions with humans start to look less like command-and-control and more like teamwork. For example, instead of using a handheld safety pendant for robot training, a developer may directly hand the robot an unfinished part, then guide it to a CNC machine to deposit it for contouring.

“In the future, there will be more and more collaboration between humans and robots,” Wang says. In the next five to 10 years, he expects to see “humanoid” robots—with artificial arms and legs—working alongside people in factories, shops, and warehouses.

The more duties robots assume as they work with humans, the more built-in safety they will need. Regulators who once required developers to provide two or three FuSa controls are already asking for more than 30, Wang says. As more-capable robots march onto factory and warehouse floors, the pressure to release models with advanced features will increase. Using a pre-certified safety control board will help developers bring highly complex models to market faster.

Using high-performance chips will help, too, Wang says, adding, “High-end computing performance allows robots to execute lots of safety functions, and they can do it without using multiple CPUs.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Healthcare Advances Diagnostic Accuracy and Speed

AI in healthcare is changing the face of diagnostic medicine, helping doctors work more accurately to improve patient outcomes.

Use of edge AI in endoscopy procedures is a prime example. Endoscopy involves inserting a tube with a camera (endoscope) into the body to obtain images or video of the patient’s organs and tissues. Endoscopy procedures have multiple uses, but among other things they are a vital diagnostic tool to help gastrointestinal (GI) medicine specialists screen for cancer. Endoscopies allow these physicians to detect polyps, benign but potentially problematic growths, and in particular, adenomas, which are polyps that doctors consider precancerous.

But even the most experienced doctors may be challenged to reliably interpret images from an endoscopy.

“The medical literature tells us that physicians fail to spot polyps during colonoscopies at a rate of 22% to 28%,” says Sabrina Liu, Product Engineer at ASUSTeK Computer Inc., a global developer of diversified computing products. “It’s inherently difficult work: Some adenomas are extremely small and hard to see, while polyps have different morphologies that can make them easy to miss on a video feed.”

In addition to the technical challenges of endoscopies, there are also basic human limitations. For example, a doctor at the end of a long shift might be more fatigued and prone to mistakes than at the start of the day. And a junior clinician is unlikely to be as proficient as a more experienced colleague at interpreting medical imagery.

Today’s innovative solutions use edge AI and computer vision to enhance traditional endoscopy equipment. And these systems have already been deployed in real-world clinical settings—with promising results.

Clinical Deployments Demonstrate Improved Accuracy

The ASUS Endoscopy AI Solution EndoAim, currently used at multiple hospitals in Taiwan, is a case in point.

The system highlights AI-detected polyps on the screen in real time by analyzing up to 60 images per second, calling attention to anything the physician may have missed. If they want to inspect a region of interest more closely, they can switch to narrow-band imaging (NBI) and the system will automatically classify selected polyps as adenomas or non-adenomas. Doctors can also use the system to perform one-click measurements of polyps, whereas before they typically determined polyp size by visual judgment, which had a relatively low accuracy of approximately 62.5%.
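At a high level, the assistance loop is: grab a frame, run the detector, and overlay results before the next frame arrives. The sketch below shows only that shape, with OpenCV and a hypothetical detect_polyps() standing in for the system’s model; ASUS’s actual pipeline is proprietary.

```python
# Illustrative per-frame assist loop for an endoscopy feed: detect, overlay,
# display. detect_polyps() is a hypothetical stand-in for the real model.
import cv2

def detect_polyps(frame):
    """Hypothetical detector returning (x, y, w, h) boxes."""
    return []  # stand-in: no detections

cap = cv2.VideoCapture("endoscopy_feed.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_polyps(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("assisted view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```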

The results of the solution in clinical settings are impressive. “Physicians have seen their adenoma detection rates improve by 15% to 20% on average,” says Liu. “There is also a significant improvement in detecting small polyps—as well as time savings, because doctors can now measure polyps more quickly and accurately during endoscopies.”

Using #EdgeAI to improve the accuracy and diagnostic consistency of endoscopies will likely appeal to many physicians—and the physical features of these systems add further incentives for adoption. @ASUS via @insightdottech

AI Toolkits, Edge Hardware, and Collaboration Speed Time to Market

Using edge AI to improve the accuracy and diagnostic consistency of endoscopies will likely appeal to many physicians—and the physical features of these systems add further incentives for adoption.

EndoAim is based on a miniature edge PC with a compact form factor of 12cm x 13cm x 5.5cm—a critical consideration in hospital examination rooms where space is at a premium. In addition, the system can be connected to existing endoscopy equipment without specialized medical hardware, making it easier and more cost-effective for clinicians to begin using AI immediately.

The ASUS partnership with Intel was crucial in developing a market-ready product. “Intel CPUs with integrated graphics processing helped us reduce our solution’s overall size—and achieve an image analysis rate of 60 FPS, which is the highest rate currently available to physicians,” says Liu. “Using the Intel® OpenVINO™ toolkit, we also optimized our computer vision models, enabling them to run more smoothly and efficiently.”

The two companies’ collaboration shows how technology partnerships make it possible to offer powerful solutions to medical device buyers—and do it faster than ever before.

“We started work on EndoAim in 2019 and had an early model toward the end of 2020, which is when we turned to Intel for engineering support,” says Liu. “By 2021, we had the version of the product that we wanted to take to market.”

The Future of AI in Healthcare: GI Medicine and Beyond

The fact that solutions providers can innovate edge AI systems more quickly and effectively is good news for doctors, patients, and healthcare SIs, as it will no doubt enable other use cases in coming years.

ASUS is already at work on some of those new use cases with its current endoscopy system. Liu says the company plans to expand its computer vision solution to other aspects of gastrointestinal medicine, such as the analysis of imagery from the upper GI tract and the stomach. In addition, ASUS engineers are looking at ways to use AI to build solutions that go beyond detection and diagnostic support and enable the prediction of illness, helping doctors to catch potential problems earlier so patients can begin treatment sooner.

Beyond GI medicine, the underlying computer vision algorithms behind EndoAim could eventually be applied to other types of medical imaging. “We see the potential to expand this technology to analyzing imagery from ultrasounds, X-rays, MRIs, and more,” says Liu. “There’s a tremendous opportunity to help people here, and we’re excited to hear from clinicians in different medical fields and see how we can develop solutions to meet their needs.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Digitizing the Manufacturing Supply Chain from End to End

Addressing supply chain inefficiencies continues to be a problem for manufacturers. Legacy systems and information silos cause inventory shortages and production delays.

This podcast explores how digitizing the manufacturing supply chain, from raw materials to delivery, can revolutionize your operations. We discuss how AI, real-time data analysis, and other technologies can optimize performance, unlock valuable insights, and shape the future of supply chain management.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guests: iProd and Relimetrics

Our guests this episode are Stefano Linari, CEO of iProd, a manufacturing optimization platform provider; and Kemal Levi, Founder and CEO of Relimetrics, a machine vision solution provider.

iProd is an Italian startup founded in 2019. There, Stefano focuses on creating software-as-a-service solutions for manufacturing companies of all sizes.

Relimetrics was first established in 2013. At the company, Kemal leads a global team committed to the Industry 4.0 movement and transforming how manufacturers design and build products.

Podcast Topics

Stefano and Kemal answer our questions about:

  • 2:41 – Manufacturing supply chain pain points
  • 4:47 – Supply chain areas ripe for digitization
  • 7:49 – Technologies optimizing supply chain efficiency
  • 11:14 – AI’s role in modernizing the supply chain
  • 12:59 – Real-world manufacturing supply chain efforts
  • 23:00 – The value of leveraging technology partnerships
  • 27:57 – The future of the supply chain from end to end
  • 30:18 – How AI is going to continue to evolve this space

Related Content

To learn more about the manufacturing supply chain, read AI Boosts Supply Chain Efficiency and Profits and Unified Data Infrastructure = Smart Factory Solutions. For the latest innovations from iProd, follow them on LinkedIn. For the latest from Relimetrics, follow them on Twitter/X at @relimetrics and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” formerly known as IoT Chat but with the same high-quality conversations around IoT technology trends and the latest innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to explore digitizing the manufacturing supply chain with experts from Relimetrics and iProd, but as always before we get started, let’s get to know our guests. We’ll start with Kemal from Relimetrics first. Please tell us about yourself and your company.

Kemal Levi: Hi, I am Kemal Levi, Founder and CEO for Relimetrics. We enable customers with a proven, industrial-grade product suite they can easily use to control and automate quality assurance processes across use cases with no code. And using our product our customers are able to build, deploy, and maintain mission-critical AI applications on their own, in conjunction with any hardware. This can be done both on-prem or in the cloud, and a key industry challenge that our product repeatedly succeeds in tackling is our ability to adapt to high production variability, which is commonly experienced in today’s manufacturing.

Christina Cardoza: Great. Looking forward to getting into that and how that is going to impact the supply chain or bring benefits to the supply chain. But before we get there, Stefano Linari from iProd, please tell us about yourself and the company.

Stefano Linari: Hello, I am Stefano, Stefano Linari. I am the Founder and CEO of iProd. iProd is an Italian startup founded in 2019 to create the first holistic tool designed for manufacturing companies of every size, accessible for free as a software as a service. Our users can leave behind tons of poorly integrated software—ERP, MES, CRM, IoT platforms—and use just one modern cloud platform, our platform.

Christina Cardoza: Awesome. So, I wanted to start off the conversation just getting the state of things right now. Obviously a couple of years ago the supply chain was headlining the news almost every day for weeks on end—just the challenges and the obstacles. But I feel like there have been a lot of advancements and integrations in the technology space, and we’ve been able to get over some of the pain points we were feeling a couple of years ago.

But I’m curious what challenges still remain or where are the pain points today. Stefano, if you want to talk a little bit about what’s going on at the manufacturing and supply chain level.

Stefano Linari: Yeah. The supply chain unfortunately is still poorly integrated, especially for small and medium enterprises, where digital tools are not up to date and not easy to integrate because they are legacy technology. We are far away from the concept of so-called manufacturing as a service, where manufacturing capabilities are accessible in a fluid way. That would require a highly integrated, multi-tier supply chain, able to digitally orchestrate and provide a custom-made piece while optimizing cost, impact, and use of resources.

Unfortunately, even on the other side of the supply chain, if you look at the OEMs, we face other issues. Companies are not able to serve the new buyer emerging in their industry, the machine customer—a digital product that is able to autonomously purchase spare parts and accessories from the OEM itself and even from third parties. For example, a turning machine that, after digitalization, can order a belt or a gear after a certain number of working hours. This is still far from reality.

Christina Cardoza: Yeah, you make some great points there Stefano, and one thing I want to discuss a little further is you mentioned a lot of the problem is that there’s still legacy systems in place, and I’m sure that’s creating a lot of silos that these machines can’t talk to each other. Data is not end to end.

So, Kemal, I’m curious from your perspective where are some areas that manufacturers can start digitizing aspects of the supply chain and how that’s going to help address some of the pain points Stefano just mentioned?

Kemal Levi: First of all, digitizing aspects of manufacturing helps to trace quality across the supply chain. As parts move along the supply chain, quality automation helps to identify anomalies before they get to the customer and risk downtime. So for the entire supply chain, and particularly for the OEMs, it is really important to trace the quality status of parts or products from a multitude of suppliers, and also to run data analytics to see which one is actually performing better and weed out those vendors who are not performing well.

Now, digitizing aspects of manufacturing also helps to improve the bottom line. As manufacturers ship products to their customers, they must identify issues with outbound transportation and logistics. So a magnifying lens looking at different points of the supply chain gives better visibility to improve margins, and in the sectors that we typically serve, margins are often razor thin. So maximizing the number of items getting to the end of the manufacturing line that meet the required quality standards has a direct impact on the bottom line.

Another example: digitizing aspects of manufacturing helps companies make better supply chain decisions, and correlating acquired data across the product life cycle—all the way from manufacturing to sales to service—enables continuous business intelligence. A company that can trace quality in real time and better assess where quality issues originate can ultimately boost profitability.

Christina Cardoza: Yeah, absolutely. I’m glad you mentioned the quality-automation aspect of the supply chain. I feel like sometimes when we talk about supply chain challenges, we are often thinking about deliveries and shipments and getting manufacturing production out the door. But it also starts—it’s an end to end issue—it starts on the factory floor; it starts as you are developing these products, making sure that everything is high quality, that it can go out the door and can be delivered on time. So that’s a great point that you made, and then looking at the different points of the supply chain so that it’s really an end-to-end experience.

Stefano, I’m curious, as we look at quality automation and all of the different parts manufacturers need to be on top of in order to have this end-to-end digitized supply chain, what technologies are being used? Or how can we start enhancing and optimizing supply chain efficiency?

Stefano Linari: From our side, all of this can start from the demand side. If we start to build intelligent machines that can be transformed into a machine customer, we can create more predictable demand. We can avoid rushing to produce spare parts and install them in an unplanned way, creating a simpler condition for optimizing the supply chain. So from our side, over the last year, we have been pushing this new paradigm inside OEMs.

What we have created to support OEMs in handling a new generation of machines that we call the “machine customer” is a free, self-service interface in the cloud where each OEM can create their rules and the identity—the digital twin—of every machine in a few minutes. Gartner, in its latest book, “When Machines Become Customers,” recognizes our platform as the first machine-customer-enabling platform in the world.

We are creating the conditions to digitalize the supply chain. Because when you speak about potential savings, entrepreneurs are interested, but they are engaged when you tell them about increasing revenue. And with our technology embedding new intelligence on board the machine, we are transforming the production tool into a point of sale. And this is a remarkable shift in the mindset of the OEM, one that is easily understandable.

Christina Cardoza: So I’m curious, because we were talking about the legacy systems earlier in the conversation, is this a software approach that we can take to digitizing the supply chain? Or does there have to be investments in new hardware? Or can we leverage existing infrastructure?

Stefano Linari: We have to combine both, because for sure software platforms can make the interface and user experience simple, but we can’t forget that manufacturing tools and equipment, automatic warehouses, and production machines are not yet intelligent enough to analyze their needs and simplify life for the end user and the OEM. So we need a combined approach at the moment.

Christina Cardoza: Great. And of course when we are talking about adding intelligence and doing things like quality automation, AI comes to mind. AI seems to be everywhere these days. Kemal, you mentioned you were—you have an AI approach to being able to provide that quality automation and look at different parts of the supply chain. So I’m curious, from your perspective, what is the role that AI should be playing in these supply chain processes?

Kemal Levi: Well, AI in supply chains can deliver the powerful optimization capabilities required for more accurate supply chain inventory management. It can also help to improve demand forecasting and reduce supply chain costs, and this can all happen while fostering safer working conditions across the entire supply chain. Traditionally the supply chain has relied on manual inspections and sorting.

So I would like to give an example that centers around smart inventory management. This process—the inventory management process—can be labor intensive and prone to error, adding costs and losses. Today, AI-driven quality-automation tools like ReliVision can be deployed without requiring any programming skills or prior machine learning knowledge, and they offer access to real-time information that can improve efficiency and visibility. Similarly, AI can also be used in conjunction with computer vision and surveillance cameras to monitor work efficiency and safety objectively, and provide data-driven insights for businesses to optimize workflows and improve their productivity.

Christina Cardoza: So do you have any customer examples? I know you just provided the inventory use case, but I'm curious if there are any customer examples you can share with us: what problems they were facing, how Relimetrics came in and was able to help them, and what the results were.

Kemal Levi: A good example is a renewable energy leader that engaged with us to help inspect its wind turbine blades before they're released to customers. Using our AI-based quality-automation and non-destructive inspection-digitization platform, this customer is now able to automate the inspection of phased array ultrasonic data and assess the condition of blades before they are placed in the field.

The main challenge our typical customer has is digitizing inspections, which are time-consuming and prone to error, and improving traceability across their supply chains. With our product, customers can rapidly implement AI-based machine vision algorithms on their shop floor without writing a single line of code. They can share and train models across inspection points and leverage existing camera hardware, irrespective of image modality: infrared, X-ray, or phased array ultrasonic testing (PAUT).

Christina Cardoza: I love the no-code approach you're taking, because a lot of manufacturers see these benefits and want to achieve them, but with the labor shortages in their space they don't always have the skills to deploy these solutions as fast as they'd like. So I love seeing how we can make this more accessible.

When you have these no-code solutions, who are the types of users able to implement them in practice? Do you need engineers? Or can an operator or a manufacturing manager take part in this as well?

Kemal Levi: Well, we want to enable process engineers to build AI solutions, and not only build but also deploy and then maintain them. What we see is that maintenance of AI solutions can be quite costly, so we are making it possible for non-AI engineers to maintain them.

Of course, we can serve AI engineers as well; we help them prototype their AI solutions faster and deploy them to the field. Maintenance, again, is an important piece that AI engineers typically want to hand over to operators once a solution is successful in the field. And this is exactly what we do: we make it possible for the maintenance of AI models, and the training of new AI algorithms for new products and new configurations, to be done by non-AI folks.

Christina Cardoza: Yeah. It's amazing to see how far the technology has come and how non-AI folks can be involved, especially since these are the people on the factory floor with the domain knowledge. They can spot quality issues and train some of these models better than an AI engineer without that deep manufacturing experience probably could.

Stefano, I'm curious, from iProd's side, what are the solutions and products you have on the market that are helping your customers in these different areas? And do you have any customer examples you could share with us as well?

Stefano Linari: Yes, we have several machine-customer use cases, spanning the concrete industry, industrial filtration, and manufacturing. But I want to present the most significant case, which was done with Bilia. Bilia SPA is the third-largest builder of turning centers, and its machines are sold to automotive companies, manufacturers of consumer goods, and many other industries where metal parts are needed.

Most of those machines are installed on shop floors, often in small and medium enterprises. In Italy, and in Europe in general, most companies have fewer than nine employees, so you can imagine that little IT expertise can be found on the customer side.

So we have equipped this turning machine with an external brain. Technically speaking it's a panel PC, but we like to describe it as an IoT tablet to make it friendlier for the end user. With this tablet we have two connections at the same time: one to the CNC of the machine, through which we acquire real-time data about usage and resource consumption; and another, usually Wi-Fi or a cellular link, to the iProd cloud.

It's a bundled solution, because we have to give the end user security and trust that no sensitive data about their process, their secret sauce for creating the perfect piece, is exfiltrated. Then, in the cloud, Bilia's process engineers and maintenance engineers use a visual approach, as Kemal described before. Even in this case no programmer or coder is needed: a cloud wizard lets you simply drag and drop spare parts and services from the Bilia catalog onto conditions. These can be simple rules (every 1,000 hours, change the filters or top up the oil) or, looking forward, AI and ML models that predict more accurately what must be changed.

The main question when we started this project was: "Why does the end customer have to accept that the turning machine will ask him to buy something? I have spent €200,000 on this machine, and now every day it asks for more money?" So it was a bit scary. But customers not only accepted the recommendations; they asked the machine for more. They allocated a dedicated budget to the machine itself, usually on the order of €200 per month. It's not a big budget, but it's spent in the most efficient way, because under this threshold the machine can place the order automatically, and you receive a notification on your mobile: "Hey Stefano, in a couple of days you will receive the new filter." Or a new belt, and so on, for €50 or €60, because most spare parts are cheap. But we estimated the cost of manually placing and processing an order, and it is never below €50 for each side.

So the end user knows that if the machine never stops, because it autonomously orders the spare parts, consumables, and periodic service it needs, he is saving money. And the same items purchased autonomously are probably even cheaper, because otherwise someone on each side has to spend time answering emails and phone calls, sending contracts, and so on. So something that at the beginning sounded very difficult, given the lack of digital skills, has become a real market success.
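
To make the pattern concrete, here is a minimal sketch of a budget-capped machine-customer rule. All names, prices, and the ordering function are hypothetical illustrations of the behavior Stefano describes, not iProd's actual wizard or API:

```python
MONTHLY_BUDGET_EUR = 200.0  # cap the machine may spend autonomously

RULES = [
    # (spare part, trigger: reorder every N working hours, unit price)
    ("oil filter", 1000, 55.0),
    ("drive belt", 2500, 60.0),
]

def place_order(part: str, price: float) -> None:
    # Stand-in for a real ordering API plus the mobile notification.
    print(f"Order placed: {part} (€{price:.2f}); owner notified by push message")

def evaluate_rules(working_hours: float, spent_this_month: float) -> float:
    """Fire each rule whose usage trigger has just been crossed, but only
    while the autonomous spend stays under the monthly budget."""
    for part, interval_h, price in RULES:
        just_crossed = working_hours % interval_h < 1.0
        if just_crossed and spent_this_month + price <= MONTHLY_BUDGET_EUR:
            place_order(part, price)
            spent_this_month += price
    return spent_this_month

evaluate_rules(working_hours=1000.5, spent_this_month=0.0)  # orders the filter
```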

Christina Cardoza: Yeah. And I'm sure that's a common scenario in the industry: not knowing where to start, worrying about getting started, how much it's going to cost, how complicated it's going to be, whether it will be wasted effort. So it's great to see how manufacturers can partner with companies like iProd and Relimetrics to integrate some of this and really make improvements in the supply chain.

One thing that comes to mind, and I should mention that insight.tech is sponsored by Intel, is that we're talking about artificial intelligence, the cloud, real-time capabilities, and insights, and I'm sure you're working with other partners to make this all happen end to end. Much like your customers, sometimes we need to rely on expertise from other areas.

So I'm curious how you're working with partners like Intel, and what the value of that partnership and their technology is. Kemal, I can start with you on that one.

Kemal Levi: In our implementations we are taking advantage of Intel processors and Intel hardware such as Intel® Movidius vision processing units, and we are also often relying on Intel software such as OpenVINO to optimize deep learning models for real-time inferencing at the edge.

Now, in the case of quality automation, or digitizing visual inspections, customers are very sensitive about computing hardware costs, and they care quite a bit about smart utilization of the CPU, so we use the Intel OpenVINO toolkit to minimize that burden. And as an Intel market-ready solution provider, we also have access to a large community of potential buyers of our product.
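
For readers unfamiliar with the toolkit, here is a minimal sketch of CPU-only inference with the OpenVINO Runtime Python API. The model file and input shape are placeholders, not Relimetrics' actual models:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_detector.xml")           # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")  # CPU-only edge target

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in camera frame
scores = compiled([frame])[compiled.output(0)]             # run inference
print("top defect score:", float(scores.max()))
```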

Christina Cardoza: Great. We always love hearing about OpenVINO. That is a big toolkit in the AI space, and like you mentioned, it takes some of the burden off engineers by letting them build a model once and deploy it on many different types of hardware. So it's great to hear.

Stefano, I'm curious, from iProd's end, how are you working with partners like Intel, and in what areas does their technology really help the iProd solution benefit customers?

Stefano Linari: At the moment we widely use Intel embedded mobile processors, because even though we don't yet run heavy AI and ML workloads, what our customers certainly want is to reduce energy consumption at the edge. You have to consider that each IoT tablet is installed on top of a production machine in a harsh environment, so we need a fanless processor with high computing power and low standby consumption.

We also use Intel connectivity for Wi-Fi, because we need connectivity that is reliable in terms of EMC, in difficult spaces where you have high-power welding machines and robots. OpenVINO and the new Intel Core Ultra processors are also on our roadmap. We are starting to experiment with these features to accelerate ML and AI models, especially to predict usage, because the tablet combines (I didn't mention this before) not only the IoT data coming in from the CNCs but also, from the cloud, the schedule of the next batches to produce.

What we are trying to do is forecast production, because you have to combine factors like how many working hours this machine will run if a given deal is won. Most of the calculations have to be done on the edge, because the customer doesn't want sensitive information to leave the company. For a manufacturer that produces pieces for the aerospace industry, or for high-end machines (a supercar like Ferrari, just to name a brand), the technology inside the software of the CNC machines can be half the value of the company, and you don't want to transmit this information even to iProd; you want to process it all on the edge.
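
As a rough illustration of that on-edge forecast, the sketch below projects a consumable's remaining life from its usage counter plus the batch schedule received from the cloud. The data structures and numbers are assumptions, not iProd's actual model:

```python
from dataclasses import dataclass

@dataclass
class Consumable:
    name: str
    service_interval_h: float   # replace every N working hours
    hours_since_service: float  # usage counter read from the CNC

def hours_scheduled(batches: list[dict]) -> float:
    """Sum the estimated machining hours of the batches queued from the cloud."""
    return sum(b["parts"] * b["hours_per_part"] for b in batches)

def needs_reorder(item: Consumable, batches: list[dict],
                  lead_time_h: float = 48.0) -> bool:
    """True if the part would hit its service interval before a replacement,
    ordered now, could arrive."""
    projected = item.hours_since_service + hours_scheduled(batches)
    return projected + lead_time_h >= item.service_interval_h

queue = [{"parts": 500, "hours_per_part": 0.2}, {"parts": 120, "hours_per_part": 0.5}]
oil_filter = Consumable("oil filter", service_interval_h=1000, hours_since_service=870)
print(needs_reorder(oil_filter, queue))  # True: order before the machine stops
```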

Christina Cardoza: Yeah, absolutely. One thing I love about these processors and toolkits is how fast the technology advances. Things we were only talking about a month ago are now becoming reality, and manufacturers sometimes have trouble keeping up with all of the advances and capturing all of the benefits. But partners like Intel are making improvements to these processors every day to ensure we can keep up with the pace of innovation.

I’m curious, Stefano, how else do you think this space is going to change? What do we have to look forward to for the future of the supply chain?

Stefano Linari: I agree with what Kemal said before: what we see is a digital continuum, from the machines to the OEM to the OEM's suppliers, creating a continuum of information. We don't want to spend time on the order process; that is the piece considered a waste of time, and Amazon and other online stores are driving the user experience. B2B expectations are now inspired and driven by the B2C experiences of day-to-day life.

The second main point pushing digitalization, and it will become mandatory in the next few years, at least in Europe, is ESG regulation and the so-called Supply Chain Act. Starting in 2026 a company has to present an ESG report, accounting for the emissions that each process in the company generates, and the main focus is obviously on the manufacturing side. With the Supply Chain Act you have to provide this information not only to the public through the ESG report; you have to share data points with your customers in real time or near real time. This means the supply chain must become heavily integrated in the next few years.

Christina Cardoza: Great point. And you mentioned sustainability, just as earlier we talked about how some of these technologies can improve worker safety. There are so many different areas we could cover, and we've only scratched the surface in this conversation.

Unfortunately we are running out of time. So, before we go, Kemal, I just want to throw it back to you one last time. Are there any final thoughts or key takeaways you want to add? What can we expect from the future of supply chain management, or how else will AI continue to evolve in this space?

Kemal Levi: Well, as I said before, I think there will be a lot of focus on real-time data analytics and on correlating acquired data across the product life cycle. This goes all the way from manufacturing to sales to service, to enable continuous business intelligence and drive better supply chain decisions.

And looking to the future, I think companies will strengthen demand planning and inventory management in tandem with their suppliers. There will be data visibility at all levels, whether from in-house manufacturing, suppliers and logistics partners, or customers and distribution centers. The supply chain will no longer be driven by uncertainty in demand and execution capabilities; instead, it will be characterized by continuous collaboration and flow of information.

Christina Cardoza: Well, I can't wait to see how that all shapes up over the next couple of years, and to follow the advancements and innovations Relimetrics and iProd continue to make in this space. I invite all of our listeners to visit the iProd and Relimetrics websites to see how they can help you digitize the supply chain from end to end and really get that continuum of information in all aspects of your business.

And also visit insight.tech, where we will continue to keep up with iProd and Relimetrics and highlight the innovations that are happening in this space. Until next time, this has been “insight.tech Talk.” Thanks for joining us.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Edge AI Paves the Way to Seaport Management

With the majority of international trade goods shipped by sea, ports are vital engines of business and economic growth. But as populations surge, emerging economies develop, and the volume of global trade increases, seaports face serious challenges.

“Port authorities today are struggling to manage vehicle traffic in and around ports, leading to inefficiency and delays,” says Sim Tiong Yan, Business Development Manager at Gamma Solution SDN BHD, a provider of smart city solutions. “Worker safety and port security are also major concerns.”

Ironically, the most significant port traffic challenges don’t involve ships but rather land vehicles that transport cargo. That may come as a surprise, but there are several reasons why ground traffic is so problematic in port areas.

Every truck arriving at a port must first check in with the port authority. The registration process is usually manual and can be quite slow—resulting in long lines of vehicles waiting to check in and creating traffic bottlenecks. In addition, drivers sometimes disobey port traffic regulations: stopping in no-parking zones, speeding, driving the wrong direction on a one-way route, or staying longer than their allotted time. This can interfere with operations and further slow the flow of traffic into and out of the port.

Further, the ongoing issue of port backups has caused environmental concerns, making it imperative to come up with innovative port management solutions.

#SmartCity solutions based on #EdgeAI and #ComputerVision help manage port traffic more effectively while also improving port safety and security. Gamma Solution SDN BHD via @insightdottech

Port Traffic Management Challenges and Solutions

The good news is that smart city solutions based on edge AI and computer vision help manage port traffic more effectively while also improving port safety and security. Built on flexible, modular edge hardware, these solutions can be deployed to ports all around the world and customized to suit local needs.

The Gamma TITANUS EYEoT solution, for example, employs optical character recognition (OCR) to streamline vehicle check-in by automatically registering each vehicle's license plate at entry and capturing the cargo container codes that truck drivers will need. Computer vision helps detect illegal parking and traffic violations, and measures the total time each vehicle has spent in the port. If a problem is detected, an official receives an alert so they can take corrective action.
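
The logic behind automated check-in and dwell-time alerts can be sketched in a few lines. This is an illustration of the general pattern only; the names and threshold are hypothetical, and TITANUS EYEoT's implementation will differ:

```python
import time

check_in_log: dict[str, float] = {}  # license plate -> entry timestamp
MAX_DWELL_S = 4 * 3600               # hypothetical allotted time in the port

def on_gate_camera(plate_text: str) -> None:
    """Called when OCR reads a plate at the entry gate."""
    check_in_log[plate_text] = time.time()
    print(f"{plate_text} checked in automatically")

def dwell_violations() -> list[str]:
    """Plates that have overstayed their allotted time, for operator alerts."""
    now = time.time()
    return [p for p, t in check_in_log.items() if now - t > MAX_DWELL_S]
```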

Edge AI Offers Safety, Security, and Equipment Monitoring

Gamma's solution helps solve key safety challenges facing port managers, such as detecting hard hats and reflective vests, helping ensure that workers comply with proper procedures. The system's AI object recognition algorithm can also differentiate between humans and vehicles, and can send warnings to port operators if a person wanders into a vehicle-only zone, or if a truck goes through a pedestrian area.

In addition, the system contains equipment-monitoring capabilities for sensitive and potentially hazardous machinery. For example, ports often house chemical facilities, where tanks are carefully monitored to ensure that they do not exceed the safe temperature range, as an overheated tank could result in a fire or explosion. The TITANUS system uses thermal cameras and AI analytics to measure tank temperature, alerting a safety officer if danger is detected.
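
In essence, the thermal-monitoring module reduces to threshold alerting over a temperature image. The sketch below assumes the camera yields a calibrated 2D array of temperatures in °C; the threshold and frame are invented for illustration:

```python
import numpy as np

SAFE_MAX_C = 60.0  # hypothetical safe ceiling for a chemical tank

def check_tank(thermal_frame: np.ndarray) -> None:
    """Alert a safety officer if any pixel exceeds the safe temperature."""
    hottest = float(thermal_frame.max())
    if hottest > SAFE_MAX_C:
        print(f"ALERT: tank hotspot at {hottest:.1f} °C; notify safety officer")

# Synthetic 120x160 frame with a hotspot along the diagonal.
check_tank(np.full((120, 160), 41.5) + np.eye(120, 160) * 25.0)
```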

Combining cameras and AI also delivers more effective port security. The Gamma computer vision-based intrusion detection module can identify an unauthorized person trying to sneak into the port—but won’t create a false alarm if an object lands on the perimeter fencing. Biometric technology enables tiered access to sub-areas within the port. For example, an IT technician might be granted access to office areas, but not industrial zones.

ASEAN Case Study Highlights Potential for Customization

A good example of smart city solutions comes from Gamma's custom deployment at a port in Southeast Asia. The port operator had several safety and efficiency problems it wanted to solve. Gamma's engineers proposed three possible implementation approaches:

  • Run the system on edge AI boxes and AI cameras, with all processing and automation performed right at the edge.
  • Connect standard IP cameras to a back-end server, with AI analysis and decision-making handled on the server.
  • Adopt a hybrid approach, using IP cameras with an edge AI box to perform some of the AI analytics workload at the edge while determining automated response actions via the back-end server.

In the end, the hybrid option was selected as the best balance of cost and performance. Port operators saw a significant improvement in traffic flow at the vehicle check-in counter. They also resolved a longstanding safety issue of dock workers repeatedly entering a potentially hazardous area: in the year prior to implementing the solution, the port had experienced more than 50 cases of workers violating the restricted area. Since the solution was implemented, the number of incidents has fallen to zero.

Gamma’s technology partnership with Intel helped bring the solution to market—and made it easier to offer flexible deployment options. “Intel engineers helped us to optimize our AI models and offered benchmarking tools that allowed us to select the exact hardware specifications we needed for our deployment,” says Yan. “The benchmarking support on hardware performance has been a real help in winning over customers, because we can enable them to control costs and build tailor-made solutions based on their needs.”

Smarter, More Sustainable Cities

The world’s environmental and shipping challenges will become more critical in the years ahead. Scalable, customizable solutions that improve efficiency at ports will likely be of great interest to port authorities, city managers, and systems integrators (SIs).

The flexibility of these solutions holds another benefit for governments and SIs, because they are based on technologies that can easily be repurposed for other smart city use cases.

“There’s plenty of overlap between a smart port management solution and use cases in smart cities, manufacturing, and logistics,” says Yan. “These systems can also be used to ensure security at warehouses, improve worker safety in factories, or manage traffic flow in communities—making our cities smarter, safer, and more sustainable.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI Advances Convergence of Cyber and Physical Security

The advancement of AI technology is driving a transformational shift and impacting every industry, including the security industry. As we navigate the changes and opportunities, our approach and practices will need to change with them. To help us navigate this new world, we talk to Kasia Hanson, Global Sr. Director, Physical and Cybersecurity Ecosystems at Intel.

Hanson is an influencer at the forefront of the global security industry. In 2024 she was included for the third time by the Security Industry Association on the Women in Security Forum "Power 100" List for advancing diversity, inclusion, innovation, and leadership in the community. Her work at Intel is all about helping the ecosystem grow, advance, and leverage the latest security technologies by creating an advanced portfolio of solutions with Intel's ecosystem of partners. Kasia also advises integrators and security practitioners on converged practices and AI in security. We talk about the changing dynamics in the world of physical security, including the convergence of physical and cybersecurity, and Intel's role in helping customers and partners overcome today's challenges and capitalize on tomorrow's opportunities.

Let’s start out by talking about the convergence of physical security and cybersecurity.

As the threat landscape continues to grow, AI is a tool for security teams to detect, respond, and mitigate threats, but the bad actors are also using AI to perpetrate attacks. As AI permeates all aspects of our world, threats continue to get more and more sophisticated, and we must protect both physical and digital assets.

The broad adoption of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) has created an interconnected ecosystem of physical and cyber systems, blurring the lines between the two. Security threats are evolving and moving lower in the stack, aimed at physical vulnerabilities. As physical security and cybersecurity become increasingly interrelated, it's no longer viable to keep cybersecurity and physical security policies and practices separate.

With the landscape and threats evolving quickly, we aim to arm security practitioners with tools to create layers of defense—whether it’s integrated silicon security and product assurance or advising on holistic security practices and solutions. Our goal is to help the defenders defend.

What big security challenges do organizations face?

Security organizations face many challenges: ransomware, insider threats, malware and viruses, supply chain attacks, data breaches, unauthorized access and intrusions, physical sabotage, tailgating and social engineering, facility breaches, device tampering, and environmental hazards (fire, weather). Both the CSO and the CISO are charged with protecting all facets of their organizations, so formal collaboration between the physical and cybersecurity teams is critical to improve efficiency and resiliency and achieve a greater return on security investments.

As new AI and computer vision technologies are developed and deployed to combat security threats, how are organizations complying with privacy regulations and laws?

There are a couple of areas to this. The first is the ethical development of AI. This should be the number-one priority in the development and use of AI in any scenario. We all play a role to ensure that AI is being developed in an ethical and equitable way with trustworthy systems. I invite you to read more about Intel’s responsible AI policies and approach.

To help security practitioners protect data and privacy, Intel builds security features into our hardware and software, so data can be protected and compliant with privacy laws such as GDPR in Europe and with industry-specific regulatory requirements in sectors like healthcare and financial services. Confidential computing can help practitioners protect data and stay compliant with regulatory requirements. For example, Intel® Software Guard Extensions (Intel® SGX) unlocks new opportunities for business collaboration and insights, even with sensitive or regulated data. Intel SGX is the most researched and updated confidential computing technology in data centers on the market today, with the smallest trust boundary.

And Intel® Trust Domain Extensions (Intel® TDX) helps increase confidentiality at the VM level, enhance privacy, and give you control over your data. It isolates the guest OS and VM applications, removing access from the cloud host, the hypervisor, and other VMs on the platform.

What are some examples of the types of partners you work with?

Intel has an extensive ecosystem, including ODM, OEM, ISV, and systems integrator partners delivering innovative solutions that help security practitioners add layers of defense and deliver new business value. We work with the ecosystem to bring innovative software capabilities that can leverage both hardware and software and provide new outcomes in a more secure way.

Software has created an opportunity for the market to offer more advanced business outcomes, lower total cost of ownership, and accelerated time to market. We work with ISVs to help them develop AI and computer vision capabilities using the Intel® OpenVINO toolkit and the Intel® Geti platform for model training at the edge. Then there's Intel® SceneScape, a new software tool enabling vision-based AI to gain spatial awareness from sensor data and provide live updates to a 4D digital twin of your physical space.

The security ecosystem serves many different verticals, and we work with the ecosystem to deliver optimized solutions that serve markets such as retail, manufacturing, education, and healthcare. Genetec, for example, serves education, cities, government, entertainment venues, and commercial businesses. Its Genetec Security Center is an open-architecture platform that unifies security systems, sensors, and data into a single interface. This includes IP-based video, access control, automatic license plate recognition (ALPR), intrusion detection, intercoms, and more. We work closely with them to optimize their software and hardware with Intel technology, accelerating new business outcomes for security practitioners.

Another partner we work with is Axis Communications, one of the leading camera vendors in the world. We can leverage their cameras with Intel SceneScape for scene intelligence and move beyond traditional vision-based AI. This leads to realizing spatial awareness from sensor data and into a 4D digital twin—creating new opportunities for security practitioners. We also work with AI ISVs like EPIC iO, which delivers advanced analytics use cases. We’ve helped them optimize their software capabilities with OpenVINO as well as validated the company’s solutions on Intel-based Dell hardware. Working hand in hand with them enables us to deliver new business outcomes at the edge using advanced capabilities.

We also work with the cybersecurity ecosystem to develop solutions on Intel platforms with software optimization. Check out the latest Cybersecurity Ecosystem Catalog to see how we are working with partners like CrowdStrike to protect endpoints leveraging Intel Threat Detection Technology.

In closing, is there anything else you would like to add?

The cyber and physical security landscape is changing faster than ever. When I advise our partner ecosystem on AI and security technologies, I always reference being on a journey together. Intel is uniquely positioned to lead the technology industry in a security evolution due to our vast product portfolio and end-to-end ownership in product development. We believe that system trust is rooted in security—if hardware isn’t secure, then a system cannot be secure. That’s why our goal is to build the most secure hardware on the planet, enabled by software—and we’ve made unparalleled investments in people, processes, and products to meet this goal.

According to ABI Research, Intel leads the silicon industry in product security assurance. I invite anyone making security product decisions to review the latest ABI Research report: Embracing Security as a Core Component of the Tech You Buy and the Intel 2023 Product Security Report.

Additional resources:

Intel’s Cybersecurity Ecosystem Partners

Physical and Cyber Convergence in the latest eBook from Intel and Credo Cyber Consulting

The key role AI and other technologies play in both physical and cybersecurity in Kasia’s article published in the Influencers Edition of the Security Journal Americas.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech

Fair and Transparent Assessments with AI Proctoring

A traditional classroom exam requires supervision from an educator or test proctor to ensure integrity. For online education, remote-proctoring software fills that need. It works by recording test-takers via webcam and using remote human proctors or an AI algorithm to monitor their activity. Secure, remote proctoring provides the watchful eyes needed to maintain academic integrity for both educators and students—even when they’re not in the same room.

Online assessments are particularly vulnerable to security issues and academic dishonesty. To combat these challenges, educators need digital tools that enable virtual proctoring to help evaluate students’ learning outcomes while maintaining integrity. Online proctoring can use AI, software, a live human proctor, or any combination.

“Proctoring ensures that assessments are conducted in a fair and transparent manner,” says Deepak MK, Vice President Data Science at ExamRoom.AI, an AI EdTech company. “By monitoring test-takers, proctoring helps uphold the integrity of educational and professional credentials.”

The AI Proctoring Process

ExamRoom.AI provides schools and organizations with a comprehensive, streamlined, and highly secure platform for proctoring exams around the world. Educators can deliver assessments and keep track of student outcomes through a learning management system (LMS) and a web-based proctoring platform.

Test-takers log in and participate via webcam while a human proctor takes them through an identity verification process. The platform restricts test-takers from tampering with webcams, copying and pasting text, and screen sharing. "Beyond those basics, we've developed secure algorithms to control hardware and software kernels, along with biometric monitoring such as fingerprinting, facial scanning, and voice recognition to further protect against cheating," MK says.
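
As a simple illustration of the webcam-monitoring idea, the sketch below flags frames where no face, or more than one face, is visible, using OpenCV's bundled Haar cascade. It is a generic example, not ExamRoom.AI's proprietary pipeline:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        print("flag: test-taker not visible")   # escalate to a human proctor
    elif len(faces) > 1:
        print("flag: multiple people in frame")
    cv2.imshow("proctor view", frame)
    if cv2.waitKey(100) & 0xFF == ord("q"):     # press q to stop monitoring
        break
cap.release()
cv2.destroyAllWindows()
```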

Because many schools and enterprises use other edtech tools, the platform integrates with popular LMS platforms, including Blackboard and Canvas. ExamRoom.AI also works with customers to customize APIs and the platform’s user interface to white-label for customer branding—for example, adjusting details such as font sizes and logos.

Individual Privacy: A Platform Fundamental

Personal privacy is, of course, a significant concern for both students and educational institutions. ExamRoom.AI adheres to relevant data protection regulations and standards, such as GDPR, COPPA, FERPA, ISO 27001, ISO 9001, and SOC 2, depending on the jurisdiction and the nature of the data being processed. Alongside regulatory compliance, the platform protects individuals' information in several ways:

  • All data transmitted through ExamRoom.AI is encrypted to ensure sensitive information remains secure during transmission.
  • Users are informed about data collection practices and provide explicit consent before any data is collected or processed.
  • Wherever possible, personal data is anonymized to prevent direct identification of individuals.
  • Strict controls limit access to personal data only to authorized personnel who require it for valid purposes.
  • The platform collects and processes only the minimum amount of personal data necessary to provide its services.
  • As an ISO-certified company, it undergoes regular audits and assessments to identify and address any potential vulnerabilities or compliance issues related to data privacy.
  • Users are provided with clear information about how their data is being used, including purposes, recipients, and retention periods, promoting transparency and trust.

Enabling Accessibility and Personalized Learning

One of the biggest challenges in today’s education climate is adapting to changing technologies and methodologies while ensuring that materials are accessible to all learners. ExamRoom.AI addresses this hurdle by providing a user-friendly experience for students and enabling educators to deliver multimodal content and adaptive learning paths. For example, an online assessment might give different questions to different students, depending on how they answered the previous question. “Our tools help educators accommodate different learning styles and abilities,” MK says. “They can also use the system to gather data that helps overcome learning gaps.”
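
A toy version of such an adaptive path makes the mechanics clear: step the difficulty up after a correct answer and down after a miss, then draw the next unseen item. The question bank and policy here are invented for illustration, not ExamRoom.AI's algorithm:

```python
QUESTION_BANK = {
    1: ["Q-easy-1", "Q-easy-2"],
    2: ["Q-med-1", "Q-med-2"],
    3: ["Q-hard-1", "Q-hard-2"],
}

def next_question(difficulty: int, last_correct: bool, asked: set[str]) -> str:
    """Pick the next unseen question one difficulty step up or down."""
    difficulty = min(3, difficulty + 1) if last_correct else max(1, difficulty - 1)
    pool = [q for q in QUESTION_BANK[difficulty] if q not in asked]
    return pool[0] if pool else "assessment complete"

print(next_question(difficulty=2, last_correct=True, asked={"Q-med-1"}))  # Q-hard-1
```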

“We empower #educators to tailor their #teaching approaches to foster critical thinking, problem-solving, creativity, and collaboration—the skills essential for success in the 21st century” – Deepak MK, @examroomai via @insightdottech

Students, educators, and businesses are under pressure to prepare students and employees for a rapidly changing job market as technology plays a dominant role in society. “ExamRoom.AI has tools for skills assessment, career guidance, and professional development,” MK says. “We empower educators to tailor their teaching approaches to foster critical thinking, problem-solving, creativity, and collaboration—the skills essential for success in the 21st century.”

Building for the Future

Intel technology plays a crucial role in the ExamRoom.AI solution, including state-of-the-art GPU hardware that enhances the speed and efficiency of the AI model training and inference processes. The company collaborates with Intel to optimize various machine learning models, including those for object detection, semantic search, tag generation, and translation. The partnership with Intel has been instrumental in fine-tuning these models to improve performance, efficiency, and accuracy.

As education continues to embrace AI-powered tools such as tailored feedback and adaptive learning paths, MK looks forward to continuing the collaboration with Intel: “Virtual proctoring and remote assessment solutions will keep evolving to ensure integrity in online testing environments. Supported by Intel, we’ll continue to retrain and refine our AI models with extensive data sets to make sure they stay effective and relevant.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI Everywhere—From the Network Edge to the Cloud

At a recent launch event, Intel CEO Pat Gelsinger introduced not just new products but the concept of “AI Everywhere”. In presenting the 5th Gen Intel® Xeon® processors and Intel® Core Ultra processors, Gelsinger talked about how Intel is working to bring AI workloads to the data center, the cloud, and the edge.

Now, in a conversation with Gary Gumanow, Sales Enablement Manager – North American Channel for Intel® Ethernet Products, we learn more about the idea of AI Everywhere and the role of the network edge. Gary has spent his career in networking, which may be why he’s also known as “Gary Gigabit.” With a background in systems integration at some of the top law firms in New York City, Gary works closely with Intel distributors and solution providers. Gary says understanding the technology, customer needs, and how products get moved through the channel are near and dear to his heart.

When Intel talks about AI Everywhere—from the data center to the edge device, what does that mean in terms of the network edge?

AI Everywhere means from the edge to the network core to the data center. By the edge, we're talking about the endpoints: sensors, cameras, servers, PCs, adapters, the devices that connect to the network. And the core refers to the components that provide services to the edge. AI in the data center is nothing new; the data center has the power and storage to handle big AI workloads. But inferencing at the edge is new, and there are a number of challenges, from processing power in compact, rugged PCs to the time-sensitive networks and connectivity needed to transport data back and forth.

There are several areas that impact the network, and the network matters to each of them. What is AI going to mean for an edge device? An AI model is only as good as the data that can reach it. So how does that data get to the edge device, and how does it get back to the data center?

It's important that you put the optimal amount of smarts there: right-sizing the architecture so as not to burden the network between the edge and the data center. This means running AI Everywhere on the right CPUs while lowering cost and increasing performance.

We're continually working on improving bandwidth, data security, and confidential computing in our network devices, so that when they go out to the edge they're secure, they have low latency, and they have the performance required to connect the data center with the edge. And we do it in a way that's low power and sustainable in terms of price-performance per watt.

Let’s expand this idea to the factory, where we’ve got AI and computer vision—taking all of this data and inferencing it at the edge. What does the network edge look like here?

Believe it or not, some factory floors are so large they can have their own weather patterns. And one of the things that's really hot right now in manufacturing and automation is covering the distance between robotic devices. How can these devices communicate when they are football fields apart? And how do you get real-time data out to those edge devices, which are critical to the assembly line?

This is a reason why manufacturers are deploying private 5G networks in factories—so that they can communicate from a local server or from a data center, all the way out to these endpoints. But this type of communications takes timing accuracy, low latency, and performance.

So, one cornerstone of 5G virtualized radio access networks (vRANs) is precision timing technology. And Global Positioning System (GPS) devices are key components of a precision timing network. Essentially, networks have an atomic clock, typically a network appliance, and all of your devices synchronize with that appliance. But that's expensive and proprietary.
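
The arithmetic behind that synchronization is compact enough to show. The sketch below computes clock offset and path delay from the four timestamps exchanged in precision timing protocols such as IEEE 1588 PTP; the nanosecond values are invented for illustration:

```python
def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple[float, float]:
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it (all in ns)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock error vs. the master
    delay = ((t2 - t1) + (t4 - t3)) / 2   # symmetric one-way path delay
    return offset, delay

# Example: 500 ns link delay, slave clock running 200 ns ahead of the master.
print(ptp_offset_and_delay(t1=0, t2=700, t3=10_000, t4=10_300))  # (200.0, 500.0)
```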

The other thing that's important for 5G is forward error correction (FEC), which looks ahead in the flow and corrects errors, heading them off at the pass. So you've got the precision timing and you've got the forward error correction. All of this can get complicated.
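
The core idea of FEC can be shown with the simplest possible code, XOR parity: redundancy sent alongside the payload lets the receiver repair a loss without a retransmission. (Real 5G and Ethernet links use stronger codes such as Reed-Solomon, but the principle is the same.)

```python
def xor_parity(packets: list[bytes]) -> bytes:
    """XOR equal-length packets byte by byte to form a parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_parity(data)            # transmitted along with the data

# Suppose the second packet is lost: XOR the survivors with the parity.
recovered = xor_parity([data[0], data[2], parity])
print(recovered)  # b'EFGH'
```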

How is Intel making it less complicated to deploy private 5G in factories as one example?

We’ve built these functions directly into our Ethernet products. For example, take the atomic clock technology that’s been appliance-based, and is now integrated into some of our network adapters. You can eliminate those appliances in the network and have the timing accuracy that’s required for 5G networks built in. It saves power, it saves money, and it simplifies the network design because you don’t have to have all of these devices coming back to an atomic clock. It can be out on the nodes where it needs to be. GPS timing synchronization and FEC are other technologies built into our network adapters and devices as well.

We have this progression of shrinking the requirements of discrete components down to a smaller set of things. So now we have Intel® vRAN Boost doing a lot of the work via an accelerator on the 4th Gen Intel® Xeon® processors. This is fully integrated, high-capacity acceleration with vRAN Boost that increases the performance and the calculations that are required to run Ethernet over vRAN. And again, this reduces component requirements, power consumption, and overall system complexity.

It’s like the progression of everything at Intel. It’s consolidating it into the processor or to a smaller number of components and simplifying it and making it easier to deploy. Another example is how Ethernet is finding itself embedded in Intel Xeon D processors. The system-on-chip (SoC) processors have the logic of an Ethernet controller to support 100 gigabits in the actual chip.

It's sized for a network appliance or edge device versus the cloud data center, so it has fewer cores and requires less power. And it's specialized to handle network flows and network security. The Intel Xeon D processor is "right-sized" for where it should be sold and where it should be embedded. You can deploy it in medical sensors, gateways, industrial PCs, the factory floor: anywhere you need near real-time actionable insights.

Is there anything you would like to add in closing?

We feel very strongly about interoperability with multiple vendors. In fact, in the AI space, we're doing something called HPN, or high-performance networking, stacks based on open APIs and open software. We're working with multiple vendors like Broadcom, Arista, Cisco, and a whole bunch of others. And there's the Ultra Ethernet Consortium, open to organizations that want to participate in an open ecosystem and support AI in the data center.

My customers tell me they like the open approach Intel is taking with the industry. This consortium, which is coming together to bring data center Ethernet into an open environment, is critical for the industry, for AI to really extend as far as it can go.

Clearly, Ethernet has stood the test of time because of its five principles: backward compatibility, an insatiable need for bandwidth, interoperability, open software, and evolving use cases. The network, whether it's 802.11, Gigabit Ethernet, or 100 Gigabit Ethernet, is the fabric that, alongside 5G, puts this whole story together to bring AI Everywhere, from edge to cloud.

 

Edited by Christina Cardoza, Associate Editorial Director for insight.tech.