Reverse Proxy Server Advances AI Cybersecurity

AI models rely on constant streams of data to learn and make inferences. That’s what makes them valuable. It’s also what makes them vulnerable. Because AI models are built on the data they are exposed to, they are susceptible to data that has been corrupted, manipulated, or compromised.

Cyberthreats can come from bad actors who fabricate inferences and inject bias into models to disrupt their performance or operation. The same outcome can be produced by Distributed Denial of Service (DDoS) attacks that overwhelm the platforms models run on, as well as the models themselves. These and other threats can expose models and their sensitive data to IP theft, especially if the surrounding infrastructure is not properly secured.

Unfortunately, the rush to implement AI models has resulted in significant security gaps in AI deployment architectures. As companies integrate AI with more business systems and processes, chief information security officers (CISOs) must work to close these gaps and prevent valuable data and IP from being extracted with every inference.

AI Cybersecurity Dilemma for Performance-Seeking CISOs

On a technical level, there is a simple explanation for the lack of security in current-generation AI deployments: performance.

AI model computation is a resource-intensive task and, until very recently, was almost exclusively the domain of compute clusters and supercomputers. That’s no longer the case, with platforms like the octal-core 4th Gen Intel® Xeon® Scalable Processors that power rack servers like the Dell Technologies PowerEdge R760, which is more than capable of efficiently hosting multiple AI model servers simultaneously (Figure 1).

Picture of Dell rack server
Figure 1. Rack servers like the Dell PowerEdge R760 can host multiple high-performance Intel® OpenVINO toolkit model servers simultaneously. (Source: Dell Technologies)

But whether hosted at the edge or in a data center, AI model servers require most, if not all, of a platform’s resources. This comes at the expense of functions like security, which is also computationally demanding, almost regardless of the deployment paradigm:

  • Deployment Model 1—Host Processor: Deploying both AI model servers and security like firewalls or encryption/decryption on the same processor pits the workloads in a competition for CPU resources, network bandwidth, and memory. This slows response times, increases latency, and degrades performance.
  • Deployment Model 2—Separate Virtual Machines (VMs): Hosting AI models and security in different VMs on the same host processor can introduce unnecessary overhead, architectural complexity, and ultimately impact system scalability and agility.
  • Deployment Model 3—Same VM: With both workload types hosted in the same VM, model servers and security functions can be exposed to the same vulnerabilities. This can exacerbate data breaches, unauthorized access, and service disruptions.

CISOs need new deployment architectures that provide both the performance scalability AI models need and the ability to protect the sensitive data and IP residing within them.

Proxy for AI Model Security on COTS Hardware

An alternative would be to host AI model servers and security workloads on different systems altogether. This provides sufficient resources to avoid unwanted latency or performance degradation in AI tasks while also offering physical separation between inferences, security operations, and the AI models themselves.

The challenge then becomes physical footprint and cost.

Recognizing the opportunity, F5 Networks, Inc., a global leader in application delivery infrastructure, partnered with Intel and Dell, a leading global OEM with an extensive product portfolio, to develop a solution that addresses these requirements in a single, commercial-off-the-shelf (COTS) system. Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 (Figure 2).

Image of Intel IPU adapter
Figure 2. The Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 offloads security operations from a host processor, freeing resources for other workloads like AI training and inferencing. (Source: Intel)

The Intel IPU Adapter E2100 is an infrastructure acceleration card that delivers 200 GbE bandwidth, x16 PCIe 4.0 lanes, and built-in cryptographic accelerators that combine with an advanced packet processing pipeline to deliver line-rate security. The card’s standard interfaces allow native integration with servers like the PowerEdge R760, and the IPU provides ample compute and memory to host a reverse proxy server like F5’s NGINX Plus.

NGINX Plus, built on the open-source NGINX web server, can be deployed as a reverse proxy that intercepts and decrypts/encrypts traffic going to and from a destination server. This separation helps mitigate DDoS attacks, and it means cryptographic operations can take place somewhere other than the AI model server host.

The F5 Networks NGINX Plus reverse proxy server provides SSL/TLS encryption as well as a security air gap between unauthenticated inferences and Intel® OpenVINO toolkit model servers running on the R760. In addition to operating as a reverse proxy server, NGINX Plus provides enterprise-grade features such as security controls, load balancing, content caching, application monitoring and management, and more.
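As an illustration, the configuration below sketches how a reverse proxy of this kind terminates TLS in front of a model server. It is a minimal example, not F5’s actual deployment: the upstream address, ports, certificate paths, and rate limits are all assumed values.

```nginx
# Minimal sketch: TLS-terminating reverse proxy in front of an AI model server.
# All addresses, ports, paths, and limits are assumptions for illustration.
events {}

http {
    # Basic per-client rate limiting helps blunt request floods (assumed rate).
    limit_req_zone $binary_remote_addr zone=inference:10m rate=10r/s;

    upstream model_server {
        server 10.0.0.10:8000;  # assumed address of the model server on the host
    }

    server {
        listen 443 ssl;  # TLS terminates at the proxy, not on the model host
        ssl_certificate     /etc/nginx/certs/proxy.crt;  # assumed certificate
        ssl_certificate_key /etc/nginx/certs/proxy.key;  # assumed private key

        location / {
            limit_req zone=inference burst=20 nodelay;
            proxy_pass http://model_server;  # forward decrypted requests upstream
        }
    }
}
```

Because the proxy handles encryption, decryption, and rate limiting, the model server host spends none of its cycles on them; in the F5 solution, that work runs on the IPU instead.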

Streamline AI Model Security. Focus on AI Value.

For all the enthusiasm around AI, there hasn’t been much thought given to potential deployment drawbacks. Any company looking to gain a competitive edge must rapidly integrate and deploy AI solutions in its tech stack. But to avoid buyer’s remorse, it must also be aware of the security risks that come with AI adoption.

Running security services on a dedicated IPU not only streamlines deployment of secure AI but also enhances DevSecOps pipelines by creating a distinct separation between AI and security development teams.

Maybe we won’t have to spend so much time worrying about AI security after all.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Network Edge Advantage: Achieving Business Success

In today’s rapidly evolving technology landscape, businesses increasingly turn to network edge solutions to meet the demands of real-time data processing, enhanced security, and improved user experiences. But deploying these solutions comes with its own set of challenges, including latency issues, bandwidth constraints, and the need for robust infrastructure.

This podcast episode explores the world of network edge computing, and the unique challenges businesses face when deploying these advanced solutions. We discuss the critical features of network edge devices and how AI can help drive efficiency. Additionally, we examine the specific challenges and demands industries encounter and how they can overcome them.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guest: CASwell

Our guest this episode is CK Chou, Product Manager at CASwell, a leading hardware manufacturer for IoT, network, and security applications. CK joined CASwell in 2014 and has since worked to build strong customer relationships by ensuring that CASwell’s solutions meet specific needs and standards.

Podcast Topics

CK answers our questions about:

  • 2:42 – The move to the network edge
  • 6:17 – Network edge devices built for success
  • 11:15 – Moving to AI at the network edge
  • 14:37 – Addressing network edge challenges
  • 17:30 – Overcoming the increased demand
  • 22:37 – Implementing network edge devices
  • 25:32 – Partnering on performance and power

Related Content

To learn more about the network edge, read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re taking on the conversation of the network edge with CK from CASwell. But before we get started, let’s get to know our guest. CK, what can you tell us about yourself and what you do at CASwell?

CK Chou: Hi, Christina; hi, everyone. My name is CK, and I have over 10 years of experience in product management at CASwell. My main focus has been on serving customers in Europe and the Middle East. Over the years my mission has been to build strong relationships with clients across these regions, ensuring that the solutions from CASwell meet their specific needs and standards.

And about CASwell—we originally began as a division dedicated to network-security applications. Over time our expertise and focus grew, leading us to branch out and establish ourselves as a standalone company in 2007. Over the years CASwell has placed a strong emphasis on R&D to stay at the forefront of technology and innovation. However, we were not satisfied with being only a networking player, so we expanded our business into information and operation applications. I should say that our journey from a small division to an independent company wasn’t just about getting bigger; it was about getting better at what we do.

Nowadays, CASwell is a leading hardware solution provider for the IT and OT industries in Taiwan, specializing in the design, engineering, and manufacturing of not only networking appliances but also industrial equipment, edge computing devices, and advanced edge-AI solutions that meet the demands of modern applications.

Christina Cardoza: Great, and I’m looking forward to digging into some of that hardware. But before we jump into that, I want to start the conversation trying to understand a little bit more of why companies are moving to the network edge. I like how you said in your introduction: you’re trying to stay at the forefront of technology and innovation and get better at what you do. And I think a lot of businesses are trying to do the same, and they look to CASwell to help them along that journey. But why are they moving to the network edge today, and what challenges are they facing on their journey?

CK Chou: If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to a central server. This means faster responses and less network traffic, which makes it perfect for things that need instant reactions, like manufacturing, retail, transportation, financial services, et cetera.

Let me say it this way. Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly because every millisecond counts, okay? You cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading icon when you’re using your computer, right? In self-driving scenarios any delay could mean life or death. This is just one example of where edge computing comes in, handling data right at the source to make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

This kind of localized processing is also super important in other areas, like health care—which needs instant diagnostic results—or machines in a factory detecting problems. By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. So, in short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether that’s in your car, your doctor’s office, or on the factory floor.

But from what I hear from some of our customers, moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure costs, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost infrastructure performance, improve security, and reduce overall costs, which ultimately makes it worth the effort to overcome all those hurdles.

Christina Cardoza: Yeah, absolutely. I can definitely see the need for network edge and edge computing with all the demands of the real-time data processing, like you mentioned—the enhanced security, improving user experiences.

But I feel like a lot of times when we discuss the edge it feels very abstract. We know all of the benefits and why we should be moving there, but how do we move there? Is there a network-edge device, for instance, that is able to help us move to the edge and get all of these benefits? What does that look like?

CK Chou: The challenges I mentioned earlier make moving to the edge seem expensive and complicated. But if companies can integrate reliable edge devices with innovative, dependable, and affordable hardware features, they can overcome these challenges and allocate their limited resources to building and managing their infrastructure, maintaining their data, improving security, and training their staff.

That’s why companies need to work closely with an edge-device provider like CASwell. Our customers can always count on us because we design the right equipment for the right use case and ensure the edge devices are the key to their edge journey, making their transition to the edge smoother and easier. So, at the end of the day, having the right device with the right features is essential, but only with the right partner—like CASwell. We support them from the hardware perspective, allowing companies to focus more on their specialization. Each party plays its own role, enabling companies to truly do more on their edge journey.

Christina Cardoza: I know you mentioned obviously it’s important to have the right features and reliable, affordable hardware, and that helps you build and manage infrastructure and maintain that data that’s really important. But can you talk a little bit more about what those features and hardware capabilities look like? When companies are looking for a network-edge device, what type of capabilities are really going to bring them success?

CK Chou: Okay, it is a tricky question for me. If I’m talking about my dream edge device, it needs to be small and compact, and packed with multiple connection options like LAN, Wi-Fi, and 5G for different applications. It would also be nice to have a rugged design that can operate in harsh environments and handle a wide range of temperatures, in case users want to install the equipment in freezing mountains or hot deserts. It should also offer powerful processing while consuming low power. And, of course, the most important thing: the cost of this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? Okay, I can tell you the truth: companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application, and that’s what CASwell is all about.

We have a product line which can provide a variety of choices, from the basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have got options designed for different conditions and requirements. So, with the right partner, companies can easily find the right edge device without paying for features they don’t really need.

Moving to edge computing certainly costs a lot, so we need to do it in a smart, efficient way. The idea is to ensure that every edge player can get exactly what they need to optimize their operations and stay ahead of the game. So, sorry that there’s no definite answer to your question here. In my opinion, if an edge device offers the right features and the right capabilities at an affordable cost for the specific use case, then it’s the good edge device we are looking for.

Christina Cardoza: Yeah, absolutely. No, I love that businesses or companies, they don’t necessarily need an all-in-one box. I think so many times the businesses are focused on finding something that is cost effective that tries to meet all their needs, and they sort of lose sight of what their needs actually are and how a device can help them and the benefits in the long run. So, that’s definitely great, and I want to get into how partnerships work with CASwell, as well as the different product lines that you do have a little bit deeper.

But before we get there I’m a little curious, because obviously when we talk about edge today, AI is so closely related to it. AI at the edge is a term that’s going around these days, and so I’m curious what the role here is at the network edge, especially when we’re talking about network-edge devices.

CK Chou: We know that nowadays AI-model training is done in the cloud due to its need for massive amounts of data and high computational power. If you do a quick search online, you’ll find lots of pictures showing what an AI factory or AI data center needs to look like. Imagine something the size of a football field, filled with dozens of big blocks, each packed with hundreds of servers, all linked together working nonstop on model training.

I agree that such an AI server farm sounds amazing, but it is too far from our general use case and not affordable for our customers. As we talked about earlier, the concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So, if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory into our server room—unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices, like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.

Building on this trend, edge-AI servers are coming into the picture at CASwell by integrating AI with general compute capability—we often use a GPU engine here. These edge servers can handle basic AI calculations on top of our existing hardware. This means faster decision-making and the ability to use AI-driven insights in real time, whether it’s for cybersecurity, small factories, or other edge applications.

CASwell is now building a new product line for edge-AI servers designed to bring AI capabilities right from the data center to the edge, giving us the power of AI instantly, and it puts AI directly in the hands of those who need it and right when they need it.

Christina Cardoza: So, tell me a little bit more about that product line or the other products that CASwell offers. You mentioned that you have a whole suite of tools to help businesses depending on what their needs are, their demands, and what they’re trying to get. So, how is CASwell helping these businesses address their network-edge challenges and demands?

CK Chou: I can introduce a model, the CAF-0121. The CAF-0121 is an interesting entry-level desktop product from CASwell, built around Intel’s new-generation Atom® processor, which offers a great balance of performance and power efficiency. This small box can also provide 2.5GbE support to fulfill basic infrastructure connectivity, plus a compact, fanless passive-cooling design that is suitable for edge computing applications.

But we can see a trend where edge environments are becoming more challenging than we initially expected. End users want to install edge equipment not just in office space with air conditioning or on clean, organized racks, but also in OT environments like warehouses, factory floors, and even cabinets without proper airflow. The line between IT and OT is blurring, and more users are looking for solutions that can work in both IT and light OT environments.

As a compromise, CASwell decided to develop the CAF-0121 to handle a wider temperature range—from the typical 0°C–40°C up to something like -20°C–60°C. Our goal with this new model is to provide OT-grade specs at an IT-friendly price. This means users can cut down on the resources needed to manage their infrastructure and make deployment much simpler. They can use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. So the approach behind the CAF-0121 allows businesses to adapt to different environments without needing separate solutions for each scenario, which makes it a really exciting product.

Christina Cardoza: Yeah, that’s great that you developed the CAF-0121 to help businesses in all of their needs. It occurs to me as we’re talking about this, the different temperature ranges that they need to meet, the cost ranges, that not only are businesses having challenges, but sometimes it can be challenging for partners like CASwell to create these solutions that meet their demand.

So, I’m just curious if there’s any insight that you can provide when developing this product, if you guys had any challenges to meet all of these demands and how you were able to overcome them?

CK Chou: The technology around the thermoelectric module—we call it TEM—is the one we are relying on for CAF-0121. TEM is already a proven solution for cooling overheating components. It is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These slim devices work by creating a temperature difference when electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides. And of course the TEM does not run on its own. It is controlled by a microcontroller and a thermal sensor that monitors the temperature inside the device. The firmware we have programmed into the microcontroller takes those temperature readings and decides when to turn the TEM on and how much current to send through.

We have gone through countless trials and adjustments of the firmware settings to ensure our equipment stays in the ideal temperature range. We also had to watch out for condensation, because if a TEM cools down too quickly, it can cause moisture to form on the module surface. And if that moisture gets onto the circuit board, it could cause serious damage. So an appropriate liquid-isolation solution between the moisture and the circuit board is also necessary.

While people normally use only the cooling capability of a TEM, we had a different idea: why not leverage both the cooling and heating capabilities to help our edge device operate across a wider temperature range? The overall concept is that by leveraging the heating capability of the TEM, we can indirectly extend the system’s operating temperature range to a lower degree. And, conversely, the cooling capability can cool the system down when the internal ambient temperature rises to a certain high level.

Let me say it in a simple way: when the room is getting cold, the TEM operates as a heater; and when the room is getting hot, the TEM operates as a cooler. With a TEM, we are no longer limited to the operating temperature range of the individual components we have selected. It helps us bridge the gap and allows us to expand the temperature range of our equipment beyond what the components would typically allow. This means we can push the temperature boundaries by using the TEM while the device still maintains reliability.
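A minimal sketch of that dual-mode logic, written here in Python for readability: the thresholds, sensor read, and TEM driver below are hypothetical placeholders standing in for CASwell’s microcontroller firmware, which is not public.

```python
# Sketch of a dual-mode TEM control loop as described above. Thresholds and
# hardware calls are hypothetical placeholders, not CASwell's actual firmware.
import random
import time

HEAT_BELOW_C = 0.0    # assumed: below this, drive the TEM as a heater
COOL_ABOVE_C = 55.0   # assumed: above this, drive the TEM as a cooler
HYSTERESIS_C = 5.0    # dead band so the TEM doesn't toggle on and off rapidly

def read_temp_c() -> float:
    """Placeholder for the internal thermal sensor; returns a simulated value."""
    return random.uniform(-25.0, 65.0)

def set_tem(mode: str, duty: float) -> None:
    """Placeholder for the TEM driver: mode selects current polarity
    (heat/cool/off); duty scales how much current is sent through."""
    print(f"TEM {mode} at {duty:.0%}")

def control_step(temp: float, mode: str) -> str:
    """One firmware tick: pick the TEM mode from the current temperature."""
    if temp < HEAT_BELOW_C:
        set_tem("heat", min(1.0, (HEAT_BELOW_C - temp) / 20.0))
        return "heat"
    if temp > COOL_ABOVE_C:
        set_tem("cool", min(1.0, (temp - COOL_ABOVE_C) / 20.0))
        return "cool"
    # Inside the safe band, switch off only once past the hysteresis margin;
    # gradual transitions also reduce the condensation risk mentioned above.
    if mode == "heat" and temp > HEAT_BELOW_C + HYSTERESIS_C:
        set_tem("off", 0.0)
        return "off"
    if mode == "cool" and temp < COOL_ABOVE_C - HYSTERESIS_C:
        set_tem("off", 0.0)
        return "off"
    return mode

mode = "off"
for _ in range(5):  # a few simulated ticks; real firmware would loop forever
    mode = control_step(read_temp_c(), mode)
    time.sleep(0.1)
```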

And some people might think: why don’t we just use industrial-grade components that support a wider temperature range and make our lives easier? The reality is that those wide-temp components can sometimes cost twice as much as standard commercial ones, and the chassis generally designed for such cases are usually large and heavy. And then of course the most important reason is that if we built our equipment just like everyone else, why would customers choose us over the competition? In that case, the CAF-0121 would just end up being another costly device with bulky thermal fans designed to support wide temperature ranges, and that is not what we want.

That’s why we have put a lot of effort into studying the characteristics of the TEM more closely, focusing on selecting the right thermal-conductivity materials, fine-tuning our firmware settings, and testing our device in temperature-controlled chambers day and night. Our goal is to redefine what edge computing hardware can be by offering solutions that adapt to various temperature environments, stay compact and lightweight, and are still competitively priced.

Christina Cardoza: Yeah, it’s amazing to hear about those wide temperature ranges and the environments you were mentioning—in cars and refrigerators—so I can see the importance of making sure it’s consistently reliable and provides that performance.

So, do you have any customers that have actually been using CAF-0121 and anything you can share with how they’re using it or in what type of solutions it is in?

CK Chou: This box is going into mass production in October this year—next month—and we have already received a few thousand purchase orders from a major European customer focused on cybersecurity applications, which is planning to use this device in small offices, warehouses, and possibly outdoor cabinets for electric-vehicle charging stations that need wider temperature support. This really highlights the advantage of the CAF-0121: the customer can use it across both IT and OT applications without needing separate solutions for different operating temperature conditions—and of course that saves customers from having to spend extra money.

We also sent samples to around seven or eight potential customers across various industries, including cybersecurity, SD-WAN, manufacturing, and telecom companies for instant traffic management. The feedback has been fantastic. Everyone loves the competitive price, which makes our device a great deal. The compact size is another big win, because it can fit into tight spaces and helps lower shipping costs—and it also reduces the carbon footprint.

You know, in today’s market, pricing is a huge factor. We need to offer cost-effective solutions without compromising on performance and flexibility. So it’s clear that our approach is hitting the mark for customers who need reliable, scalable edge solutions that don’t break the bank. The excitement we are seeing from these industries really proves that we are on the right track, and the CAF-0121 is exactly the kind of solution that can meet their needs.

Christina Cardoza: I can definitely see why the solution needs to be smart and compact, but then also fast and reliable, high performance. So, I’m curious how you actually make that happen. And I should mention “insight.tech Talk” and insight.tech as a whole, we are sponsored by Intel, but I know Intel has a lot of processors that make these devices possible, that make them be able to run fast in these different environments and in these small form factors. So, I want to hear a little bit more about how you work with technology partners like Intel in making your product line possible.

CK Chou: As we discussed earlier, a solid edge computing device should have just the right processing power packed into a compact size, a variety of connection options, energy efficiency, and of course a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we chose the Intel Atom processor for this project. With the Atom we can provide the right level of performance while keeping power consumption low. And thanks to the Intel LAN controller, we could easily add 2.5GbE support to this box to ensure compatibility with most infrastructure requirements and more. The Atom has built-in instructions that can accelerate IPsec traffic, making it an excellent choice for security-focused applications. So, whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.

And if we want to further enhance security, the Atom also integrates BIOS Guard and Boot Guard to provide a hardware root of trust. With these two guards we are not just talking about great performance and efficiency; we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.

I can say that among the various players in this market, only Intel offers a one-stop shop for all these features. They don’t just provide the hardware, but also the driver and firmware support. This level of integration made the development of the CAF-0121 project so much easier, and it really shortened our time-to-market. When you have the processing power, security features, and even software support all coming from one reliable partner, Intel, it certainly streamlines the whole process. This not only simplifies the engineering and development work but also ensures everything works seamlessly together.

So, with Intel’s comprehensive support, a hardware designer like CASwell can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win for both us and our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

Christina Cardoza: Absolutely; that’s great to hear. And I’m sure—we kept talking about in this conversation making things more cost effective, more affordable, so I’m sure being able to leverage the technology expertise or the technology processor and other elements from a partner like Intel, that helps you be able to focus on your sweet spot and not have to build things from scratch and make things more expensive than they need to be. So, great to hear how you’re using all of that different technology.

It’s been a great conversation. You’ve really been able to take a technical topic and make it more digestible and understandable. Unfortunately, we are running out of time, but before we go I just want to throw it back to you one last time, if you have any final thoughts or key takeaways you want to leave our listeners with today.

CK Chou: I started working at CASwell 10 years ago, and things were pretty different back then. At that time most of the processing power was centralized. Companies were all about making their servers super powerful, giving them fast internet connections for gathering all the data from the edge. Servers were packed with multiple features to handle every use case you could imagine.

Times have changed. It’s all about instant processing and real-time AI calculations. Businesses need to make quick decisions right at the source of the data instead of sending everything back to the central server. That’s why edge computing has become such a big deal. It lets companies process data on the spot without any delay.

But when all the network players are shifting toward edge solutions, the real challenge is: how do we make our equipment different and better than everyone else’s? With this project, the CAF-0121, we have gained some really valuable know-how using an old-school technology as an innovative thermal solution for edge equipment, and we have tried to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, and not just stick to the traditional way of doing things.

Also, thanks to the support from Intel for their edge solutions—including edge-optimized processors, which build in deep learning–inference capabilities; various LAN options for different connectivity needs; and of course all the documentation for integration, drivers, and firmware support. This collaboration has really helped us push our designs to the next level.

Finally, our goal is very simple: to set a new standard for edge computing equipment and provide flexible edge solutions that help customers tackle challenges from the cloud, through the network, and all the way to the intelligent edge.

Christina Cardoza: Well, I can’t wait to see what else CASwell does in this space—as well as the CAF-0121 when it comes out—and the different market solutions companies are going to leverage it for. I invite all of our listeners to visit the CASwell website, contact them, and see how they can help you with all of your edge and network-edge needs. And visit insight.tech, where we continue to cover partners like CASwell and how they’re innovating in this space.

So, I want to thank you again for joining us today, CK, as well as our listeners for tuning in. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Intel® Xeon® 6 Processors Power AI Everywhere

Organizations worldwide deploy AI to increase operational efficiencies and strengthen their competitive standing in the market. We talk to Intel’s Mike Masci, Vice President of Product Management, Network & Edge Group, and Ryan Tabrah, Vice President & General Manager, Intel Xeon and Compute Products, about the new Intel® Xeon® 6 Processors. Mike and Ryan discuss key advancements that power the seamless and scalable infrastructure required for running AI everywhere—from the data center to the edge—in a more sustainable way.

Why is the launch of the Intel Xeon 6 Processors so important to Intel, your partners, and customers?

Ryan Tabrah: The launch is a culmination of many things, including getting back to our roots of delivering technology starting from the fabrication process to enable the AI data center of the future. I think Intel Xeon 6 hits at a perfect time for our customers to continue to innovate with their solutions and build out their data centers in a way that wasn’t possible before. With Intel Xeon 6 processors, E-cores are optimized for the best performance per watt, while the P-cores bring the best per-core performance for compute-intensive workloads that are pervasive in the data centers of today.

Mike Masci: We see Xeon 6 not just as another upgrade, but as a necessity for AI-driven compute infrastructure. The existing data center does not have the performance-per-watt characteristics that allow it to scale for the needs of an AI-driven era. So whether it be networks needing to process huge amounts of data from edge AI to cloud AI, these processors do so in a more efficient and performant way. And within a data center, they enable the infrastructure to support the performance needs of AI while being able to scale linearly.

The consistency of the Xeon 6 platform from edge to cloud and the fact that it can really scale from the very high end to more cost- and power-focused, lower-end products is what developers want. They want an extremely seamless experience where there is no need to mix and match different architectures and systems, because anything that slows them down or creates friction effectively is less time spent on developing AI technology.

Intel Xeon 6 is the first Intel Xeon with efficient cores and performance cores. What are some examples of their different workloads and relevant use cases?

Mike Masci: First, efficient cores are designed and built for data center-class workloads and are highly performant at optimized density and power levels. This is a huge advantage for our customers in terms of composability and the ability to partition the right product for the right workload in the right location, without having to incur the complexity and expense of both managing and deploying them.

It’s becoming the norm to deploy the same type of workloads at the network edge that are running deep into the data center. People want the same infrastructure back and forth, so it enables them to deploy faster and easier, and save money in the long run.

The most important workloads are cloud native. And that’s where the Intel Xeon 6 E-cores shine. As we think about use cases that take advantage of that, on the network and edge side, the 5G wireless core is one of our most important segments. Where prior generations relied on fixed-function, proprietary hardware, these companies have adopted the principles behind NFV (Network Functions Virtualization) and SDN (Software Defined Networking) and are now moving toward cloud-native technology. This is where the multi-threaded, performance-per-watt-optimized piece of Intel Xeon 6 processors is extremely important.

As we look at Intel Xeon 6 with P-cores for other edge applications, customers are very excited about Intel® Advanced Matrix Extensions (Intel® AMX) technology. Specifically, its specialized matrix ISA instructions, inherent in the performance cores, allow them to do lightweight inference at the edge, where you might not have the power budget for the large-scale GPUs typical of training clusters. And the beauty of AMX is that it’s seamless from a software developer standpoint: with tools like OpenVINO and our AI suites, developers can take advantage of AMX without having to know how to program to a specific ISA.
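For a sense of that seamlessness, the short sketch below shows what CPU-side inference looks like with OpenVINO: the developer targets the generic “CPU” device, and the runtime dispatches to the best available ISA, including AMX on processors that have it. The model path and input shape are assumptions for the example.

```python
# Minimal OpenVINO inference sketch: no AMX-specific code is written; the
# runtime picks the best available ISA on the target CPU automatically.
# The model path and input shape are assumed values for illustration.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # assumed OpenVINO IR model file
compiled = core.compile_model(model, "CPU")  # generic CPU target; AMX used if present

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
results = compiled([batch])                  # run a single inference
print(next(iter(results.values())).shape)    # shape of the first output tensor
```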

Ryan Tabrah: The reality is that, especially at the edge, customers can’t put in some of the more power-hungry or space-hungry accelerators, and so you fall back on the more dense solutions that are already integrated into the Intel Xeon 6 performance core family.

Video is another good use case example. You don’t make money until you can effortlessly scale and pull videos out and push them to the end user. That’s one reason why we focused on rack consolidation for video workloads. It’s something like three-to-one rack consolidation over previous generations for the same number of video streams at the same performance. That’s better performance with better energy efficiency in your data center—serving more clients with fewer machines and greater density. And that same infrastructure can then be pushed out to your 5G networks, to the edge of your network, where you’re caching videos and delivering them to end users.

Can you talk about the Intel Xeon 6 in the context of a specific vertical and use case?

Mike Masci: Take healthcare, where you need a massive amount of data to train medical image models. In order to have actionable data and insights, you need to train the model in the cloud and run it effectively at the edge. You need to run things like RAG (Retrieval Augmented Generation) to make sure the model is doing what it’s supposed to do, especially in the domain of assisting with diagnosis, for example. So what happens when you need to retrain the model? Edge machines will send more data to the cloud, where it gets retrained, and then has to get proliferated back to those edge machines. That whole process for a developer in DevOps and MLOps is an entire discipline, and it’s probably the most important discipline of AI today.

We think the real value of AI will be meaningfully unlocked when you can train models, deploy them at the edge, and then have the edge feed data back so the models can be retrained in the cloud. And having them on a scalable system matters a lot to developers.

Ryan Tabrah: Also, healthcare facilities around the world have a lot of older code and older applications running on kernels that they don’t want to upgrade or rework. They want to be able to move those workloads, maybe even containerize them, and put them on a system they know will just run, so they don’t have to touch a thing. We enable them with open-source tools to update parts of their infrastructure and build new data centers that bring the future in and connect with their older application base.

And that’s where the magic really happens: someone doesn’t fundamentally have to start from ground zero. Healthcare institutions have all this old data and old applications, and then they’re being pushed to go do all these new things. And that’s back to Mike’s earlier comment: having a consistent platform underneath—from your edge to the actual cloud where you’re doing your development, even to your PC—means they just don’t have to worry about it.

What are the sustainability aspects that Xeon 6 can bring to your customers?

Mike Masci: The performance-per-watt improvements across some of our network and edge workloads are clear. It’s 3x performance per watt versus 2nd Gen Intel® Xeon® Scalable Processors. Simply translated, if you get 3x the performance per watt, you can effectively cut the number of servers you need to one-third—for example, work that took 300 servers can run on roughly 100. That doesn’t just save you CPU power; it saves the power of the entire system, whether it be the switches, the power supply of the rack itself, or any of the peripherals around it.

And it’s our mandate as Intel to drive that type of sustainability mechanism, because in large part the CPU performance per watt dictates the choices that people make in terms of deploying overall hardware.

A great example is the work we’ve done with Ericsson, a leading provider in the 5G core. In their own real-world testing on UPF, the user plane function of the 5G core, they saw 2.7x better performance per watt versus the previous generation. Even more, in the control plane with 2 million subscribers, Ericsson supported the same number of subscribers with 60% less power versus the prior generation. This comes back to performance per watt and sustainability. But it is also about significant OpEx savings and doing a lot of good for the world as well. With Ericsson, we are proving it’s not just possible, but it’s happening in reality today.

In this domain we have our infrastructure power manager, which allows CPU power and performance to be dynamically programmed based on actual usage. For example, when the load is low, the CPUs power themselves down. And underlying that, the entire product line has huge improvements in what we would call load-line performance. Most servers today are not run at full utilization all the time. Intel CPUs like the Intel Xeon 6 do a great job of powering down to align with lower-utilization scenarios, which again lowers overall power needs—improving platform sustainability.

This seems fundamental, but it’s harder to do than you would think. You need to optimize at the operating-system level to be able to take advantage of those power states. You need to make sure that you maintain the right quality of service, SLAs, and uptime, which is a huge deal.

Ryan Tabrah: The efforts we make across the board—in our fabrication, our validation labs, and the supply chain that feeds all our manufacturing—demonstrate our leadership in sustainability. When a customer knows they’re using Intel silicon, they know that when it was born or tested or validated or created, it was done in the most sustainable way. We’re also continuing to drive leadership in different parts of the world around the reuse of water and other practices that give back to the environment as we build products.

Intel Xeon 6 offers our customers the opportunity to meet their sustainability goals as well. With the high core counts and efficiency that Intel Xeon 6 brings, our customers can look to replace aging servers in their data center and consolidate to fewer servers that require less energy and floor space.

Let’s touch on data security and Intel Xeon 6 enhancements that make it easier for developers to build more secure solutions.

Mike Masci: As we look at security enhancements, which are paramount, especially on the network and edge, bringing in our SGX and TDX technologies was a big addition. But bringing technology to maturity in terms of the security ecosystem is extremely important for customers, especially in an AI-driven world. You need to have model security. You need to be able to have secure enclaves if you’re going to run multi-tenancy, for example, which is becoming extremely important in a cloud-native-driven world. And overall, we really see the maturity of security technologies on Intel Xeon 6 being a differentiator.

Ryan Tabrah: We built Intel Xeon 6 and the platform with security as the foundational building block from the ground up. It’s what we’ve been doing for several generations of Xeon, and we’re making confidential computing as easy and fundamental as possible in the partner ecosystem. With Intel Xeon 6 we are introducing new advances in quantum resistance and platform hardening to enable customers to meet their business goals with security, privacy, and compliance.

Is there anything that you’d like to add in closing?

Mike Masci: Intel Xeon 6 is in the position that’s necessary for AI at the edge and in the network. And we think the idea of an easy, frictionless platform that also serves multiple workloads easily, with composability, is a home run. To me that is the key message of Intel Xeon 6. It’s seamless and scalable, so you can have the same application running on the edge that you have in the data center, without worrying about what hardware it’s running on.

Ryan Tabrah: I agree. Especially in different environments and areas where people are just fundamentally running out of power in their data centers, whether it’s just because they can’t build them fast enough or there are new restrictions and clean energy requirements. We have the solutions in place from their edge to their data centers that just make it super easy for them to see the benefits.

And the best validation, I think, is the feedback from customers. They want more of it. They want to do more with us. They want to help us not only ramp up these processors as quickly as possible, but then build the next generation as quickly as possible, too. Because they’re excited that Intel is taking a leadership position in key critical parts of telco, edge buildout, infrastructure buildout, and data center—and we are excited to be leading with them.

 

Edited by Christina Cardoza, Editorial Director for insight.tech.

Technology Partnerships Pave the Way to Business Solutions

Many enterprises are eager to adopt the latest technologies, which can help them supercharge efficiency and light the way to better products and services. Innovative solutions are emerging rapidly, offering early adopters an attractive competitive edge. Yet deploying fully integrated solutions is so complicated and time-consuming that many organizations give up after initial trials.

Working with an experienced technology partner like a solutions aggregator can ease the frustrations of technology adoption and pave the way to successful deployment. A knowledgeable aggregator can offer end-to-end help specifically designed to accommodate a company’s existing infrastructure and future goals. Some can open the door to a worldwide network of partners and systems integrators. By overseeing the entire process of solution selection, integration, deployment, and scaling, an expert aggregator can proactively remove stumbling blocks and enable companies to get the right solutions up and running quickly at locations across the globe.

Technology Integration Roadblocks

Enterprises struggle to incorporate new technology for several reasons. Solutions usually require a mix of hardware and software components that must work together seamlessly and fit—or be made to fit—existing infrastructure. Since many large companies use different combinations of legacy and newer technology in different locations, assessing interoperability is a complex process. Multinational firms must also consider regional technology standards and regulations.

“If you’re an IT leader with dozens of objectives on your desk, the last thing you want to do is become a general contractor for every solution,” says Matt Powers, Vice President of Technology and Support Services at Wesco, a leading global supply chain solutions provider. “Engineering the design and choosing multiple contractors for each project becomes an elaborate exercise.”

And during that exercise, the connective infrastructure may shift, adding further complications. “You cannot imagine how fast solutions are evolving,” Powers says. “Innovations are constantly changing the interdependencies between technologies.”

Another hurdle is scaling. Companies often test potential solutions with encouraging results, only to be disappointed when they try to extend deployments.

“You see that a lot, especially with IoT solutions,” Powers says. “Companies will tell us, ‘We’re running our proof of concept (POC) and seeing the results and value we want, but now how do we scale this solution across our global enterprise?’ This is a major challenge for global customers as they need to access and localize technology for different regions. Additionally, they need to identify and work with deployment partners, such as integrators and contractors, to ensure the solution is implemented effectively.”

Technology Partnerships Deliver Innovative Solutions

With more than 100 years of experience as a solutions aggregator and distributor, Wesco can help a wide range of enterprises—including manufacturers, utilities, data centers, retail, and hospitality companies—avoid implementation problems and deploy the right solutions efficiently. The process begins with obtaining a thorough understanding of an organization’s needs.

“What we do differently from other companies is work very closely with stakeholders to understand their particular challenges and assess their opportunities for adding efficiencies or gaining return on their investments,” Powers says. “Once we do that, we can help lead them to the right ecosystem of solution providers and integrators.” To this point, Wesco’s vetted partner network includes more than 50,000 suppliers of hardware, software, and cloud solutions, and integrator partners across the globe.

“The number-one quality we look for in technology partners is their capacity for innovation,” Powers says. “Intel brings us a wide breadth of leading technologies, and the open architecture of its products allows our solutions providers and independent software vendors (ISVs) to develop platforms a variety of end users can access.”

Technology Integration: A Win-Win for Customers and Providers

Wesco strives to be a trusted advisor—suggesting the components, solutions, and partners that work best for each company’s unique environment. WaitTime, an ISV that builds crowd analytics solutions, is one example of how Wesco deploys complete solutions for its customers—from the network edge to the cloud.

WaitTime software applies Edge AI to computer vision cameras, providing information like capacity, crowd density, and shopper insights to venue operators. The software—powered by Intel—is optimized to process data on-site and provide alerts in near-real time.

With WaitTime, companies can catch and solve crowd problems sooner, pre-empting potential hazards and dispatching employees to chokepoints before problems occur. They can also learn where guests or shoppers spend their time, or which areas would benefit from wider pathways, better wayfinding, or other improvements. Making these changes can lead to higher revenue at shops or concession stands.

While WaitTime is simple to use once it’s set up, deployment involves far more than installing the platform. “The software is one piece of the technology. We can bring the other hardware and installation partners together to build an end-to-end solution,” Powers says.

What kind of providers? That depends on the organization.

Companies may or may not be able to upgrade existing security cameras with computer vision. And they must find networks and hardware that can reliably transmit and process enormous volumes of information while meeting all local security and privacy regulations.

These are just a few pieces of the puzzle that organizations must connect before deployment. Wesco can help them make sound decisions and select the right contractors and systems integrators to build, harmonize, and scale all elements of the solution according to their needs.

Technology and Experience Accelerate Business Success

As technology change accelerates, cutting-edge solutions become increasingly important to success, Powers says. “Innovation is rippling across industries quickly, and competition is not slowing down. By understanding how new technologies can help the business and how to deploy them at scale, enterprises can continue to thrive as new capabilities emerge.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

3D LiDAR Delivers Spatial Intelligence

Picture the last time you were at a concert or a similar large event in the heart of a busy city. You probably struggled to find parking or went through security clearance hassles at the gate. And it was not easy to battle lines at the concession stand or the restroom. Eliminating these annoyances would dramatically improve visitor experiences. It would help event organizers, too.

It’s why airports, city governments, and entertainment venues bet on LiDAR (Light Detection and Ranging), a pulsed laser-based technology especially suited to delivering spatial intelligence. LiDAR delivers information not just on the numbers of people and vehicles but also on their flow and interactions. This means that organizers can maintain security and spot and alleviate bottlenecks in real time.

Advantages of 3D LiDAR

“At large infrastructure sites and events, security and crowd management are not easy, but they’re jobs LiDAR is especially good at,” says Raul Bravo, President and Founder of Outsight, a leader in 3D LiDAR solutions.

“While most people might think of CCTV and IP video cameras when it comes to monitoring devices, their two-dimensional capabilities are limited for tracking a three-dimensional physical world,” Bravo says. “Unlike traditional computer vision, LiDAR can’t tell if a person is wearing a red shirt or green, but it knows that person’s speed or location—while delivering data as a 3D capture.”

Because of LiDAR’s capabilities, digital twins have been using the technology to obtain data about the physical world for a while now. “What is emerging is using LiDAR technology to not only map the static physical world but to digitize the real-time movement of people and vehicles,” Bravo says.

Also reassuring to organizations is that LiDAR is a natively anonymous solution. Because LiDAR does not capture images but only the distances of things, privacy is baked in by definition. Monitoring crowds without capturing people’s pictures maintains privacy, which is key to meeting a host of governmental regulations.

3D LiDAR Data Processing Challenges

While LiDAR has many advantages, processing the resulting data is not easy. Plugging 3D spatial intelligence data into traditional computer vision techniques delivers poor outcomes. Instead, “you have to create specific algorithms and techniques that tackle this specific kind of problem,” Bravo says.

Also challenging is the sheer volume of data that LiDAR generates. “When we deploy LiDAR at some of the biggest airports in the world, we have hundreds of LiDAR units at the same time,” Bravo says. “The data from each is the equivalent of a hundred people streaming Netflix.”

The diversity of designs, models, and manufacturers in the LiDAR space is also a problem. The Outsight platform addresses all these challenges and works with any LiDAR manufacturer or model. The solution develops living digital twins, feeding information about the physical world at a rapid enough clip—20 times per second—to deliver insights in real time. These insights can be routed to the right person, including in the form of alarms and alerts.
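The control flow described here (fresh object lists arriving 20 times per second, checked against rules, routed as alerts) can be sketched in a few lines of Python. This is a hypothetical illustration under assumed interfaces, not Outsight code: the `feed` object and its `latest()` method are invented for the sketch.

```python
# Minimal sketch of a 20 Hz spatial-intelligence loop: poll the latest
# tracked objects, apply a crowding rule to a zone, and raise an alert.
# The feed interface and zone rule are assumptions, not Outsight's API.
import time
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: int
    kind: str    # e.g., "person" or "vehicle"
    x: float     # position in meters, site coordinates
    y: float

def people_in_zone(objects, x_min, x_max, y_min, y_max):
    """Count tracked people inside a rectangular zone."""
    return sum(
        1 for o in objects
        if o.kind == "person" and x_min <= o.x <= x_max and y_min <= o.y <= y_max
    )

def monitor(feed, zone, threshold, period=0.05):
    """Poll roughly 20 times per second and alert on crowding."""
    while True:
        count = people_in_zone(feed.latest(), *zone)
        if count > threshold:
            print(f"ALERT: {count} people in zone (threshold {threshold})")
        time.sleep(period)  # 0.05 s matches the 20 Hz update rate cited above
```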

Visual-Spatial Intelligence Use Cases

The problems that Outsight helps resolve apply to smart cities, transportation hubs, and international events. If too many people crowd airport baggage check-in areas, the solution can alert officials to potential downstream traffic jams at the security lines, which they can then staff accordingly.

For example, the city of Bellevue, Washington, uses Outsight’s LiDAR solution to detect near-miss situations at intersections, when vehicles get too close to bicycles or pedestrians. LiDAR is especially well-equipped to capture such incidents at night. The information has helped the city take more proactive measures, such as clearer lane markings, toward its goal of eliminating traffic fatalities and severe injuries by 2030. Outsight LiDAR can also help smart cities address traffic flow problems in real time. For example, if a vehicle uses the wrong lane for merging, a flashing light can alert the driver to remedy the mistake.

Managing the physical flow of people and vehicles on a massive scale means ensuring both a smooth visitor experience and operational excellence. Key performance indicators like the length of a ticketing line and the time spent in it matter.
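Those two KPIs are also linked: for a stable queue, Little’s Law ties the average line length L, the average arrival rate λ, and the average time in line W. This is a standard queueing identity offered for context, not something specific to Outsight’s product.

```latex
W = \frac{L}{\lambda}
\qquad \text{e.g., } L = 60 \text{ people},\ \lambda = 2 \text{ people/s} \ \Rightarrow\ W = 30 \text{ s}
```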

At the 2024 Olympics in Paris, Outsight’s LiDAR solution helped with security and crowd management. Outsight was again pressed into service at Tomorrowland, one of the world’s largest music festivals, held annually in Belgium, which attracts hundreds of thousands of attendees.

The Technology Infrastructure for Spatial Intelligence

Spatial intelligence is about digitizing the physical world and creating insights out of it. Achieving this requires processing power that can handle large amounts of data with efficiency. Outsight depends on Intel products and technologies to deploy its solutions at scale.

“You need specific and highly efficient software algorithms that use CPU-based and not GPU-based solutions for energy efficiency,” Bravo says, which is what Intel CPUs deliver.

As for the future, Bravo is excited about the many possibilities—beyond smart cities, airports, and venues—for the digitization of the physical world. “You can have access to a wealth of unique insights and intelligence that was not even imagined before,” Bravo says. “We are entering a new world of digital transformation.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Multisensory AI: Reduce Downtime and Boost Efficiency

When you’re waiting by the side of the road for the tow truck, isn’t that always the moment when you realize you’ve neglected your 75,000-mile tuneup and safety check? The “check oil” light and low-tire pressure alert can avert dangerous situations, but you can still end up in that frustrating and time-consuming breakdown. Now scale up that inconvenience and lost productivity to the size of a factory, where nonfunctioning machinery can result in hugely expensive downtime.

That’s where predictive maintenance comes in. Machine learning can analyze patterns in normal workflow and detect anomalies in time to prevent costly shutdowns; but what happens with a new piece of equipment, where AI has no existing data to learn from? Can some of the attributes that make humans good—if inefficient—at dealing with novel situations be harnessed for machine-based inspections?

Rustom Kanga, Co-Founder and CEO of AI-based video analytics provider iOmniscient, has some answers for these and other questions about the future of predictive maintenance. He talks about the limitations of traditional machine learning for predictive maintenance, when existing infrastructure can be part of the prediction solution—and the situations when it can’t—and what in the world an e-Nose is (Video 1).

Video 1. Rustom Kanga, CEO of iOmniscient, discusses the impact of multisensory and intuitive AI on predictive maintenance. (Source: insight.tech)

What are the limitations to traditional predictive maintenance approaches?

Today when people talk of artificial intelligence, they normally equate it to deep learning and machine learning technologies. For example, if you want the AI to detect a dog, you get 50,000 images of dogs and label them: “This is a dog. That is a dog. That is a dog. That is a dog.” And once you’ve trained your system, the next time a dog comes along, it will know that it is a dog. That’s how deep learning works.

But if you haven’t trained your system on some particular or unique type of dog, then it may not recognize that animal. Then you have to retrain the system. And this retraining goes on and on and on—it can be a forever-training.

The challenge with maintenance systems is that when you install some new equipment, you don’t have any history of how that equipment will break down or when it will break down: You don’t have any data for doing your deep learning. And so you need to be able to predict what’s going to happen without that data.

So what we do is autonomous, multisensory, AI-based analytics. Autonomous means there’s usually no human involvement, or very little human involvement. Multisensory refers to the fact that humans use their eyes, their ears, their nose to understand their environment, and we do the same. We do video analysis, we do sound analysis, we do smell analysis; and with that we understand what’s happening in the environment.

How does a multisensory AI approach address some of the challenges you mentioned?

We have developed a capability called intuitive AI. Artificial intelligence is all about emulating human intelligence, and humans don’t just use their memory function—which is essentially the thing that deep learning attempts to replicate. Humans also use their logic function. They have deductive logic, inductive logic; they use intuition and creative capabilities to make decisions about how the world works. It’s very different from the way you’d expect a machine learning system to work.

“Multisensory refers to the fact that humans use their eyes, their ears, their nose to understand their environment, and we do the same” – Rustom Kanga, @iOmniscient1 via @insightdottech #AI

What we as a company do is we use our abilities as humans to advise the system on what to look for, and then we use our multisensory capabilities to look for those symptoms. For instance, if a conveyor belt has been installed and we want to know when it might break down, what would we look for to predict that it’s not working well? We might listen to its sound: when it starts going “clang, clang, clang,” something is wrong with it. So we use our ability to see the object, to hear it, to smell it to tell us how it’s operating at any given time and whether it’s showing any of the symptoms that we’d expect it to show when it’s about to break down.
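As a rough illustration of that “listen for symptoms” idea, the sketch below flags loud, repeated transients (the “clang”) in a microphone signal. The thresholds and structure are invented for the example; iOmniscient’s actual sound analytics are not described in this article.

```python
# Toy symptom detector: flag windows of audio that are much louder than
# the running baseline, then check whether several such transients occur
# close together (a "clang, clang, clang" pattern). Thresholds are
# illustrative assumptions.
import numpy as np

def transient_times(samples, rate, window=1024, energy_ratio=8.0):
    """Return times (seconds) of windows far louder than the median energy."""
    n = len(samples) // window
    frames = samples[: n * window].astype(np.float64).reshape(n, window)
    energy = (frames ** 2).mean(axis=1)
    baseline = np.median(energy) + 1e-12
    hits = np.where(energy > energy_ratio * baseline)[0]
    return hits * window / rate

def looks_like_clanging(samples, rate, min_hits=3, max_spacing=2.0):
    """Heuristic: at least min_hits strong transients, each within
    max_spacing seconds of the previous one."""
    t = transient_times(samples, rate)
    if len(t) < min_hits:
        return False
    return bool(np.all(np.diff(t[:min_hits]) < max_spacing))
```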

How do you train AI to do this, and to do it accurately?

We tell the system what a person would be likely to see. For instance, let’s say we’re looking at some equipment, and the most likely break-down scenario is that it will rust. We then tell the system to look for rust or for changes in color. Then, if the system sees rust developing, it will tell us that there’s something wrong and it’s time to look at replacing or repairing the machine.
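That rust rule can be approximated in a few lines: measure the fraction of rust-colored pixels and alert when it grows over time. OpenCV and the HSV color bounds below are assumptions made for the sketch, not iOmniscient’s implementation.

```python
# Toy "look for rust" rule: count pixels in a rust-like orange-brown HSV
# range and alert when coverage rises. Bounds are illustrative guesses.
import cv2
import numpy as np

RUST_LOW = np.array([5, 80, 50])      # OpenCV HSV: hue runs 0-179
RUST_HIGH = np.array([20, 255, 200])

def rust_fraction(bgr_image):
    """Fraction of pixels whose color falls in the rust-like range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, RUST_LOW, RUST_HIGH)
    return float(np.count_nonzero(mask)) / mask.size

def rust_growing(history, new_fraction, rise=0.05):
    """Alert when rust coverage rises noticeably above the first reading."""
    history.append(new_fraction)
    return new_fraction - history[0] > rise
```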

And intuitive AI doesn’t require massive amounts of data. We can train our system with maybe 10 examples, or even fewer. And because it requires so little data, it doesn’t need massive amounts of computing; it doesn’t need GPUs. We work purely on standard Intel CPUs, and we can still achieve accuracy.

We recently implemented a system for a driverless train. The customer wanted to make sure that nobody could be injured by walking in front of the train. That really requires just a simple intrusion system. In fact, camera companies provide intrusion systems embedded into their cameras. And the railway company had done that—had bought some cameras from a very reputable company to do the intrusion detection.

The only problem was that they were getting something like 200 false alarms per camera per day, which made the whole system unusable. So they set the criterion that they wanted no more than one false alarm across the entire network. We were able to achieve that for them, and we’ve been providing the safety system for their trains for the last five years.

Do your solutions require the installation of new hardware and devices?

We can work with anybody’s cameras, anybody’s microphones—of course, the cameras do have to be able to see what you want seen. Then we provide the intelligence. We can work with existing infrastructure for video and for sound.

Smell, however, is a very unique capability. Nobody makes the type of smell sensors that are required to detect industrial smells, so we have built our own e-Nose to provide to our customers. It’s a unique device with six or so sensors in it. There are sensors on the market, of course, that can detect single molecules. If you want to detect carbon monoxide, for example, you can buy a sensor to do that. But most industrial chemicals are much more complex. Even a cup of coffee has something like 400 different molecules in it.
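The “pattern across sensors” idea can be sketched as nearest-signature matching on the multi-sensor response vector. The six-element vectors mirror the e-Nose description above, but the signature values and threshold are invented for illustration.

```python
# Toy e-Nose classifier: a smell is a vector of six sensor readings,
# matched against known signatures by Euclidean distance. All reference
# values here are fabricated for the sketch.
import numpy as np

SIGNATURES = {
    "coffee":  np.array([0.82, 0.10, 0.45, 0.33, 0.71, 0.05]),
    "tea":     np.array([0.40, 0.08, 0.52, 0.29, 0.22, 0.03]),
    "solvent": np.array([0.05, 0.91, 0.12, 0.66, 0.09, 0.80]),
}

def classify_smell(reading, max_distance=0.35):
    """Return the closest known signature, or 'unknown' if nothing is near."""
    reading = np.asarray(reading, dtype=float)
    best, best_d = "unknown", float("inf")
    for name, signature in SIGNATURES.items():
        d = float(np.linalg.norm(reading - signature))
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_distance else "unknown"
```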

Can you share any other use cases that demonstrate the iOmniscient solution in action?

I’ll give you one that demonstrates the real value of a system like this in terms of its speed. Because we are not labeling 50,000 objects, we can actually implement the system very quickly. We were invited into an airport to detect problems in their refuse rooms—the rooms under the airport where garbage from the airport itself and from the planes that land there is collected. This particular airport had 30 or 40 of them.

Sometimes, of course, garbage bags break and the bins overflow, and the airport wanted a way to make sure that those rooms were kept neat and tidy. So they decided to use artificial intelligence systems to do that. They invited something like eight companies to come in and do proofs of concept. They said, “Take four weeks to train your system, and then show us what you can do.”

After four weeks, nobody could do anything. So they said, “Take eight weeks.” Then they said, “Take twelve weeks.” And none of those companies could actually produce a system that had any level of accuracy, just because of the number of variables involved.

And then finally they found us, and they asked us, “Can you come and show us what you can do?” We sent in one of our engineers on a Tuesday afternoon, and on Thursday morning we were able to demonstrate the system with something like 100% accuracy. That is how fast the system can be implemented when you don’t have to go through 50,000 sets of data for training. You don’t need massive amounts of computing; you don’t need GPUs. And that’s the beauty of intuitive AI.

What is the value of the partnership with Intel and its technology?

We work exclusively with Intel and have been a partner with them for the last 23 years, with a very close and meaningful relationship. We can trust the equipment Intel generates; we understand how it works, and we know it will always work. It’s also backward compatible, which is important for us because customers buy products for the long term.

How has the idea of multisensory intuitive AI evolved at iOmniscient?

When we first started, there were a lot of people who used standard video analysis, video motion detection, and things like that to understand the environment. We developed technologies that worked in very difficult, crowded, and complex scenes, and that positioned us well in the market.

Today we can do much more than that. We do face recognition, number-plate recognition—which is all privacy protected. We do video-based, sound-based, and smell-based systems. The technology keeps evolving, and we try to stay at the forefront of that.

For instance, in the past, all such analytics required the sensor to be stationary: If you had a camera, it had to be stuck on a pole or a wall. But what happens when the camera itself is moving—if it’s a body-worn camera where the person is moving around or if it’s on a drone or on a robot that’s walking around? We have started evolving technologies that will work even on those sorts of moving cameras. We call it “wild AI.”

Another example is that we initially developed our smell technology for industrial applications—things like waste-management plants and airport toilets. But we have also discovered that we can use the same device to smell the breath of a person and diagnose early-stage lung cancer and breast cancer.

Now, that’s not a product we’ve released yet; we’re going through the clinical tests and clinical trials that one needs to go through to release it as a medical device. But that’s where the future is. It’s unpredictable. We wouldn’t have imagined 20 years ago that we’d be developing devices for cancer detection, but that’s where we are going.

Related Content

To learn more about multisensory AI, listen to Multisensory AI: The Future of Predictive Maintenance and read Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on X/Twitter at @iOmniscient1 and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Multisensory AI: The Future of Predictive Maintenance

Downtime is a costly killer. But traditional predictive maintenance methods often fall short. Discover how multisensory AI is used to uplevel equipment maintenance.

Multisensory AI uses sight, sound, and smell to accurately predict potential equipment failures, even with limited training data. This innovative approach can help businesses reduce downtime, improve efficiency, and save costs.

In this podcast, we explore how to successfully implement multisensory AI into your existing infrastructure and unlock its full potential.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guest: iOmniscient

Our guest this episode is Rustom Kanga, Co-Founder and CEO of iOmniscient, an AI-based video analytics solution provider. Rustom founded the company 23 years ago, before AI was “fashionable.” Today, he works with his team to offer smart automated solutions across industries around the world.

Podcast Topics

Rustom answers our questions about:

  • 2:36 – Limitations to traditional predictive maintenance
  • 4:17 – A multisensory and intuitive AI approach
  • 7:23 – Training AI to emulate human intelligence
  • 8:43 – Providing accurate and valuable results
  • 12:54 – Investing in a multisensory AI approach
  • 14:40 – How businesses leverage intuitive AI
  • 18:16 – Partnerships and technologies behind success
  • 19:36 – The future of multisensory and intuitive AI

Related Content

To learn more about multisensory AI, read Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on X/Twitter at @iOmniscient1 and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, edge, AI, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today I’m joined by Rustom Kanga from iOmniscient to talk about the future of predictive maintenance. Hi, Rustom. Thanks for joining us.

Rustom Kanga: Hello, Christina.

Christina Cardoza: Before we jump into the conversation, I love to get to know a little bit more about yourself and your company. So, what can you tell us about what you guys do there?

Rustom Kanga: I’m Rustom Kanga, I’m the Co-Founder and CEO of iOmniscient. We do autonomous, multisensory, AI-based analytics. Autonomous means there’s usually no human involvement, or very little human involvement. Multisensory refers to the fact that humans use their eyes, their ears, their nose, to understand their environment, and we do the same. We do video analysis, we do sound analysis, we do smell analysis, and with that we understand what’s happening in the environment.

And we’ve been doing this for the last 23 years, so we’ve been doing artificial intelligence long before it became fashionable, and hence we’ve developed a whole bunch of capabilities which go far beyond what is currently talked about in terms of AI. We’ve implemented our systems in about 70 countries around the world in a number of different industries. This is technology that goes across many industries and many areas of interest for our customers. Today we are going to, of course, talk about how this technology can be used for predictive and preventative maintenance.

Christina Cardoza: Absolutely. And I’m looking forward to digging in, especially when you talk about all these different industries you’re working in—railroad, airports. It’s extremely important that equipment doesn’t go down, nothing breaks, that we can predict things and don’t have any downtime. This has been something that I think all these industries have been looking to strive for quite some time, but doesn’t seem like we’ve completely achieved it, or there are still accidents, or the unexpected still happens. So I’m curious, when it comes to detecting equipment failure and predictive maintenance, what have been the limitations to traditional approaches?

Rustom Kanga: Today, when people talk of artificial intelligence, they normally equate it to deep learning and machine learning technologies. And you know what that means, I’m sure. For example, if you want to detect a dog, you’d get 50,000 images of dogs, you’d label them, and you say, “This is a dog, that’s a dog, that’s a dog, that’s a dog.” And then you would train your system, and once you’ve trained your system the next time a dog comes along, you’d know it’s a dog. That’s how deep learning works.

The challenge with maintenance systems is that when you install some new equipment, you don’t have any history of how that equipment will break down or when it’ll break down. So the challenge you have is you don’t have any data for doing your deep learning. And so you need to be able to predict what’s going to happen without the data that you can use for deep learning and machine learning. And that’s where we use some of our other capabilities.

Christina Cardoza: Yeah, that image that you just described—that is how I often hear thought-leaders talk about predictive maintenance, is the machine learning collecting all this data and detecting patterns. But, to your point, it goes beyond that. And if you’re implementing new technology or new equipment, how do you find that you don’t have that data and you don’t have that pattern?

I want to talk about first, though—the multisensory approach that you brought in your introduction, how does this address some of those challenges that you just mentioned and bring more of a natural, I guess, human inspection to predictive maintenance, human-like inspection?

Rustom Kanga: Well, it doesn’t involve human inspection. First of all, as we saw, you don’t have any data, right, for predicting how the product will break down. Well, very often with new products you might have a mean time between failures of, say, 10 years. That means you have to wait 10 years before you actually know how or when or why it’ll break down. So you don’t have any data, which means you cannot do any deep learning.

So what are the alternatives? We have developed a capability called intuitive AI which uses some of the other aspects of how humans think. Artificial intelligence is all about emulating human intelligence, and humans don’t just use their memory function, which is essentially what deep learning attempts to replicate. Humans also use their logic function. They have deductive logic, inductive logic; they use intuition and creative capabilities and so on to make decisions on how the world works. So it’s very different to the way you’d expect a machine learning system to work.

So what we do is we use our abilities as a human to advise the system on what to look for, and then we use our multisensory capabilities to look for those symptoms. For instance, just as an example, if a conveyor belt has been put in place, has been installed, and we want to know if it is about to break down, what would you look for to predict that it’s not working well? You might listen to its sound, for instance; you might know that when it starts going clang, clang, clang, that something’s wrong in it. So we can use our ability to see the object, to hear it, to smell it, to tell us how it’s operating at any given time and whether it’s showing any of the symptoms that you’d expect it to show when it’s about to break down.

Christina Cardoza: That’s amazing. And of course there’s no humans involved, but you’re adding the human-like elements into it, say that somebody manually inspecting would look for—if anything’s smoking, if they smell anything, if they hear any abnormal noises. So, how do you train AI to be able to provide this interactive or be able to detect these capabilities when it is just artificial intelligence or a sensor on top of a system?

Rustom Kanga: Exactly how you said you do it: you tell the system what you’re likely to see. For instance, let’s say you’re looking at some equipment, and the most likely scenario is that it’s likely to rust, and if it rusts there’s a propensity for it to break down. You then tell your system to look for rust, and over time it’ll look for the changes in color. And if the system sees rust developing, it’ll start telling you that there’s something wrong with this equipment. It’s time you looked at replacing it or repairing it or whatever.

Christina Cardoza: Great. Now I want to go back to training the AI and the data sets—like we talked about how do you do this for new equipment? I think there’s a misconception for a lot of providers out there that they need to do that extensive training that takes a long time; they need that data to uncover these patterns to learn from them, to identify these abnormalities. So, how is your solution or your company able to do this with less data sets but ensure that it is accurate and it does provide value and benefits to end user or organization?

Rustom Kanga: Well, as I said, the traditional approach is to do deep learning and machine learning, which requires massive data sets, and you just don’t have them in some practical situations. So you have to use other methods of human thinking to understand what is happening. And these are the methods which we call intuitive AI. They don’t require massive amounts of data; we can train our system with something like, maybe 10 examples of the data set or even less. And because you require so few data sets, you don’t need massive amounts of computing, you don’t need GPUs.

And so everything we do is done with very little training, with no GPUs. We work purely on the standard Intel CPUs, and we can still achieve accuracy. Let me give you an example of what I mean by achieving accuracy. We recently implemented a system for a driverless train system. They wanted to make sure that nobody walked in front of the train, because obviously it’s a driverless train and you have to stop it, and that requires just a simple intrusion system.

And there are hundreds of companies who do intrusion. In fact, camera companies provide intrusion systems as part of their—embedded into their cameras. And so the railway company we were talking to actually did that. They bought some cameras from a very reputable camera company and they could do the intrusion, the intrusion detection.

The only problem they had was they were getting something like 200 false alarms per camera per day, which made the whole system unusable. Then finally they set the criteria that they want no more than one false alarm across the entire network. And they found us, and they brought us in, and we could achieve that. And, in fact, with that particular train company we’ve been providing them with a safety system for their trains for the last five years.

So you can see that the techniques we use actually provide you with very high accuracy, much higher than you can get with some of these traditional approaches. In fact, with deep learning you have the significant issue that it has to keep learning continuously almost forever. For instance, you know the example I gave you of detecting dogs and recognizing dogs? You have 50,000 dogs, you train your system, you recognize the next dog that comes along; but if you haven’t trained your system on a particular type, unique type of dog, then the system may not recognize the dog and you have to retrain the system. And this type of training goes on and on and on—it can be a forever training. You don’t necessarily require that in an intuitive-AI system, which is the type of technology we are talking about.

Christina Cardoza: Yeah, I could see this technology being useful in other scenarios too, rather than just different types of dogs. I know sometimes equipment moves around on a shop floor or things change, and if you move camera and positioning, usually you have to retrain the AI from there because of that relationship that has been changed. So it sounds like that’s something that it would be able to continue to provide the results without having to be completely retrained if you move things around.

In that railroad example that you gave, you mentioned how they installed cameras to do some of the things that they were looking to do. But if the—I know a lot of times manufacturers shops and the railroad systems, they have their cameras, they’re monitoring for safety and other things. Now, if they wanted to be able to take advantage of your capabilities on top of their already existing infrastructure, is that something that they would be able to do? Or does it require the installation of new hardware and devices?

Rustom Kanga: Well, in that example of the railway we use the existing cameras that they had put in in the first place. We can work with anybody’s cameras, anybody’s microphones. Of course the cameras are the eyes; we are only the brain. So the cameras have to be able to see what you want to see. We provide the intelligence, and we can work with existing infrastructure for video, for sound, for smell.

Smell is a very unique capability. Nobody makes the type of smell sensors that are required to actually smell industrial smells. So we have built our own e-Nose which we provide our customers with. It’s a unique device with something like six sensors in it. You do get sensors in the market, of course, for single molecules. So if you wanted to detect carbon monoxide, you can get a sensor for carbon monoxide.

But most industrial chemicals are much more complex. For instance, even a cup of coffee has something like 400 different molecules in it. And so to understand that this is coffee and not tea you need a sensor of the type of our e-Nose, which has multiple sensors in it. By understanding the pattern that is generated across all those sensors, we know that it is this particular product rather than something else.

Christina Cardoza: So I’m curious, I know we talked about the railroad example, but since your technology spans across all different types of industries, do you have any other use cases or customer examples that you can share with us?

Rustom Kanga: Of course. You know, we have something like 300 use cases that we’ve implemented across 30 different industries, and if you just look at predictive maintenance, it could be a conveyor belt, as I said, that is likely to break down, and you can understand whether it’s going to break down based on its sound. It might be a rubber belt used in an elevator; it might be products that might rust and you can detect the level of rusting just by watching it, by looking at it using a camera. You can use smell; you can use all these different senses to understand what is the current state of that product.

And in terms of examples across different industries, I’ll give you one which demonstrates the real value of a system like this in terms of its speed. Because you are not labeling 50,000 objects you can actually implement the system very quickly. We were invited into an airport to detect problems in their refuse rooms. Refuse rooms are the garbage rooms that they have under the airport. And this particular airport had 30 or 40 of them where the garbage from the airport and from the planes that land over there and so on—it’s all collected over there.

And of course when the garbage bags break and the bins overflow, you can have all sorts of other problems in those refuse rooms, so they wanted to keep these neat and tidy. And to make sure that they were neat and tidy, they decided to use artificial intelligence systems to do that. And they invited, I think it was about eight companies to come in and do POCs over there—proofs of concept. Now they said, “Take four weeks. Train your system and show us what you can do.”

And after four weeks nobody could do anything. So they said, “Take eight weeks.” Then they said, “Take twelve weeks and show us what you can do.” And none of those companies could actually produce a system that had any level of accuracy, just because of the number of variables involved. There are so many different things that can go wrong in that sort of environment.

And then finally they found us, and they asked us, “Can you come and show us what you can do?” So we went, sent in one of our engineers on a Tuesday afternoon, and on that Thursday morning we were able to demonstrate the system with something like 100% accuracy. That is how fast the system can be implemented, because you don’t have to go through 50,000 sets of data that you have to train. You don’t need massive amounts of computing, you don’t need GPUs. And that’s the beauty of intuitive AI.

Christina Cardoza: Yeah, that’s great. And you mentioned you’re also using Intel CPUs. I should mention, insight.tech and the “insight.tech Talk,” we are sponsored by Intel. So I’m curious, how do you work with Intel? And the value of that partnership and the technology in making some of these use cases and solutions successful.

Rustom Kanga: Being a partner of Intel for the last 23 years, and so we work exclusively with Intel, we’ve had a very close and meaningful relationship with them over these years. And we find that the equipment that they generate has benefit in that it is—we can trust it, we know it’ll always work, we understand how it works. It’s always backward compatible, which is important for us because customers buy products for the long term. And because it delivers what we require, we do not need to use anybody else’s GPUs, and so on.

Christina Cardoza: Yeah, that’s great. And I’m sure they’re always staying on top of the latest innovation, so it allows you to scale and provides that flexibility as multisensory AI continues to evolve. So, since you said in the beginning you guys started with AI before it was fashionable, I’m curious, how has it evolved—this idea of multisensory intuitive AI? How has it evolved since you’ve started, and where do you think it still has to go, and how will the company be a part of that future?

Rustom Kanga: Well, it’s been a very long journey. When we first started we focused on trying to do things that were different to what everybody else did. There were a lot of people who used standard video analysis, video motion detection, and things like that to understand the environment. And we developed technologies that worked in very difficult, crowded, and complex scenes that positioned us well in the market.

Today we can do much more than that. We can—we do face recognition, number-plate recognition—it’s all privacy protected. As I said, we do video-, sound-, and smell-based systems. Where are we going? The technology keeps evolving, and we try and stay at the forefront of that technology.

For instance, in the past all such analytics required the sensor to be stationary. For instance, if you had a camera, it had to be stuck on a pole or a wall somewhere. But what happens when the camera itself is moving? For instance, on a body-worn camera where the person is moving around or on a drone or on a robot that’s walking around. So we have started evolving technologies that’ll work even on those sorts of moving cameras, and we call that “wild AI.” It works in very complex scenes, in moving environments where the sensor itself is moving.

Another example is where we’ve started—we’d initially developed our smell technology for industrial applications, for things like waste-management plants, for things like airport toilets. They clean the toilet every four hours, but it might start smelling after 20 minutes. So the toilet itself can say, “Hey, I’m Toilet Six, come back and clean me again.” It can be used in hospitals where a person might be incontinent and you can say to the nurse, “Please go and help the patient in room 24, address the smelling.” And so on. It can be used for industrial applications of a number of types.

But we also discovered that we could use the same device to smell the breath of a person, and using the breath we can diagnose early-stage lung cancer and breast cancer. Now, that’s not a product we’ve released yet. It is—we are going through the clinical tests and clinical trials that one needs to go through to release this as a medical device, but that’s where the future is. It’s unpredictable. We wouldn’t have imagined 20 years ago that we’d be developing devices for cancer detection, but that’s where we are going.

Christina Cardoza: It’s amazing to see, and I can’t wait to see what else the company comes up with and how you guys continue to transform industries and the future. I want to thank you, Rustom, again for coming onto the podcast; it’s been a great conversation.

And thanks to our listeners. I invite all of our listeners to follow us along on insight.tech as we continue to cover partners like iOmniscient and what they’re doing in this space, as well as follow along with iOmniscient on their website and their social media accounts so that you can see and be a part of some of these technologies and evolutions that are happening. So thank you all again, and until next time this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Unlocking New Possibilities with 3D LiDAR

We’re all familiar with the concept of radar. But did you know that the word is actually an acronym for “radio detection and ranging” that has shed its original uppercase look and become a common noun and a common idea? (As well as a beloved character on the TV show M*A*S*H.) What, then, is the related but more techie-looking “LiDAR”? This one stands for “light detection and ranging,” and it’s not actually a new technology, but it has been gaining a lot more interest lately, particularly in autonomous vehicles and terrestrial mapping, though its uses go well beyond self-driving cars and archeology.

Recently we spoke with Gerald Becker, VP of Market Development and Alliances at AI-powered 3D LiDAR solution provider Quanergy Solutions. He has seen the technology advance beyond automotive and across many different industries and businesses. He talks about how LiDAR improves operational efficiencies and workflow, benefits of moving from 2D to 3D, and challenges of persuading people to adopt new technologies (Video 1). And maybe one day soon, LiDAR will be so much a part of our lives that we’ll see it in the dictionary as “lidar.”

Video 1. Gerald Becker, VP of Market Development and Alliances at Quanergy, talks about the rise of, and advancements with 3D LiDAR on the “insight.tech Talk.” (Source: insight.tech)

How does LiDAR go beyond autonomous vehicles?

LiDAR has been around for decades, but it wasn’t until the past 10 years or so that we’ve really seen what it can do. Everybody knows about LiDAR being used for automotive—that’s been the holy grail—and robotics and terrestrial mapping, but there are a lot of other applications for it.

At Quanergy, we’ve pivoted and gone after a different market, where we’ve aligned with a who’s who of players from physical security, integration-management platforms, video management, software solutions, cameras, business intelligence, and physical-access control systems. They’ve integrated our sensors into their platforms to provide all kinds of event-to-action workflows. It’s giving end users the ability to explore how to solve old problems in different ways and to get higher levels of accuracy that they’ve never been able to do before—as well as to solve new problems.

I head up the physical-security, smart space, and smart city sectors at Quanergy, and there’s so much 3D LiDAR applicability in those three markets because they’ve always been confined to using cameras or other types of IoT sensors that are 1D or 2D technologies. The advent of 3D technologies and the integration ecosystem that we’ve developed in the past few years provide so much more flexibility to see beyond two dimensions, to see beyond what’s been the common custom of sensing in this space.

How can that new depth of dimension benefit businesses?

In security, for example, we’re doing some very, very big things in applications that predominantly use radar, cameras, and video analytics, where our 3D sensors can now provide depth and volume in 360º with centimeter-level accuracy. This improves total cost of ownership (TCO) compared to legacy technologies and decreases the number of false alarms.

In legacy technologies, anytime that there’s movement or anytime an analytic tracks a potential breach, it automatically starts triggering events. That’s a big problem when there are thousands and thousands of alarms just because the analytic doesn’t understand how to decipher that it’s only an animal walking by. Our sensors are able to provide 98% detection, tracking, and classification accuracy in 3D spaces.
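The mechanism behind that reduction can be sketched simply: alarm only on confidently classified objects of interest, rather than on any motion. The class names and confidence field below are illustrative assumptions, not Quanergy’s event schema.

```python
# Toy alarm filter: motion alone never alarms; only confidently
# classified objects of interest do. Field names are assumptions.
from dataclasses import dataclass

ALARM_CLASSES = {"person", "vehicle"}   # ignore animals, foliage, debris

@dataclass
class Detection:
    label: str         # e.g., "person", "animal"
    confidence: float  # 0.0 to 1.0
    zone: str

def should_alarm(detection, min_confidence=0.9):
    """Trigger only for high-confidence detections of classes that matter."""
    return (
        detection.label in ALARM_CLASSES
        and detection.confidence >= min_confidence
    )
```

Under this rule, an animal walking by, at any confidence, is logged rather than alarmed, which is exactly the failure mode of the legacy motion-based analytics described above.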

From the business-intelligence side, we’re able to provide a higher-level, deeper understanding of what’s going on within a space. Take retail. We can understand where a consumer is going through their journey, what path they’re taking, what products they’re touching, how long the queue lines are for them.

And instead of sticking a camera here, here, here, and stitching them all together, you put in one LiDAR sensor that gives you a full 360º, and you’re able to see that whole space and see how people interact in these spaces. We’re able to provide so many cool outcomes that have just never been able to be done with 2D-sensing technology.

What are some of the challenges that LiDAR is up against in terms of adoption?

I think that with LiDAR, some people may be a little nervous adopting a new technology if it’s out of their comfort zone. When I explain what LiDAR sees, I always revert back to my favorite movie of all time, The Matrix. Remember when Neo saw the ones and zeros dropping from the sky when he saw Agent Smith down the hall? That’s how we see. We don’t see like cameras do, where you could tell that I have on a blue polo shirt. To us, everything looks like a 3D silhouette with depth and volume in 360º.

There is also cost. You have to look at it from a high level. I always use this analogy that I heard when I was young from more senior sales guys—the whole iceberg theory. You can’t just look at the top of the iceberg when comparing what different solutions will cost. A camera may be only a few hundred dollars, while LiDAR may be a few thousand—plus software, et cetera, et cetera.

But the underlying cost is beneath the iceberg, right? What is it going to take to install seven to eight cameras on the one side versus one device? Look at labor; look at the cost of conduit, cable, licensing, the maintenance that’s required to deploy those cameras. So that’s when LiDAR becomes really cost-effective, when you understand the complexity of installation of legacy technology versus new technology in that area.

#LiDAR becomes really cost-effective when you understand the complexity of installation of legacy #technology versus new technology in that area. @quanergy via @insightdottech

How can companies leverage their existing infrastructure for 3D LiDAR?

A layered approach to any solution is probably the best route. There’s not one single technology in the world that can solve all use cases. Is someone trying to sell you on that? Please turn around and run, because it just can’t be done. But when you put the best-of-breed solutions together in your deployment, you’re going to get the best outcomes.

We have a large ecosystem of technology partners that we’ve integrated with. For example, we partner with 2D-imaging technologies: cameras, like your Bosch, your Axis, your Hanwha. If you need to identify something—there’s a bad guy wearing a blue polo shirt that’s potentially going to break through that fence! The camera helps us see that. But when you need to actually detect, track, and classify, that’s when LiDAR opens up new outcomes that you can’t get with just a camera.

Let’s say you use traditional pan-tilt-zoom auto tracking on an embedded camera. The issue with traditional 2D technology and auto tracking is that when Mr. Blue Polo goes behind an object or into another area, the camera doesn’t know what’s happening.

But if you have enough of our lasers shooting throughout the space, seeing up and down aisles, halls, and parking spaces, they’re able to accurately detect the object or person. With our solution, we can tell the camera, “Hey camera, stay focused on this wall. We know the person is behind the wall.” Then when the person comes out from behind the wall, we’re still telling the camera to track Mr. Blue Polo.

The other beautiful thing about the solution is that we provide a mesh architecture. If you have enough LiDARs in a space, as long as the lasers overlap with one another, it creates this massive digital twin. It gives you a flexibility that has never been possible with other technologies. You can literally zoom in and pan around up and down corridors, up and down hallways, other sides of walls, around a tree, around whatever it may be.
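The handoff described above (hold the camera on the occlusion point until the LiDAR track re-emerges) reduces to a small piece of control logic. The `camera.point_at` interface and the track fields are hypothetical stand-ins, not Quanergy’s API.

```python
# Sketch of occlusion-aware camera steering driven by LiDAR tracks:
# follow the live track while visible, otherwise hold on the last known
# position ("stay focused on this wall") until the track re-emerges.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    x: float
    y: float
    visible: bool  # False while the target is predicted to be occluded

def aim_camera(camera, track, last_seen):
    """Point the camera at the track, or hold on the occlusion point."""
    if track.visible:
        last_seen[track.track_id] = (track.x, track.y)
        camera.point_at(track.x, track.y)
    else:
        x, y = last_seen.get(track.track_id, (track.x, track.y))
        camera.point_at(x, y)  # hold until the person comes back out
```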

Can you talk about some of your customer use cases?

There’s a global data-center company that came to us with a very specific problem. Within a 33-week period of testing at one of their sites, they were generating 178,000 alarms. Now this is by definition a needle-in-the-haystack situation, when I tell you that only two of those alarms were real. Think of the operation to acknowledge an alarm within a security practice: Click. Review. That isn’t it? Delete. Try doing that 178,000 times to find the one time when that disgruntled employee who got fired for something and shouldn’t be at the property at all comes in with a USB drive, plugs into the network, and takes down a billion-dollar organization.

The people at this company knew they had a problem, and they tested everything under the sun—AI, radar, fact-checking technology, underground cable. They finally landed on our solution and ran a shootout, comparing one of their best sites with ours. Their best site generated 22,000 alarms; our site generated five actual alarms. It saved them 3,600 hours of pointless investigation work.

Here’s another interesting one. In Florida there are a lot of drawbridges. They go up and they go down, and they’re susceptible to liability issues if people or vehicles can accidentally fall into the waterway in the transition process. Some initial tests were done with our LiDAR solutions positioned on both sides of the bridge to basically track if an object—a person or a vehicle—came into the scene. And if anything did, it could either hold the bridge from going up or notify the bridge tender in the kiosk and say, “Do not let the bridge up.” They had very high success with that POC using LiDAR, and they’re now deploying it across several bridges in Florida.

Tell me more about the ecosystem of partners you work with.

Unlike most LiDAR solutions, which are heavily focused on GPU processing with a ton of data that needs to be processed, we’re a little bit different. Our sensors are purpose-built for flow management and security applications; they don’t need to gather and push a ton of data through the pipe. So we have a CPU-based architecture, which means it’s more cost-effective. It’s also highly scalable, even more so since we align with Intel.

Our partnership with Intel also means that we find out new use cases on a daily basis. Right now we’re exploring brick-and-mortar and warehouse automation with them, where we could provide 3D sensing beyond the traditional way of looking at those types of spaces. The partnership with Intel is really valuable to us as we continue to scale and grow.

How do you anticipate that this space will evolve going forward?

There’s the advent of AI and what’s going on with large language models. There’s a ton of work being done right now with computer vision to understand much more about what’s captured within a scene, to draw out more generalities that can create different outcomes and tell a different story that ultimately gets you to the end result. Is it a good guy or a bad guy? Is it a good workflow or is it not?

So there’s much more that can be done with LiDAR as we marry it with AI technologies, providing additional outcomes that are just not being done yet. We’re still in the very early stages, but there’s really just a massive opportunity in this space.

We’re past that early phase with LiDAR—the kick-the-tires phase—and there are so many people who are now talking about how it has increased their workflows and provided additional value. So I think, now more so than ever, it’s a time to act and start testing, to start asking the question: What can LiDAR do for me that I haven’t been able to do before? Look at your existing use cases and ask yourself: If I had depth, if I had volume, if I had centimeter-level accuracy—how could that improve my day-to-day workflow, my job, and provide more value to the organization as a whole?

Related Content

To learn more about 3D LiDAR, watch See the Bigger Picture with 3D LiDAR Applications. For the latest innovations from Quanergy, follow them on X/Twitter at @quanergy and LinkedIn.

 

This transcript was edited by Erin Noble, copy editor.

IPCs Speed Time to Market for Medical Device Builders

AI-powered medical devices can make a crucial difference in all phases of care: from diagnostic tools, to AI-enhanced digital operating rooms, to postoperative AI analytics that augment patient recovery.

It’s a time of unprecedented opportunity for medical equipment manufacturers. But to capitalize on that opportunity, they need to find innovative ways to shorten development timelines, overcome hardware integration challenges, and meet the medical industry’s stringent certification standards.

To this end, manufacturers are turning to industrial PCs (IPCs) purpose-built for medical AI use cases that can reduce risk and uncertainty.

“In an industry with long product life cycles, medical device makers are understandably wary of attempting to incorporate complex AI technology into their solutions,” says Emily Teng, Associate Director of Product Management at Advantech, a leading IoT intelligent systems and embedded platforms provider. “Building on proven hardware platforms designed for medical AI helps to simplify the process and speed time to market.”

To accommodate a broad range of use cases, Advantech medical platforms are designed for both compliance and customization. This puts equipment manufacturers in a “best of both worlds” situation, because they have access to computing platforms that can be used off the shelf or through OEM/ODM and joint development models.

Development Models for AI Medical Devices

Advantech’s work with two medical solutions providers highlights the benefits of IPCs for device manufacturers—and shows how they can support differing product development models as needed.

In one case, an OR solutions integrator was attempting to develop a new video solution using an ODM approach. But it was concerned about potential liability that a custom hardware design would bring, and struggled with the engineering challenges involved.

To solve these problems, the manufacturer decided to use the Advantech USM-500 computer as the basis for its new solution. The USM-500 is designed to meet compliance requirements such as IEC 60601-1, which specifies electrical safety standards for medical equipment, as well as electromagnetic compatibility (EMC) standards that prevent interference with other devices. This meant the company didn’t have to go through the time-consuming and costly hardware certification process or take on additional liability. The result was a significantly accelerated rollout: from RFQ to mass production in just six months.

“Manufacturers that want to avoid a lengthy certification process can use our solution ‘as is’ because everything is fully configured and certified for medical uses, from the computing system, to the video capture, and network interface controller cards,” says Matt Wieborg, Solution Architect at Advantech. “But some customers need greater customization, so we designed the IPC with extensive expansion capabilities, which provides a strong underlying foundation with enough flexibility for manufacturers to fine-tune what they need and still make it to production with relative ease.”

For example, another medical device maker planned to build a solution with a more interactive user interface, using a video capture card that had high power requirements.

Advantech’s design team worked with the company’s engineering group to create a tailored design. They built a 10-inch LCD display into the unit’s front bezel to address the UI requirement. To support the power demands of their customer’s preferred video capture card, Advantech adjusted the unit’s I/O accordingly. The result was a performant solution that leveraged the IPC’s core capabilities while providing the customization the manufacturer needed.

When #medical equipment #manufacturers build on tested #computing platforms, they can get their solutions into #clinical settings more efficiently. @Advantech_USA via @insightdottech

Ensuring Stability and Security at the Clinical Edge

When medical equipment manufacturers build on tested computing platforms, they can get their solutions into clinical settings more efficiently. And because these platforms are designed by hardware experts, there are added data security and product longevity benefits as well.

Advantech, for example, has helped its partners enhance cybersecurity and data privacy in several ways. It has introduced device makers to hardware industry best practices, such as using trusted platform modules and built-in encryption engines, to strengthen cybersecurity. By providing high-performance processing at the edge, it has also bolstered privacy by reducing the amount of patient data sent to the cloud for processing. In addition, Advantech helps device makers install security patches and OS updates more easily.

In terms of solution stability, Advantech’s technology partnership with Intel has been of particular benefit.

“Intel’s portfolio excels in edge AI use cases and covers the entire range of what our medical device manufacturers need, from low-power processing all the way up through server-class computing,” says Wieborg. “In addition, having a reliable platform with longevity is especially important in this sector. We have many buyers who will only consider Intel solutions because of the stability—and the fact that they know Intel will support these products for many years to come.”

Partnerships as the Future of Medical AI

The availability of regulatory-compliant and flexible medical-grade hardware will help more and more medical equipment manufacturers incorporate AI into their offerings. In turn, this will help fuel a wider AI product ecosystem in which partnerships between medical solutions specialists and hardware providers are key.

Given the wide variety of AI implementations in healthcare settings, this is a huge help to the device builders—who need support from hardware specialists with experience building for different configurations, environmental conditions, regulatory requirements, and computing specifications.

But Advantech says that makers of medical PCs will also benefit greatly from these partnerships:

“We feel privileged to work with so many top medical device manufacturers,” says Wieborg. “The voice of the customer is really guiding our product lines; customers come to us with problems, we come up with solutions, and that’s how we’re all going to continue to succeed in the future.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Frame Grabbers Test Automotive Cameras

When driving a vehicle, a lot rides on the ability to see clearly.

No matter what the ambient conditions—harsh glare, low light, or rain—there’s no room for error when navigating around other vehicles, pedestrians, or obstacles on the road. Thanks to advances in technology, vision solutions embedded in advanced driver assistance systems (ADAS) help human drivers detect these objects.

The vision system in an ADAS-equipped vehicle includes a set of cameras that streams real-time video from the road and the inside of the vehicle. A computer captures the frames from this video stream and feeds them to the vision processor to analyze.

Challenges in Automotive Vision Systems

Despite the important role that vision systems play in ADAS, autonomous driving, and electric vehicles, the industry lacks a consistent set of standards to evaluate them. In addition, traditional vision systems struggle to handle environmental conditions well. Given all that’s riding on effective vision in vehicles, automotive cameras need thorough testing in labs and in production before they’re packaged into ADAS solutions. Such testing is a job for frame grabbers, says Po Yuan, CEO and Founder of EyeCloud, a provider of image processing systems.

In a moving vehicle, a computer captures still frames from video to hand over to the vision processor. The frame grabber executes the same functionality in research and development settings and in preproduction testing of cameras. “The camera is a separate module that is being produced and eventually assembled onto a vehicle. But before this happens, it needs to be calibrated, tested, and quality-controlled,” Yuan says.

Frame Grabber Use Cases

A frame grabber evaluates camera functionality in the lab environment by helping test whether the camera can deliver clear images in low light and other edge conditions.

In the production stage, manufacturers also need to calibrate cameras, making sure they’re in focus and provide non-distorted images. Here, too, frame grabbers help. “The manufacturers connect the camera to our frame grabber and assess the images so they can adjust the camera interactively,” Yuan says.

Burn-in testing in factories, which checks how cameras perform under continuous streaming over long periods, is yet another use case for frame grabbers. Cameras run for up to 144 hours at a time, and frame grabbers verify that the cameras capture images reliably, without frame loss. “If there is frame loss, the camera will be disqualified,” Yuan says.
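One common way to implement such a check, assumed here since the article doesn’t detail EyeCloud’s test software, is to read a rolling frame counter stamped on each frame and look for gaps:

```python
# Toy burn-in check: a gap in a monotonically increasing frame counter
# means one or more dropped frames. The counter stream is an assumption;
# many camera modules embed such a counter in frame metadata.
def count_dropped(frame_counters):
    """Count missing frames implied by gaps in the counter sequence."""
    dropped = 0
    for prev, cur in zip(frame_counters, frame_counters[1:]):
        if cur - prev > 1:
            dropped += cur - prev - 1
    return dropped

def burn_in_passes(frame_counters):
    """A long soak run qualifies only if no frame was lost."""
    return count_dropped(frame_counters) == 0
```

For example, `count_dropped([100, 101, 103, 104])` reports one dropped frame, which under the rule above would disqualify the camera.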

The frame grabber can also help with real-world data collection for algorithm development. In this case, the cameras mount on a car, and the frame grabber synchronously captures data like signs, pedestrians, and bicycles. “Road sign detection and pedestrian detection algorithms need huge amounts of data. In the AI world, data is key and our frame grabbers help with data collection,” Yuan says.

Frame Grabber Real-World Requirements

While frame grabbers provide a lot of utility in production phases of automotive cameras, they must also tackle a few challenges.

For one thing, they need to be portable so they can easily be used for data collection and testing. Second, they need to synchronize with multiple cameras to simulate a real-world vehicle setting. Most ADAS solutions have a few cameras pointed at the road and inward, into the car.

Frame grabbers also need to keep up with advances in automotive camera technology. “Cameras are getting higher in resolution, frame rate, and depth, all of which demand higher bandwidth for the frame grabber to work with,” Yuan says.

The ECFG series from EyeCloud meets these requirements with modular circuits that can support 4-16 channels of video at a time. The series also handles higher data bandwidth requirements that are around the corner. “We make sure we understand where the industry is going and design our solution to meet those evolving requirements,” Yuan says.

An #AI-enabled frame grabber can be applied in #robotics, #surveillance, and a variety of other use cases. @eyecloudai via @insightdottech

Future Developments in Automotive Cameras

Part of that future-forward direction lies in other kinds of cameras, including infrared ones that can detect objects even in edge cases. SWIR (short-wave infrared) is also a contender. That’s why multi-spectral vision systems are coming into play, and why the ECFG series from EyeCloud can accommodate them as well.

EyeCloud is also working to make the frame grabber more intelligent with edge AI. “Intel processors’ system-on-a-chip format facilitates edge AI applications because they combine image processing, the neural compute engine, as well as a CPU in one chip,” Yuan says. An AI-enabled frame grabber can be applied in robotics, surveillance, and a variety of other use cases. Instead of routing every frame from video streams to the vision processor, an intelligent frame grabber can selectively pick only the ones with relevant information. The vision processor doesn’t need endless images of the same paved road; a frame where a pedestrian or animal comes into view is far more useful.
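A production-grade grabber would run a real detector for this (for example, one deployed with OpenVINO), but the selection principle can be sketched with simple frame differencing. Everything below is an illustrative stand-in, not EyeCloud’s implementation.

```python
# Toy relevance filter: forward a frame only if it differs noticeably
# from its predecessor, so static road scenes are dropped and frames
# with new activity (a pedestrian, an animal) pass through.
import cv2

def relevant_frames(capture, min_changed_fraction=0.01):
    """Yield frames from an OpenCV capture that show enough change."""
    ok, prev = capture.read()
    while ok:
        ok, frame = capture.read()
        if not ok:
            break
        gray_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        gray_cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray_cur, gray_prev)
        mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(mask) / mask.size > min_changed_fraction:
            yield frame
        prev = frame
```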

“This is the roadmap that we’re going to take on to make the frame grabber intelligent and also make data collection more relevant with Intel edge AI technology,” Yuan says. “The flexibility and intelligence we can add to this frame grabber with Intel technology makes us really excited about future growth in the market.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.