Addressing the Design Challenges of 5G OpenRAN

The arrival of 5G has captured the attention of industries worldwide, unlocking new possibilities for high-speed connectivity at a massive scale. In sectors like manufacturing and smart cities, for example, 5G allows far-flung facilities to be networked into a unified whole, delivering unprecedented visibility and responsiveness.

But many applications have needs that public 5G networks cannot meet. This is where private 5G networks step into the spotlight. “There is a pressing need for customized infrastructure to fully leverage the capabilities of 5G,” explains Zeljko Loncaric, Market Segment Manager of Infrastructure at congatec, an embedded computer boards and modules provider, pointing out security, real-time reliability, and network flexibility as some of the key requirements.

This growing demand for tailored solutions and the adoption of private 5G networks come at a perfect time, coinciding with the emergence of open standards like OpenRAN. This shift presents a unique opportunity for telecommunications equipment manufacturers (TEMs), who are no longer constrained by markets dominated by a few major players. Instead, OpenRAN’s open interfaces and standards promote vendor diversity—an important strategic focus for TEMs, Loncaric notes.

Opening Up New Possibilities for OpenRAN

Historically, TEMs building 5G solutions that leverage OpenRAN capabilities have had several hurdles to overcome. Specifically:

  • Integrating components from various sources while keeping performance high and costs low.
  • Ensuring robust security. This is a particularly pressing concern for TEMs targeting private 5G networks, which often host high-value data.
  • Designing equipment for harsh environments. (The limited range of 5G radios means that equipment is often deployed deep into the field.)
  • Ensuring solutions can scale effectively to meet the demands of diverse deployments.

“There is a pressing need for customized infrastructure to fully leverage the capabilities of #5G.” – Zeljko Loncaric, @congatecAG via @insightdottech

That’s why congatec developed a solution to provide TEMs with a faster path to market. The conga-HPC/sILH platform is designed to pre-integrate the most complex system elements. The solution includes a backhaul connection to the core network, two RF antenna modules, an Intel® Xeon® D processor, a secure Forward Error Correction (FEC) accelerator, and the full FlexRAN software stack.

According to Loncaric, the technology package is suitable for all types of 5G radio access network configurations. With conga-HPC/sILH, TEMs can focus on their core competencies and keep their specific IP in-house, delivering 5G OpenRAN servers with high levels of trust and design security.

The Role of COM-HPC in Building Robust 5G Infrastructure

The heart of the platform is the COM-HPC Server Size D module, which features an Intel Xeon D processor. This combination offers the performance, efficiency, and security features needed for 5G applications. Notably, selected modules support extreme temperature ranges from -40°C to 85°C, enabling OpenRAN servers to be deployed beyond the confines of air-conditioned server rooms.

The modules plug into Intel’s platform carrier board, which provides a robust and flexible foundation for developing 5G infrastructure. For instance, it supports a wide range of interfaces and acceleration technologies, helping TEMs to streamline the design process.

“The carrier board is a highly flexible reference platform that demonstrates the effectiveness of our offering and provides significant support for TEMs. Combined with our COM-HPC Server module, it enables rapid custom builds that require connections and interfaces not typically found in a RAN server,” says Loncaric.

Enabling Security and Flexibility in Private 5G Networks

To overcome the security concerns of 5G, the platform includes Intel® Software Guard Extensions, which enable secure channel setup and communication between 5G control functions. Built-in crypto acceleration reduces the performance impact of full data encryption and enhances the performance of encryption-intensive workloads.

For precise timing, the platform incorporates Synchronous Ethernet (SyncE) and a Digital Phase-Locked Loop (DPLL) oscillator. These technologies are crucial for synchronizing nodes with the 5G infrastructure.

Together, these technologies allow TEMs to significantly reduce their design effort and accelerate time-to-market. The modular nature of the solution also optimizes ROI and sustainability, as systems can be easily scaled and upgraded with a simple module swap. According to Loncaric, this approach can reduce upgrade costs by up to 50% compared to a full system replacement.

Looking Ahead: The Future of Private 5G Networks and OpenRAN

congatec attributes the success of its platform to its partnership with Intel.

“Telecommunications is a really hard market to access—up until around ten years ago, it was more or less impossible,” Loncaric explains. “By partnering with Intel and through initiatives like the O-RAN Alliance, we were able to enter it step by step. Since then, we’ve released several new solutions—the latest, based on Intel Xeon D, is a good fit for niche applications such as campus networks and industrial environments.”

Looking to the future, congatec plans to develop more solutions that will provide TEMs even higher performance. Beyond that, the company intends to continue its focus on open standards and edge computing expertise.

“We believe our commitment to open standards and our extensive experience in edge computing and industrial applications positions us as a key player in 5G technology across multiple market segments. Through continuous innovation and collaboration with industry-leading partners like Intel, we aim to drive the development of next-generation communication networks, ensuring they continue to meet the evolving needs of modern applications,” says Loncaric.

As the 5G market continues to evolve, solutions like the conga-HPC/sILH COM-HPC platform will play a crucial role in enabling TEMs to meet the diverse and rapidly changing demands of 5G OpenRAN deployments. By providing a flexible, integrated, and powerful foundation, this platform empowers TEMs to innovate faster and deliver the next generation of 5G infrastructure.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Modernizing the Factory with the Industrial Edge

Often when you talk about digital transformation and Industry 4.0, the focus is on technology. But people are the key to change.

As manufacturers adopt modern technologies, challenges they face usually stem more from the mindset and collaboration of those implementing them rather than from the tools themselves, according to Kelly Switt, Senior Director and Global Head of Intelligent Edge Business Development at Red Hat, provider of enterprise open source software solutions.

Manufacturing Operations Rely on Team Relationships

Manufacturing operations rely so heavily on collaborative, adaptable teams and individuals because they involve complex processes that require domain expertise, coordination, troubleshooting, and optimization. Shifting from legacy systems to modern, interconnected platforms, for example, requires a corresponding change in mindset.

The technologies and tools implemented within the factory should empower collaboration and productivity by breaking down silos and removing friction between teams.

“Businesses are a formation of people, and how those people operate the business often emulates system design,” explains Switt. “If you have poor collaboration with your IT counterparts or still experience siloed friction in the relationship, it will manifest in your systems—whether it’s a lack of resiliency or the inability to stay on schedule.”

That’s why Red Hat and Intel collaborated on a modern approach to advancing manufacturing operations and teams. The industrial edge platform is a portfolio of enablement technologies, including Red Hat Device Edge, Ansible Automation Platform, and OpenShift. It also features Intel’s cutting-edge hardware and software stack, including Intel® Edge Controls for Industrial, allowing users to create a holistic solution that meets their specific needs.

“If you have poor collaboration with your #IT counterparts or still experience siloed friction in the relationship, it will manifest in your systems.” – Kelly Switt, @RedHat via @insightdottech

Bridging the Gap with Industrial Automation

A key component of the Red Hat industrial edge platform enables automation of previously manual tasks, one of the first steps toward overcoming cultural challenges. Software automation strategies that enable provisioning, configuring, and updating can also provide a common ground for IT and OT teams to collaborate, and free them up for more critical tasks.

“By automating routine tasks, you can free up the capacity of your staff to focus on more critical aspects of modernization,” Switt explains.
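The provisioning and updating pattern described above rests on idempotency: automation tools like Ansible describe a desired state and act only when the current state differs, so the same play can run safely on every node, every time. The sketch below illustrates that idea in plain Python; the file name, settings, and function name are illustrative assumptions, not part of the Red Hat platform.

```python
import json
import os
import tempfile

# Desired configuration state; values here are purely illustrative.
DESIRED = {"ntp_server": "time.example.com", "log_level": "info"}

def apply_config(path, desired):
    """Bring the config file at `path` to the desired state.

    Returns True if a change was made, False if the file was already
    compliant, mirroring the changed/ok reporting of automation tools.
    """
    current = {}
    if os.path.exists(path):
        with open(path) as f:
            current = json.load(f)
    if current == desired:
        return False  # nothing to do: safe to re-run on every node
    with open(path, "w") as f:
        json.dump(desired, f)
    return True

cfg = os.path.join(tempfile.mkdtemp(), "device.json")
print(apply_config(cfg, DESIRED))  # first run applies the change -> True
print(apply_config(cfg, DESIRED))  # second run is a no-op -> False
```

Because a compliant node is left untouched, routine runs cost nothing, which is what frees staff for the more critical work Switt describes.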

The industrial edge platform helps automate tasks including system development, deployment, management, and maintenance, not only at the server compute level but also at the device and networking level—allowing for more autonomous management of infrastructure.

“You can really create a platform-based strategy around how you think about having more autonomous management of the infrastructure that best supports the productivity of your facility,” says Switt.

Once automation is in place, the next step is modernizing the data centers within the factory. These centers tend to house larger, more critical applications that run the manufacturing processes. Modernizing these systems allows for greater agility and faster changes, which are crucial in today’s fast-paced manufacturing environment.

“Modern technology allows you to have applications with more agility, enabling more frequent updates and faster adaptation to changing needs,” Switt explains. “This not only improves productivity but also enhances the collaboration between IT and OT teams.”

The pharmaceutical industry, for example, requires a high level of supply chain traceability. Modern technology enables organizations to reduce the time needed to implement changes from six months or a year to just 90 days. This acceleration brings significant value both to the management of the plant or factory and to the overall productivity and output of the facility.

In addition, the industrial edge platform delivers a real-time kernel that lowers latency and reduces jitter so applications can run repeatedly with greater reliability.

“Red Hat’s solutions allow you to not only have an autonomous platform but one that is stable, secure, and based on open source so manufacturers can get to an open, interoperable platform with less proprietary hardware,” says Switt.

Future of Manufacturing Enabled by the Industrial Edge

As manufacturers continue to navigate the complexities of Industry 4.0, collaborations like the one between Red Hat and Intel—focused on culture, people, and mindset—are crucial to the success of their efforts.

“Intel is a core collaborator of ours because not only is Intel ubiquitous with running both the public cloud as well as the IT data centers but is, and should continue to be, ubiquitous with running the factory data center or data room facilities,” Switt says.

By breaking down silos, embracing automation, and modernizing infrastructure, manufacturers can unlock the full potential of their operations and pave the way for a more agile, efficient, and innovative future.

“With Red Hat and Intel, we have the technology that enables you to run a better, faster, and more efficient factory. It’s up to manufacturers to decide what their future looks like, how they want to operate, and the level of collaboration and culture change they bring in to do so,” says Switt.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Journey to the Network Edge

The advantages of moving to the network edge are clear: greater speed, enhanced security, and improved user experience. But how does a business actually make that move? What capabilities will best fit the bill and how much should it cost? Is there some kind of Platonic ideal solution out there that a company should search for?

We explore the network edge with CK Chou, Product Manager for IT/OT hardware-solution provider CASwell. He talks about difficulties in transitioning to the network edge, the role of AI there, and how old-school technology can point the way to a valuable solution with just a little creative thinking (Video 1).

Video 1. CASwell’s CK Chou talks about the challenges of moving to the edge and the role of network edge devices on the “insight.tech Talk.” (Source: insight.tech) 

Why are businesses moving to the network edge these days?

If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to the central server. This means faster response and less internal traffic, so it is perfect for things that need instant reactions, like manufacturing, retail, transportation, financial services, et cetera.

Let me say it this way: Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly, because every millisecond counts; you cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading screen on your computer, right? In self-driving scenarios, any delay could mean life or death. This is one example of where edge computing comes in to handle data right at the source and make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. In short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether it’s in your car or your doctor’s office or on the factory floor.

But moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure costs, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost infrastructure performance, improve security, and reduce overall costs, eventually making it worth the effort to overcome all those hurdles.

What capabilities of network-edge devices will help with business success?

It is a tricky question. If I’m talking about my dream edge device, it needs to be small and compact, and packed with multiple connection options like SNA, Wi-Fi, and 5G for different applications. It would also be nice to have a rugged design that could operate in harsh environments and handle a wide range of temperatures, whether users want to install the equipment in freezing mountains or hot deserts. It should also offer powerful processing while consuming little power. And, of course, the most important thing is that the cost of this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? The truth is, companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application. And that’s what CASwell is all about.

We have a product line that can provide a variety of choices—from basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have got options designed for different conditions and requirements so companies can easily find the right edge device without paying for features they don’t really need.

What is the role of AI at the network edge?

Nowadays, AI-model training is done in the cloud, due to its need for massive amounts of data and high computational power. But think about how big an AI data center needs to be. Imagine something the size of a football field filled with dozens of big blocks, and each block is packed with hundreds of servers, all linked together and working nonstop on model training.

A setup like that sounds amazing, but it is too far from our general use cases and not affordable for our customers. Remember: The concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory to our server room—unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.
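The cloud-training, edge-inference split described here can be pictured with a toy example: a tiny classifier whose weights were fitted centrally and then shipped to the device, where each reading is scored locally with no network round trip. The weights, feature layout, and threshold below are illustrative assumptions, not part of any CASwell product.

```python
import math

# Parameters assumed to have been trained centrally ("in the cloud")
# and deployed to the edge device; values are purely illustrative.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.3

def infer(features):
    """Score one sensor reading locally on the edge device."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    score = 1.0 / (1.0 + math.exp(-z))  # logistic activation
    return score > 0.5  # True = flag the reading as anomalous

# Decision latency is just local compute time, not a server round trip.
print(infer([1.0, 0.2, 0.9]))  # -> True
print(infer([0.0, 0.0, 0.0]))  # -> False
```

The heavy lifting (fitting WEIGHTS) stays in the data center; only the cheap forward pass runs at the edge, which is why inference fits on modest, low-power hardware.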

CASwell is right now building a new product line for edge-AI servers. It is designed to bring AI capabilities right from the data center to the edge, giving us the power of AI instantly. It puts AI directly in the hands of those who need it, right when they need it.

How does CASwell help businesses address their network edge challenges?

We saw a trend where edge environments were becoming more challenging than we initially expected. More end users were looking for solutions that could work in both IT and light OT environments. They wanted to install edge equipment not just in the office—with air conditioning and on clean, organized racks—but also in environments like warehouses, factory floors, or even just in cabinets without proper airflow. 

CASwell decided to develop an entry-level desktop product—the CAF-0121—built around the Intel Atom® processor, which offers a great balance of performance and power efficiency. The CAF-0121 can handle a wider temperature range, something like -20°C to 60°C compared to the typical 0°C to 40°C. This small box also provides 2.5-gig support to fulfill basic infrastructure connectivity. Plus, it is compact and fanless, with a passive-cooling design, which is suitable for edge computing applications.

Our goal with this new model was to provide OT-grade specs at an IT-friendly price. This means users could cut down on the resources needed to manage their infrastructure and make deployment much simpler. They could use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. The approach for the CAF-0121 allows businesses to adapt to different environments without needing separate solutions for each scenario, so it is really an exciting product.

What were some of the challenges with creating CAF-0121?

The technology around the thermoelectric module—we call it TEM—is what we rely on for CAF-0121. TEM is already a proven solution for cooling overheating components; it is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These devices work on creating a temperature difference when electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides.

People normally use the cooling capability of the TEM, but we had a different idea: Why not leverage both the cooling and heating capabilities to help our edge devices operate in a wider temperature range? The overall concept is that by leveraging the heating capability of the TEM we can indirectly expand the operation temperature range of the system to a lower degree. And, conversely, by using the cooling capability it can cool down the system when the internal ambient temperature rises to a certain high level. When the room is getting cold, TEM operates as a heater; when a room is getting hot, TEM operates as a cooler.
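The dual-mode idea described here boils down to simple threshold logic: drive the TEM as a heater below a low ambient threshold and as a cooler above a high one, and leave it off in between. The sketch below is illustrative only; the thresholds, mode names, and function are assumptions for the sake of the example, not CAF-0121 firmware.

```python
# Illustrative thresholds; real values would come from component specs.
HEAT_BELOW_C = 0.0    # start heating when internal ambient drops below this
COOL_ABOVE_C = 50.0   # start cooling when internal ambient rises above this

def tem_mode(ambient_c):
    """Pick the TEM drive mode for a given internal ambient temperature.

    'heat' drives current one way so the component side warms; 'cool'
    reverses the current to chill that side; 'off' rides on passive
    cooling alone.
    """
    if ambient_c < HEAT_BELOW_C:
        return "heat"
    if ambient_c > COOL_ABOVE_C:
        return "cool"
    return "off"

print(tem_mode(-15.0))  # cold mountain cabinet -> "heat"
print(tem_mode(25.0))   # ordinary office rack  -> "off"
print(tem_mode(58.0))   # hot, unventilated box -> "cool"
```

Because the current direction sets which side heats and which cools, one module covers both ends of the extended range, which is the know-how the project built on.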

With a TEM, we are no longer limited to the operation temperature range of our individual components, allowing us to expand the temperature range of our equipment beyond what the components could typically allow. With the TEM we can push the temperature boundaries and the device can still maintain reliability.

And with this project we have gained some really valuable know-how, using an old-school technology as an innovative solution to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, not just stick to the traditional way of doing things.

How does CASwell work with technology partners to make its product line possible?

A solid edge computing device should have just the right processing power, be energy efficient and packed in a compact size, with a variety of connection options, and of course have a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we chose the Intel Atom processor for the CAF-0121 project. With the Atom we can provide the right level of performance while still keeping power consumption low. And the Intel LAN controller helps us easily add support for 2.5-gig Ethernet to this box, ensuring compatibility with most infrastructure requirements.

The Atom also has built-in instructions that can accelerate IPsec traffic, making it an excellent choice for security-focused applications. Whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.

If we wanted to further enhance the security, Atom is also integrated with BIOS Guard and Boot Guard to provide a hardware root of trust. So we are not just talking about great performance and efficiency, we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.

Among the various players in this market, only Intel offers a one-stop shop for all these features. Intel doesn’t just provide the hardware but also the driver and firmware support. This level of integration has made the development of the CAF-0121 project so much easier, and it has really shortened our time to market. When you have got the processing power, security features, and even software support all coming from one reliable partner, it certainly streamlines the whole process. It doesn’t just simplify the engineering and development work but also ensures that everything works seamlessly together.

Then the hardware designer—like CASwell—can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win for both us and our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

In the end, our goal is very simple: We aim to set a new standard of edge computing equipment and provide flexible edge solutions to help customers tackle challenges from the cloud and through the network and all the way to the intelligent edge.

Related Content

To learn more about the network edge, listen to The Network Edge Advantage: Achieving Business Success and read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Reverse Proxy Server Advances AI Cybersecurity

AI models rely on constant streams of data to learn and make inferences. That’s what makes them valuable. It’s also what makes them vulnerable. Because AI models are built on data they are exposed to, they are also susceptible to data that has been corrupted, manipulated, or compromised.

Cyberthreats can come from bad actors that fabricate inferences and inject bias into models to disrupt their performance or operation. The same outcome can be produced by Distributed Denial of Service (DDoS) attacks that overwhelm the platforms that models run on (as well as the model itself). These and other threats can subject models and their sensitive data to IP theft, especially if the surrounding infrastructure is not properly secured.

Unfortunately, the rush to implement AI models has resulted in significant security gaps in AI deployment architectures. As companies integrate AI with more business systems and processes, chief information security officers (CISOs) must work to close these gaps and prevent valuable data and IP from being extracted with every inference.

AI Cybersecurity Dilemma for Performance-Seeking CISOs

On a technical level, there is a simple explanation for the lack of security in current-generation AI deployments: performance.

AI model computation is a resource-intensive task and, until very recently, was almost exclusively the domain of compute clusters and supercomputers. That’s no longer the case, with platforms like the octal-core 4th Gen Intel® Xeon® Scalable Processors that power rack servers like the Dell Technologies PowerEdge R760, which is more than capable of efficiently hosting multiple AI model servers simultaneously (Figure 1).

Figure 1. Rack servers like the Dell PowerEdge R760 can host multiple high-performance Intel® OpenVINO toolkit model servers simultaneously. (Source: Dell Technologies)

But whether hosted at the edge or in a data center, AI model servers require most if not all of a platform’s resources. This comes at the expense of functions like security, which is also computationally demanding, almost regardless of the deployment paradigm:

  • Deployment Model 1—Host Processor: Deploying both AI model servers and security like firewalls or encryption/decryption on the same processor pits the workloads in a competition for CPU resources, network bandwidth, and memory. This slows response times, increases latency, and degrades performance.
  • Deployment Model 2—Separate Virtual Machines (VMs): Hosting AI models and security in different VMs on the same host processor can introduce unnecessary overhead, architectural complexity, and ultimately impact system scalability and agility.
  • Deployment Model 3—Same VM: With both workload types hosted in the same VM, model servers and security functions can be exposed to the same vulnerabilities. This can exacerbate data breaches, unauthorized access, and service disruptions.

CISOs need new deployment architectures that provide both the performance scalability that AI models need and the ability to protect the sensitive data and IP residing within them.

Proxy for AI Model Security on COTS Hardware

An alternative would be to host AI model servers and security workloads on different systems altogether. This provides sufficient resources to avoid unwanted latency or performance degradation in AI tasks while also offering physical separation between inferences, security operations, and the AI models themselves.

The challenge then becomes physical footprint and cost.

“Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100.” – @F5 via @insightdottech

Recognizing the opportunity, F5 Networks, Inc., a global leader in application delivery infrastructure, partnered with Intel and Dell, a leading global OEM with an extensive product portfolio, to develop a solution that addresses the requirements above in a single, commercial-off-the-shelf (COTS) system. Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 (Figure 2).

Figure 2. The Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 offloads security operations from a host processor, freeing resources for other workloads like AI training and inferencing. (Source: Intel)

The Intel IPU Adapter E2100 is an infrastructure acceleration card that delivers 200 GbE bandwidth, x16 PCIe 4.0 lanes, and built-in cryptographic accelerators that combine with an advanced packet-processing pipeline to deliver line-rate security. The card’s standard interfaces allow native integration with servers like the PowerEdge R760, and the IPU provides ample compute and memory to host a reverse proxy server like F5’s NGINX Plus.

NGINX Plus, built on the open-source NGINX web server, can be deployed as a reverse proxy server to intercept and decrypt/encrypt traffic going to and from a destination server. This separation not only helps mitigate DDoS attacks but also means cryptographic operations can take place somewhere other than the AI model server host.

The F5 Networks NGINX Plus reverse proxy server provides SSL/TLS encryption as well as a security air gap between unauthenticated inferences and Intel® OpenVINO toolkit model servers running on the R760. In addition to operating as a reverse proxy server, NGINX Plus provides enterprise-grade features such as security controls, load balancing, content caching, application monitoring and management, and more.
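The core reverse-proxy pattern, clients talk only to the proxy while the model server stays behind it, can be sketched in a few lines of stdlib Python. This is a bare illustration of the idea, not F5's implementation: the upstream address and port are placeholder assumptions, and NGINX Plus layers TLS termination, load balancing, caching, and monitoring on top of this basic forwarding loop.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed address of the backend model server; clients never see it.
UPSTREAM = "http://127.0.0.1:9000"

def upstream_url(path):
    """Map an incoming request path to the hidden upstream server URL."""
    return UPSTREAM + path

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The proxy terminates the client connection, then forwards the
        # request; crypto and filtering can happen here, off the model host.
        with urllib.request.urlopen(upstream_url(self.path)) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the public side; the model server is reachable only via us.
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

Running the security functions at this interception point, rather than beside the model server, is what creates the air gap between unauthenticated inferences and the models themselves.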

Streamline AI Model Security. Focus on AI Value.

For all the enthusiasm around AI, there hasn’t been much thought given to potential deployment drawbacks. Any company looking to gain a competitive edge must rapidly integrate and deploy AI solutions in its tech stack. But to avoid buyer’s remorse, it must also be aware of security risks that come with AI adoption.

Running security services on a dedicated IPU not only streamlines deployment of secure AI but also enhances DevSecOps pipelines by creating a distinct separation between AI and security development teams.

Maybe we won’t spend too much time worrying about AI security after all.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Network Edge Advantage: Achieving Business Success

In today’s rapidly evolving technology landscape, businesses increasingly turn to network edge solutions to meet the demands of real-time data processing, enhanced security, and improved user experiences. But deploying these solutions comes with its own set of challenges, including latency issues, bandwidth constraints, and the need for robust infrastructure.

This podcast episode explores the world of network edge computing, and the unique challenges businesses face when deploying these advanced solutions. We discuss the critical features of network edge devices and how AI can help drive efficiency. Additionally, we examine the specific challenges and demands industries encounter and how they can overcome them.


Our Guest: CASwell

Our guest this episode is CK Chou, Product Manager at CASwell, a leading hardware manufacturer for IoT, network, and security apps. CK joined CASwell in 2014 and has since worked to build strong customer relationships by ensuring that CASwell’s solutions meet specific needs and standards.

Podcast Topics

CK answers our questions about:

  • 2:42 – The move to the network edge
  • 6:17 – Network edge devices built for success
  • 11:15 – Moving to AI at the network edge
  • 14:37 – Addressing network edge challenges
  • 17:30 – Overcoming the increased demand
  • 22:37 – Implementing network edge devices
  • 25:32 – Partnering on performance and power

Related Content

To learn more about the network edge, read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re taking on the conversation of the network edge with CK from CASwell. But before we get started, let’s get to know our guest. CK, what can you tell us about yourself and what you do at CASwell?

CK Chou: Hi, Christina; hi, everyone. My name is CK, and I have over 10 years of experience in product management at CASwell. My main focus has been on serving customers in Europe and the Middle East. Over the years my mission has been to build strong relationships with clients across these regions, ensuring that the solutions from CASwell meet their specific needs and standards.

And about CASwell—we originally began as a division dedicated to network-security applications. Over time our expertise and focus grew, leading us to branch out and establish ourselves as a standalone company in 2007. Over the years CASwell has placed a strong emphasis on R&D to stay at the forefront of technology and innovation. However, we were not satisfied with being only a player in networking, so we expanded our business into information and operation applications. I should say that our journey from a small division to an independent company wasn’t just about getting bigger; it was about getting better at what we do.

Nowadays, CASwell is a leading hardware solution provider for the IT and OT industries in Taiwan, specializing in the design, engineering, and manufacturing of not only networking appliances but also industrial equipment, edge computing devices, and advanced edge-AI solutions that meet the demands of modern applications.

Christina Cardoza: Great, and I’m looking forward to digging into some of that hardware. But before we jump into that, I want to start the conversation trying to understand a little bit more of why companies are moving to the network edge. I like how you said in your introduction: you’re trying to stay at the forefront of technology and innovation and get better at what you do. And I think a lot of businesses are trying to do the same, and they look to CASwell to help them along that journey. But why are they moving to the network edge today, and what challenges are they facing on their journey?

CK Chou: If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to a central server. This means faster response and less internal traffic, which makes it perfect for things that need instant reactions, like manufacturing, retail, transportation, financial services, etcetera.

Let me say it in this way. Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly because every millisecond counts, okay? You cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading sandbox when you’re using your computer, right? In self-driving scenarios any delays could mean life or death. This is just an example where edge computing comes in, handling data right at the source to make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

This kind of localized processing is also super important in other areas like health care—which needs instant diagnostic results—or machines in a factory detecting problems. By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. So, in short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether it’s in your car or your doctor’s office or on the factory floor.

But from what I hear from some of our customers, moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure cost, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost infrastructure performance, improve security, and reduce overall cost, eventually making it worth the effort to overcome all those hurdles.

Christina Cardoza: Yeah, absolutely. I can definitely see the need for network edge and edge computing with all the demands of the real-time data processing, like you mentioned—the enhanced security, improving user experiences.

But I feel like a lot of times when we discuss the edge it feels very abstract. We know all of the benefits and why we should be moving there, but how do we move there? Is there a network-edge device, for instance, that is able to help us move to the edge and get all of these benefits? What does that look like?

CK Chou: The challenges I mentioned earlier make moving to the edge seem expensive and complicated. But with reliable edge devices integrated—innovative, dependable, and affordable hardware—companies can overcome these challenges and allocate their limited resources to building and managing their infrastructure, maintaining their data, improving security, or training their staff.

That’s why companies need to work closely with an edge-device provider like CASwell. Our customers can always count on us because we design the right equipment for the right use case and ensure the edge devices are the key to their edge journey, making their transition to the edge smoother and easier. So, at the end of the day, having the right device with the right features is essential, but it only works with the right partner—like CASwell. We support them from the hardware perspective, allowing companies to focus more on their specialization. Each party plays its own role, enabling companies to truly do more in their edge journey.

Christina Cardoza: I know you mentioned obviously it’s important to have the right features and reliable, affordable hardware, and that helps you build and manage infrastructure and maintain that data that’s really important. But can you talk a little bit more about what those features and hardware capabilities look like? When companies are looking for a network-edge device, what type of capabilities are really going to bring them success?

CK Chou: Okay, it is a tricky question for me. If I’m talking about my dream edge device, it needs to be small and compact, also packed with multiple connection options like SNA, Wi-Fi, and 5G for different applications. And it would also be nice to have a rugged design that can operate in a harsh environment and handle a wide range of temperatures if users want to install the equipment in cold mountains or hot deserts. It should also offer powerful processing while consuming low power. And, of course, the most important thing is that the cost for this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? Okay, I can tell you the truth is, companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application, and that’s what CASwell is all about.

We have a product line which can provide a variety of choices, from the basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have got options designed for different conditions and requirements. So, with the right partner, companies can easily find the right edge device without paying for features they don’t really need.

Moving to edge computing certainly costs a lot, so we need to do it smartly and efficiently. The idea is to ensure that every edge player can get exactly what they need to optimize their operations and stay ahead of the game. So, sorry that there’s no certain answer to your question here. In my opinion, if an edge device can offer the right features and the right capabilities at an affordable cost for the specific use case, then it’s exactly the good edge device we are looking for.

Christina Cardoza: Yeah, absolutely. No, I love that businesses or companies, they don’t necessarily need an all-in-one box. I think so many times the businesses are focused on finding something that is cost effective that tries to meet all their needs, and they sort of lose sight of what their needs actually are and how a device can help them and the benefits in the long run. So, that’s definitely great, and I want to get into how partnerships work with CASwell, as well as the different product lines that you do have a little bit deeper.

But before we get there I’m a little curious, because obviously when we talk about edge today, AI is so closely related to it. AI at the edge is a term that’s going around these days, and so I’m curious what the role here is at the network edge, especially when we’re talking about network-edge devices.

CK Chou: We know that nowadays AI-model training is done in the cloud due to its need for massive amounts of data and high computational power. If you do a quick search online, you’ll find lots of pictures showing what an AI factory or AI data center needs to be. Imagine something the size of a football field, filled with dozens of big blocks, each block packed with hundreds of servers, all linked together working nonstop on model training.

I agree that such an AI server sounds amazing, but this is too far from our general use case and not affordable for our customers. As we talked about earlier, the concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So, if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory to our server room—unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices, like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.

Building on this trend, edge-AI servers are coming into the picture at CASwell by integrating AI with general compute capability—we often use a GPU engine here. This edge server can handle basic AI calculations on top of our existing hardware. This means faster decision-making and the ability to use AI-driven insights in real time, whether it’s for cybersecurity, small factories, or other edge applications.

CASwell is now building a new product line for edge-AI servers designed to bring AI capabilities right from the data center to the edge, giving us the power of AI instantly, and it puts AI directly in the hands of those who need it and right when they need it.

Christina Cardoza: So, tell me a little bit more about that product line or the other products that CASwell offers. You mentioned that you have a whole suite of tools to help businesses depending on what their needs are, their demands, and what they’re trying to get. So, how is CASwell helping these businesses address their network-edge challenges and demands?

CK Chou: I can introduce a model, the CAF-0121. The CAF-0121 is an interesting entry-level desktop product from CASwell, built around Intel’s new-generation Atom® processor, which offers a great balance of performance and power efficiency. This small box can also provide 2.5 gig support to fulfill basic infrastructure connectivity, plus a compact, fanless passive-cooling design, which makes it suitable for edge computing applications.

But we can see a trend where edge environments are becoming more challenging than we initially expected. End users want to install edge equipment not just in office space with air conditioning or on clean, organized racks, but also in OT environments like warehouses, factory floors, and even cabinets without proper airflow. The line between IT and OT is becoming more blurred, and more users are looking for solutions that can work in both IT and light OT environments.

As a compromise, CASwell decided to develop the CAF-0121 to handle a wider temperature range—from the typical 0º–40º up to something like -20º–60º. Our goal with this new model is to provide OT-grade specs at an IT-friendly price. This means users can cut down on the resources needed to manage their infrastructure and make deployment much simpler. They can use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. So the approach for the CAF-0121 allows businesses to adapt to different environments without needing separate solutions for each scenario. It’s really an exciting product.

Christina Cardoza: Yeah, that’s great that you developed the CAF-0121 to help businesses in all of their needs. It occurs to me as we’re talking about this, the different temperature ranges that they need to meet, the cost ranges, that not only are businesses having challenges, but sometimes it can be challenging for partners like CASwell to create these solutions that meet their demand.

So, I’m just curious if there’s any insight that you can provide when developing this product, if you guys had any challenges to meet all of these demands and how you were able to overcome them?

CK Chou: The technology around the thermoelectric module—we call it TEM—is the one we are relying on for CAF-0121. TEM is already a proven solution for cooling overheating components. It is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These slim devices work by creating a temperature difference when an electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides. And of course the TEM does not run on its own. It is controlled by a microcontroller and a thermal sensor that monitors the temperature inside the device. The firmware that we have programmed into the microcontroller takes those temperature readings and decides when to turn the TEM on and how much current to send through.

We have gone through countless trials and adjustments with the firmware settings to ensure our equipment stays in the ideal temperature range. And we also had to watch out for condensation, because if a TEM cools down too quickly, it can cause moisture to form on the module surface. And if the moisture gets onto the circuit board, it could cause serious damage. So an appropriate liquid-isolation solution between the moisture and the circuit board is also necessary.

While people normally use the cooling capability of the TEM, we had a different idea: why not leverage both the cooling and heating capabilities to help our edge device operate in a wider temperature range? So the overall concept is that by leveraging the heating capability of the TEM, we can indirectly extend the operating temperature range of the system to a lower degree. And, conversely, by using the cooling capability it can cool down the system when the internal ambient temperature rises to a certain high level.

Let me say it in a simple way. When the room is getting cold, the TEM operates as a heater; and when the room is getting hot, the TEM operates as a cooler. With a TEM, we are no longer limited to the operating temperature range of the individual components we have selected. It helps us bridge the gap and allows us to expand the temperature range of our equipment beyond what the components could typically allow. This means we can push the temperature boundaries by using the TEM and the device can still maintain reliability.
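[Editor’s note: The heat-or-cool decision CK describes can be sketched in a few lines of control logic. The snippet below is a simplified illustration, not CASwell’s actual firmware; the thresholds, hysteresis band, and function name are all hypothetical.]

```python
# Simplified sketch of a bidirectional TEM (thermoelectric module) controller.
# All thresholds and names are hypothetical; real firmware would run on a
# microcontroller and drive the TEM current through a DAC or PWM output.

HEAT_BELOW_C = 0.0    # start heating when internal ambient drops below this
COOL_ABOVE_C = 50.0   # start cooling when internal ambient rises above this
HYSTERESIS_C = 5.0    # dead band to avoid rapid on/off switching

def tem_action(temp_c, current_mode="off"):
    """Decide the TEM drive mode from a thermal-sensor reading.

    The polarity of the drive current selects heating vs. cooling; the
    hysteresis band keeps the module from toggling on every reading,
    which also helps avoid the rapid cooling that causes condensation.
    """
    if temp_c < HEAT_BELOW_C:
        return "heat"
    if temp_c > COOL_ABOVE_C:
        return "cool"
    # Inside the dead band: keep heating/cooling until we are clear of the
    # threshold by the hysteresis margin, then switch off.
    if current_mode == "heat" and temp_c < HEAT_BELOW_C + HYSTERESIS_C:
        return "heat"
    if current_mode == "cool" and temp_c > COOL_ABOVE_C - HYSTERESIS_C:
        return "cool"
    return "off"
```

In this sketch the TEM acts as a heater when the enclosure is cold and as a cooler when it is hot, indirectly widening the operating range of standard commercial components beyond their individual ratings.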

And some people might think, why don’t we just use industrial-grade components that support a wider temperature range and make our life easier? The reality is that those wide-temp components can sometimes cost twice as much as standard commercial ones, plus the typical chassis designed for this case is usually large and heavy. And then of course the most important reason is that if we build our equipment just like everyone else, why would customers choose us over the competition? If that were the case, the CAF-0121 would just end up being another costly device with bulky thermal fans designed to support wide temperature ranges, and this is not what we want.

That’s why we have put a lot of effort into studying the characteristics of the TEM more closely and focusing on selecting the right thermal-conductivity materials, fine-tuning our firmware settings, and testing our device in temperature-control chambers day and night. Our goal is to redefine what edge computing hardware can be by offering solutions that are adaptable to various temperature environments, compact and lightweight, and also still being competitively priced.

Christina Cardoza: Yeah, it’s amazing to hear those different wide ranges of temperature environments you were mentioning in cars and refrigerators, so I can see the importance of making sure that it’s consistently reliable and it provides that performance.

So, do you have any customers that have actually been using CAF-0121 and anything you can share with how they’re using it or in what type of solutions it is in?

CK Chou: This box is going into mass production in October this year, which is next month, and we have already got purchase orders for a few thousand units from a major European customer focused on cybersecurity applications, which is planning to use this device in small offices, warehouses, and possibly outdoor cabinets for electric-vehicle charging stations that need wider temperature support. This really highlights the advantage of the CAF-0121. The customer can use it across both IT and OT applications without needing separate solutions for different operating temperature conditions, and of course it saves customers from having to spend extra money.

We also sent samples to around seven to eight potential customers across various industries, including cybersecurity, SD-WAN, manufacturing, and telecom companies for instant traffic management. The feedback has been fantastic. Everyone loves the competitive price, which makes our device a great deal. The compact size is another big win, because it can fit into tight spaces and helps lower shipping costs. It also reduces the carbon footprint.

You know, in today’s market, pricing is a huge factor. We need to deliver cost-effective solutions but cannot compromise on performance and flexibility. So it’s clear that our approach is hitting the mark for customers who need reliable and scalable edge solutions that don’t break the bank. The excitement we are seeing from these industries really proves that we are on the right track, and the CAF-0121 is exactly the kind of solution that can meet their needs.

Christina Cardoza: I can definitely see why the solution needs to be smart and compact, but then also fast and reliable, high performance. So, I’m curious how you actually make that happen. And I should mention “insight.tech Talk” and insight.tech as a whole, we are sponsored by Intel, but I know Intel has a lot of processors that make these devices possible, that make them be able to run fast in these different environments and in these small form factors. So, I want to hear a little bit more about how you work with technology partners like Intel in making your product line possible.

CK Chou: As we discussed earlier, a solid edge computing device should have just the right processing power packed into a compact size, a variety of connection options, energy efficiency, and of course a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we have chosen the Intel Atom processor for this project. With the Atom we can provide the right level of performance and still keep power consumption low. And thanks to the Intel LAN controller, we can easily add support for 2.5 gig Ethernet to this box to ensure compatibility with most infrastructure requirements and more. The Atom has built-in instructions that can accelerate IPsec traffic, making it an excellent choice for security-focused applications. So, whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.

And if we want to further enhance the security, the Atom also integrates BIOS Guard and Boot Guard to provide a hardware root of trust. With these two guards we are not just talking about great performance and efficiency; we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.

I can say that only Intel offers a one-stop shop for all these features among the various players in this market. They don’t just provide the hardware, but also the driver and firmware support. This level of integration has made the development of the CAF-0121 project so much easier, and it has really shortened our time-to-market. When you have got the processing power, security features, and even software support all coming from one reliable partner, Intel, it certainly streamlines the whole process. This not only simplifies the engineering and development work but also ensures everything works seamlessly together.

So, with Intel’s comprehensive support, a hardware designer like CASwell can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win for both us and our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

Christina Cardoza: Absolutely; that’s great to hear. And I’m sure—we kept talking about in this conversation making things more cost effective, more affordable, so I’m sure being able to leverage the technology expertise or the technology processor and other elements from a partner like Intel, that helps you be able to focus on your sweet spot and not have to build things from scratch and make things more expensive than they need to be. So, great to hear how you’re using all of that different technology.

It’s been a great conversation. You’ve really been able to take a technical topic and make it more digestible and understandable. Unfortunately, we are running out of time, but before we go I just want to throw it back to you one last time, if you have any final thoughts or key takeaways you want to leave our listeners with today.

CK Chou: I started working at CASwell 10 years ago, and things were pretty different back then. At that time most of the processing power was centralized. Companies were all about making their servers super powerful, giving them fast internet connections for gathering all the data from the edge. Servers were packed with multiple features to handle every use case you could imagine.

Times have changed. It’s all about instant processing and real-time AI calculations. Businesses need to make quick decisions right at the source of the data instead of sending everything back to the central server. That’s why edge computing has become such a big deal. It lets companies process data on the spot without any delay.

But when all the network players are shifting toward edge solutions, the real challenge is: how do we make our equipment different and better than everyone else’s? So with this project, the CAF-0121, we have gained some really valuable know-how using an old-school technology as an innovative thermal solution for edge equipment, and tried to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, and not just stick to the traditional way of doing things.

Also, thanks to the support from Intel for their edge solutions, including edge-optimized processors—which build in deep learning–inference capabilities—various LAN options for different connectivity needs, and of course all the documentation for integration, plus driver and firmware support. This collaboration has really helped us push our designs to the next level.

Finally, our goal is very simple: aiming to set a new standard of edge computing equipment and providing flexible edge solutions to help customers tackle challenges from the cloud and through the network and all the way to the intelligent edge.

Christina Cardoza: Well, I can’t wait to see what else CASwell does in this space—as well as the CAF-0121 when that comes—different market solutions that companies are going to be leveraging this for. I invite all of our listeners to visit the CASwell website, contact them, see how they can help you in all of your edge and network-edge needs. As well as visit insight.tech as we continue to cover partners like CASwell and how they’re innovating in this space.

So, I want to thank you again for joining us today, CK, as well as our listeners for tuning in. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Intel® Xeon® 6 Processors Power AI Everywhere

Organizations worldwide deploy AI to increase operational efficiency and improve their competitive standing in the market. We talk to Intel’s Mike Masci, Vice President of Product Management, Network & Edge Group, and Ryan Tabrah, Vice President & General Manager, Intel Xeon and Compute Products, about the new Intel® Xeon® 6 Processors. Mike and Ryan discuss key advancements that power the seamless and scalable infrastructure required for running AI everywhere—from the data center to the edge—in a more sustainable way.

Why is the launch of the Intel Xeon 6 Processors so important to Intel, your partners, and customers?

Ryan Tabrah: The launch is a culmination of many things, including getting back to our roots of delivering technology starting from the fabrication process to enable the AI data center of the future. I think Intel Xeon 6 hits at a perfect time for our customers to continue to innovate with their solutions and build out their data centers in a way that wasn’t possible before. With Intel Xeon 6 processors, E-cores are optimized for the best performance per watt, while the P-cores bring the best per-core performance for compute-intensive workloads that are pervasive in the data centers of today.

Mike Masci: We see Xeon 6 not just as another upgrade, but as a necessity for AI-driven compute infrastructure. The existing data center does not have the performance-per-watt characteristics that allow it to scale for the needs of an AI-driven era. So whether it be networks needing to process huge amounts of data from edge AI to cloud AI, these processors do so in a more efficient and performant way. And within a data center, Xeon 6 enables the infrastructure to support the performance needs of AI while being able to scale linearly.

The consistency of the Xeon 6 platform from edge to cloud and the fact that it can really scale from the very high end to more cost- and power-focused, lower-end products is what developers want. They want an extremely seamless experience where there is no need to mix and match different architectures and systems, because anything that slows them down or creates friction effectively is less time spent on developing AI technology.

Intel Xeon 6 is the first Intel Xeon with efficient cores and performance cores. What are some examples of their different workloads and relevant use cases?

Mike Masci: First, efficient cores are designed and built for data center–class workloads and are highly performant at optimized density and power levels. This is a huge advantage for our customers in terms of composability and the ability to partition the right product for the right workload in the right location, without having to incur the complexity and expense of both managing and deploying.

It’s becoming the norm to deploy the same type of workloads at the network edge that are running deep into the data center. People want the same infrastructure back and forth, so it enables them to deploy faster and easier, and save money in the long run.

The most important workloads are cloud native. And that’s where the Intel Xeon 6 E-cores shine. As we think about use cases that take advantage of that, on the network and edge side, the 5G wireless core is one of our most important segments. Where in prior generations it was fixed-function, proprietary hardware, these companies have adopted the principles behind NFV (Network Functions Virtualization) and SDN (Software Defined Networking) and are now moving toward cloud-native technology. This is where the multi-thread performance per-watt optimized piece of Intel Xeon 6 processors is extremely important.

As we look at Intel Xeon 6 with P-cores for other edge applications, customers are very excited about Intel® Advanced Matrix Extensions (Intel® AMX) technology. Specifically, its specialized vector ISA instructions, inherent in the performance cores, allow them to do lightweight inference on the edge where you might not have the power budget for large-scale GPUs that are typical of training clusters. And the beauty of AMX is it’s seamless from a software developer standpoint, and with tools like OpenVINO and our AI Suites, they can take advantage of AMX without having to know how to program to a specific ISA.

Ryan Tabrah: The reality is that, especially at the edge, customers can’t put in some of the more power-hungry or space-hungry accelerators, and so you fall back on the more dense solutions that are already integrated into the Intel Xeon 6 performance core family.

Video is another good use case example. You don’t make money until you can effortlessly scale and pull videos out and push them to the end user. That’s one reason why we focused on rack consolidation for video workloads. It’s something like three-to-one rack consolidation over previous generations for the same number of video streams at the same performance. It’s better performance with better energy efficiency in your data center, being able to serve more clients with fewer machines and greater density. And that same infrastructure can then be pushed out to your 5G networks, to the edge of your network where you’re caching videos and deploying them to end users.

Can you talk about the Intel Xeon 6 in the context of a specific vertical and use case?

Mike Masci: Take healthcare, where you need a massive amount of data to train medical image models. In order to have actionable data and insights, you need to train the model in the cloud and run it effectively at the edge. You need to run things like RAG (Retrieval Augmented Generation) to make sure the model is doing what it’s supposed to do, especially in the domain of assisting with diagnosis, for example. So what happens when you need to retrain the model? Edge machines will send more data to the cloud, where it gets retrained, and then has to get proliferated back to those edge machines. That whole process for a developer in DevOps and MLOps is an entire discipline, and it’s probably the most important discipline of AI today.

We think the real value of AI will be meaningfully unlocked when you can train models, deploy them at the edge, and then have the edge feed data back so the models can be retrained in the cloud. And having them on a scalable system matters a lot to developers.

Ryan Tabrah: Also, healthcare facilities around the world have a lot of older code and older applications running on kernels they don’t want to upgrade or otherwise touch. They want to be able to move those workloads, maybe even containerize them, and put them on a system they know will just run, without having to touch a thing. We enable them with open-source tools to update parts of their infrastructure and to build new data centers that connect the future with their older application base.

And that’s where the magic really happens: someone doesn’t have to start from ground zero. Healthcare institutions have all this old data and old applications, and then they’re being pushed to go do all these new things. And that’s back to Mike’s earlier comment: with a consistent platform underneath, from the edge to the cloud where you’re doing development, even to your PC, they just don’t have to worry about it.

What are the sustainability aspects that Xeon 6 can bring to your customers?

Mike Masci: The performance-per-watt improvements across some of our network and edge workloads are clear. It’s 3x performance per watt versus 2nd Gen Intel® Xeon® Scalable Processors. Simply translated, if you get 3x performance per watt, effectively you can cut the number of servers you need to one-third. That doesn’t just save you CPU power; it saves you the power of the entire system, whether it be the switches or the power supply of the rack itself or any of the peripherals around it.
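The consolidation arithmetic behind that claim is easy to sketch. This is a back-of-the-envelope illustration, not an Intel sizing tool: it assumes a fixed total workload, identical per-server power budgets, and that the quoted performance-per-watt gain translates directly into throughput per server.

```python
import math

# Back-of-the-envelope server consolidation from a perf/watt gain.
# Assumes a fixed total workload and identical per-server power budgets.
def servers_needed(current_servers: int, perf_per_watt_gain: float) -> int:
    """Servers required after the improvement, for the same total workload."""
    return math.ceil(current_servers / perf_per_watt_gain)

# With the 3x gain quoted above, 300 servers' worth of work fits on 100:
print(servers_needed(300, 3.0))  # -> 100
```

The same arithmetic applies to the rack-consolidation and subscriber-density figures elsewhere in the interview, since all of them reduce to work divided by per-machine capability.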

And it’s our mandate as Intel to drive that type of sustainability mechanism, because in large part the CPU performance per watt dictates the choices that people make in terms of deploying overall hardware.

A great example is the work we’ve done with Ericsson, a leading software vendor in the 5G core space. In their own real-world testing on UPF, the user plane function of the 5G core, they saw 2.7x better performance per watt versus the previous generation. Even more, in the control plane with 2 million subscribers, Ericsson supported the same number of subscribers with 60% less power than the prior generation. This comes back to performance per watt and sustainability. But it is also about significant OpEx savings and doing a lot of good for the world as well. With Ericsson, we are proving it’s not just possible; it’s happening in reality today.

In this domain we have our infrastructure power manager, which allows for dynamically programming CPU power and performance based on actual usage. For example, when the load is low, the CPUs power themselves down. Underlying that, the entire product line has huge improvements in what we call load-line performance. Most servers today are not run at full utilization all the time. Intel CPUs like the Intel Xeon 6 do a great job of powering down to align with lower-utilization scenarios, which again lowers overall power need—improving platform sustainability.

This seems fundamental, but it’s harder to do than you would think. You need to optimize at an operating-system level to be able to take advantage of those power states. You need to make sure that you have the right quality of service, SLA, and uptime, which is a huge deal.

Ryan Tabrah: The efforts we make across the board—in our fabrication, our validation labs, and our supply chain that feeds all our manufacturing—demonstrate our leadership in sustainability. When a customer knows they’re using Intel silicon, they know that when it was born or tested or validated or created, it was done in the most sustainable way. We’re also continuing to drive leadership in different parts of the world around reuse of water and other things that give back to the environment as we build products.

Intel Xeon 6 offers our customers the opportunity to meet their sustainability goals as well. With the high core counts and efficiency that Intel Xeon 6 brings, our customers can look to replace aging servers in their data center and consolidate to fewer servers that require less energy and floor space.

Let’s touch on data security and Intel Xeon 6 enhancements that make it easier for developers to build more secure solutions.

Mike Masci: As we look at security enhancements, which are paramount, especially on the network and edge, bringing our SGX and TDX technologies was a big addition. But bringing that technology to maturity across the security ecosystem is extremely important for customers, especially in an AI-driven world. You need to have model security. You need to be able to have secure enclaves if you’re going to run multi-tenancy, for example, which is becoming extremely important in a cloud-native-driven world. And overall, we really see that maturity of security technologies on Intel Xeon 6 being a differentiator.

Ryan Tabrah: We built Intel Xeon 6 and the platform with security as the foundational building block from the ground up. It’s what we’ve been doing for several generations of Xeon, and we’re making confidential computing as easy and fundamental as possible in the partner ecosystem. With Intel Xeon 6 we are introducing new advances in quantum resistance and platform hardening to enable customers to meet their business goals with security, privacy, and compliance.

Is there anything that you’d like to add in closing?

Mike Masci: Intel Xeon 6 is positioned exactly where it needs to be for AI at the edge and in the network. And we think the idea of making an easy, frictionless platform that also serves multiple workloads easily with composability is a home run. To me that is the key message of Intel Xeon 6. It’s seamless and scalable, so you can have the same application running on the edge that you have in the data center without worrying about what hardware it’s running on.

Ryan Tabrah: I agree. Especially in different environments and areas where people are just fundamentally running out of power in their data centers, whether it’s just because they can’t build them fast enough or there are new restrictions and clean energy requirements. We have the solutions in place from their edge to their data centers that just make it super easy for them to see the benefits.

And the best validation, I think, is the feedback from customers. They want more of it. They want to do more with us. They want to help us not only ramp up the processors as quickly as possible, but then build the next generation as quickly as possible, too. Because they’re excited that Intel is taking a leadership position in key critical parts of telco, edge buildout, infrastructure buildout, and data center, and we are excited to be leading with them.

 

Edited by Christina Cardoza, Editorial Director for insight.tech.

Technology Partnerships Pave the Way to Business Solutions

Many enterprises are eager to adopt the latest technologies, which can help them supercharge efficiency and light the way to better products and services. Innovative solutions are emerging rapidly, offering early adopters an attractive competitive edge. Yet deploying fully integrated solutions is so complicated and time-consuming that many organizations give up after initial trials.

Working with an experienced technology partner like a solutions aggregator can ease frustrations of technology adoption and pave the way to successful deployment. A knowledgeable aggregator can offer end-to-end help specifically designed to accommodate a company’s existing infrastructure and future goals. Some can open the door to a worldwide network of partners and systems integrators. By overseeing the entire process of solution selection, integration, deployment, and scaling, an expert aggregator can proactively remove stumbling blocks and enable companies to get the right solutions up and running quickly at locations across the globe. 

Technology Integration Roadblocks

Enterprises struggle to incorporate new technology for several reasons. Solutions usually require a mix of hardware and software components that must work together seamlessly and fit—or be made to fit—existing infrastructure. Since many large companies use different combinations of legacy and newer technology in different locations, assessing interoperability is a complex process. Multinational firms must also consider regional technology standards and regulations.

“If you’re an IT leader with dozens of objectives on your desk, the last thing you want to do is become a general contractor for every solution,” says Matt Powers, Vice President of Technology and Support Services at Wesco, a leading global supply chain solutions provider. “Engineering the design and choosing multiple contractors for each project becomes an elaborate exercise.”

And during that exercise, the connective infrastructure may shift, adding further complications. “You cannot imagine how fast solutions are evolving,” Powers says. “Innovations are constantly changing the interdependencies between technologies.”

Another hurdle is scaling. Companies often test potential solutions with encouraging results, only to be disappointed when they try to extend deployments.

“You see that a lot, especially with IoT solutions,” Powers says. “Companies will tell us, ‘We’re running our proof of concept (POC) and seeing the results and value we want, but now how do we scale this solution across our global enterprise?’ This is a major challenge for global customers as they need to access and localize technology for different regions. Additionally, they need to identify and work with deployment partners, such as integrators and contractors, to ensure the solution is implemented effectively.”

Technology Partnerships Deliver Innovative Solutions

With more than 100 years of experience as a solutions aggregator and distributor, Wesco can help a wide range of enterprises—including manufacturers, utilities, data centers, retail, and hospitality companies—avoid implementation problems and deploy the right solutions efficiently. The process begins with obtaining a thorough understanding of an organization’s needs.

“What we do differently from other companies is work very closely with stakeholders to understand their particular challenges and assess their opportunities for adding efficiencies or gaining return on their investments,” Powers says. “Once we do that, we can help lead them to the right ecosystem of solution providers and integrators.” To this point, Wesco’s vetted partner network includes more than 50,000 suppliers of hardware, software, and cloud solutions, and integrator partners across the globe.

“What we do differently from other companies is work very closely with stakeholders to understand their particular challenges and assess their opportunities for adding efficiencies or gaining return on their investments.” – Matt Powers @wescocorp via @insightdottech

“The number-one quality we look for in technology partners is their capacity for innovation,” Powers says. “Intel brings us a wide breadth of leading technologies, and the open architecture of its products allows our solutions providers and independent software vendors (ISVs) to develop platforms a variety of end users can access.”

Technology Integration: A Win-Win for Customers and Providers

Wesco strives to be a trusted advisor—suggesting the components, solutions, and partners that work best for each company’s unique environment. WaitTime, an ISV that builds crowd analytics solutions, is one example of how Wesco deploys complete solutions for its customers—from the network edge to the cloud.

WaitTime software applies Edge AI to computer vision cameras, providing information like capacity, crowd density, and shopper insights to venue operators. The software—powered by Intel—is optimized to process data on-site and provide alerts in near-real time.

With WaitTime, companies can catch and solve crowd problems sooner, pre-empting potential hazards and dispatching employees to chokepoints before problems occur. They can also learn where guests or shoppers spend their time, or which areas would benefit from wider pathways, better wayfinding, or other improvements. Making these changes can lead to higher revenue at shops or concession stands.

While WaitTime is simple to use once it’s set up, deployment involves far more than installing the platform. “The software is one piece of the technology. We can bring the other hardware and installation partners together to build an end-to-end solution,” Powers says.

What kind of providers? That depends on the organization.

Companies may or may not be able to upgrade existing security cameras with computer vision. And they must find networks and hardware that can reliably transmit and process enormous volumes of information while meeting all local security and privacy regulations.

These are just a few pieces of the puzzle that organizations must connect before deployment. Wesco can help them make sound decisions and select the right contractors and systems integrators to build, harmonize, and scale all elements of the solution according to their needs.

Technology and Experience Accelerate Business Success

As technology change accelerates, cutting-edge solutions become increasingly important to success, Powers says. “Innovation is rippling across industries quickly, and competition is not slowing down. By understanding how new technologies can help the business and how to deploy them at scale, enterprises can continue to thrive as new capabilities emerge.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

3D LiDAR Delivers Spatial Intelligence

Picture the last time you were at a concert or a similar large event in the heart of a busy city. You probably struggled to find parking or went through security clearance hassles at the gate. And it was not easy to battle lines at the concession stand or the restroom. Eliminating these annoyances would dramatically improve visitor experiences. It would help event organizers, too.

It’s why airports, city governments, and entertainment venues bet on LiDAR (Light Detection and Ranging), a pulsed laser-based technology especially suited to delivering spatial intelligence. LiDAR delivers information not just on the numbers of people and vehicles but also on their flow and interactions. This means that organizers can maintain security and spot and alleviate bottlenecks in real time.

Advantages of 3D LiDAR

“At large infrastructure sites and events, security and crowd management are not easy, but they’re jobs LiDAR is especially good at,” says Raul Bravo, President and Founder of Outsight, a leader in 3D LiDAR solutions.

“While most people might think of CCTV and IP video cameras when it comes to monitoring devices, their two-dimensional capabilities are limited for tracking a three-dimensional physical world,” Bravo says. “Unlike traditional computer vision, LiDAR can’t tell if a person is wearing a red shirt or green, but it knows that person’s speed or location—while delivering data as a 3D capture.”

Because of LiDAR’s capabilities, digital twins have been using the technology to obtain data about the physical world for a while now. “What is emerging is using LiDAR technology to not only map the static physical world but to digitize the real-time movement of people and vehicles,” Bravo says.
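As a rough illustration of what digitizing real-time movement means (a hypothetical sketch, not Outsight’s actual pipeline), a LiDAR tracker that outputs a 3D centroid per object per frame can turn positions into speeds directly. The 20 Hz update rate is the figure cited in this article; everything else is assumed for illustration.

```python
import math

FRAME_RATE_HZ = 20.0  # update rate cited in the article

def speed_m_per_s(p_prev, p_curr, frame_rate=FRAME_RATE_HZ):
    """Estimate speed from two consecutive 3D centroids, in meters/second."""
    step = math.dist(p_prev, p_curr)  # meters moved between frames
    return step * frame_rate          # meters/frame -> meters/second

# A pedestrian whose centroid moves 0.07 m between frames walks ~1.4 m/s.
print(round(speed_m_per_s((1.0, 2.0, 0.0), (1.07, 2.0, 0.0)), 2))  # -> 1.4
```

Speed and location derived this way are exactly the anonymous attributes Bravo describes: no image is ever involved, only distances.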

Also reassuring to organizations is that LiDAR is a natively anonymous solution. Because LiDAR does not capture images but only the distances of things, privacy is baked in by definition. Monitoring crowds without capturing people’s pictures and maintaining privacy is key to meeting a host of governmental regulations.

Because of LiDAR’s capabilities, #DigitalTwins have been using the #technology to obtain #data about the physical world for a while now. @Outsight_tech via @insightdottech

3D LiDAR Data Processing Challenges

While LiDAR has many advantages, processing the resulting data is not easy. Plugging 3D spatial intelligence data into traditional computer vision techniques delivers poor outcomes. Instead, “you have to create specific algorithms and techniques that tackle this specific kind of problem,” Bravo says.

Also challenging is the sheer volume of data that LiDAR generates. “When we deploy LiDAR at some of the biggest airports in the world, we have hundreds of LiDAR units at the same time,” Bravo says. “The data from each is the equivalent of a hundred people streaming Netflix.”

The diversity of designs, models, and manufacturers in the LiDAR space is also a problem. The Outsight platform solves all these challenges and works with any LiDAR manufacturer or model. The solution develops living digital twins, feeding information about the physical world at a rapid-enough clip—20 times per second—to deliver insights in real time. These insights can be routed to the right person in the form of alerts and alarms.
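To put those data volumes in perspective, a quick back-of-the-envelope calculation: taking the article’s hundred-Netflix-streams-per-sensor equivalence and assuming roughly 5 Mbit/s per HD stream (an illustrative figure, not a measured one), a large deployment quickly reaches data-center-scale bandwidth.

```python
STREAMS_PER_LIDAR = 100   # per-sensor equivalence cited in the article
MBIT_PER_STREAM = 5       # assumed bitrate of one HD video stream

def aggregate_gbit_per_s(lidar_units: int) -> float:
    """Total inbound data rate for a fleet of LiDAR sensors, in Gbit/s."""
    return lidar_units * STREAMS_PER_LIDAR * MBIT_PER_STREAM / 1000

print(aggregate_gbit_per_s(200))  # 200 units -> 100.0 Gbit/s
```

Numbers at this scale are why the platform must process point clouds efficiently rather than ship raw data to a central server.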

Visual-Spatial Intelligence Use Cases

The problems that Outsight can help resolve apply to smart cities, transportation hubs, and international events. If too many people are crowded in airport baggage check-in areas, the solution can alert officials downstream of potential traffic jams in the security lines, which they can then staff for.

For example, the city of Bellevue, Washington, uses Outsight’s LiDAR solution to detect near-miss situations at intersections, when vehicles get too close to bicycles or pedestrians. LiDAR is especially well-equipped to capture such incidents at night. The information has helped the city take more proactive measures, such as clearer lane markings, to meet its goal of eliminating traffic fatalities and severe injuries by 2030. Outsight LiDAR can also help smart cities address traffic flow problems in real time. For example, if a vehicle uses the wrong lane for merging, a flashing light can alert the driver to remedy the mistake.

When managing the physical flow of people and vehicles on a massive scale, you have to ensure a smooth visitor experience and operational excellence. Key performance indicators like the length of a ticketing line and time spent in one will matter.

At the 2024 Olympics in Paris, Outsight’s LiDAR solution helped with security and crowd management. Outsight was again pressed into service at Tomorrowland, one of the world’s largest music festivals, held annually in Belgium, which attracts hundreds of thousands of attendees.

The Technology Infrastructure for Spatial Intelligence

Spatial intelligence is about digitizing the physical world and creating insights out of it. Achieving this requires processing power that can handle large amounts of data with efficiency. Outsight depends on Intel products and technologies to deploy its solutions at scale.

“You need specific and highly efficient software algorithms that use CPU-based and not GPU-based solutions for energy efficiency,” Bravo says, which is what Intel CPUs deliver.

As for the future, Bravo is excited about the many possibilities—beyond smart cities, airports, and venues—for the digitization of the physical world. “You can have access to a wealth of unique insights and intelligence that was not even imagined before,” Bravo says. “We are entering a new world of digital transformation.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Multisensory AI: Reduce Downtime and Boost Efficiency

When you’re waiting by the side of the road for the tow truck, isn’t that always the moment when you realize you’ve neglected your 75,000-mile tuneup and safety check? The “check oil” light and low-tire pressure alert can avert dangerous situations, but you can still end up in that frustrating and time-consuming breakdown. Now scale up that inconvenience and lost productivity to the size of a factory, where nonfunctioning machinery can result in hugely expensive downtime.

That’s where predictive maintenance comes in. Machine learning can analyze patterns in normal workflow and detect anomalies in time to prevent costly shutdowns; but what happens with a new piece of equipment, where AI has no existing data to learn from? Can some of the attributes that make humans good—if inefficient—at dealing with novel situations be harnessed for machine-based inspections?

Rustom Kanga, Co-Founder and CEO of AI-based video analytics provider iOmniscient, has some answers for these and other questions about the future of predictive maintenance. He talks about the limitations of traditional machine learning for predictive maintenance; when existing infrastructure can be part of the prediction solution—and the situations when it can’t—and what in the world an e-Nose is (Video 1).

Video 1. Rustom Kanga, CEO of iOmniscient, discusses the impact of multisensory and intuitive AI on predictive maintenance. (Source: insight.tech)

What are the limitations to traditional predictive maintenance approaches?

Today when people talk of artificial intelligence, they normally equate it to deep learning and machine learning technologies. For example, if you want the AI to detect a dog, you get 50,000 images of dogs and label them: “This is a dog. That is a dog. That is a dog. That is a dog.” And once you’ve trained your system, the next time a dog comes along, it will know that it is a dog. That’s how deep learning works.

But if you haven’t trained your system on some particular or unique type of dog, then it may not recognize that animal. Then you have to retrain the system. And this retraining goes on and on and on—it can be a forever-training.

The challenge with maintenance systems is that when you install some new equipment, you don’t have any history of how that equipment will break down or when it will break down: You don’t have any data for doing your deep learning. And so you need to be able to predict what’s going to happen without that data.

So what we do is autonomous, multisensory, AI-based analytics. Autonomous means there’s usually no human involvement, or very little human involvement. Multisensory refers to the fact that humans use their eyes, their ears, their nose to understand their environment, and we do the same. We do video analysis, we do sound analysis, we do smell analysis; and with that we understand what’s happening in the environment.

How does a multisensory AI approach address some of the challenges you mentioned?

We have developed a capability called intuitive AI. Artificial intelligence is all about emulating human intelligence, and humans don’t just use their memory function—which is essentially the thing that deep learning attempts to replicate. Humans also use their logic function. They have deductive logic, inductive logic; they use intuition and creative capabilities to make decisions about how the world works. It’s very different from the way you’d expect a machine learning system to work.

“Multisensory refers to the fact that humans use their eyes, their ears, their nose to understand their environment, and we do the same” – Rustom Kanga, @iOmniscient1 via @insightdottech #AI

What we as a company do is we use our abilities as humans to advise the system on what to look for, and then we use our multisensory capabilities to look for those symptoms. For instance, if a conveyor belt has been installed and we want to know when it might break down, what would we look for to predict that it’s not working well? We might listen to its sound: when it starts going “clang, clang, clang,” something is wrong with it. So we use our ability to see the object, to hear it, to smell it to tell us how it’s operating at any given time and whether it’s showing any of the symptoms that we’d expect it to show when it’s about to break down.
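The “clang, clang, clang” idea can be sketched as a simple loudness check: compare each audio window’s RMS amplitude against a baseline recorded when the machine was healthy. This is a minimal illustration of symptom-driven monitoring, not iOmniscient’s actual sound-analysis algorithm; the threshold factor and sample values are assumptions.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an audio window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def is_anomalous(window, baseline_rms, factor=3.0):
    """Flag a window whose loudness far exceeds the healthy baseline."""
    return rms(window) > factor * baseline_rms

healthy = [0.1, -0.1, 0.1, -0.1]    # steady hum of a good conveyor belt
clang = [0.9, -0.8, 0.95, -0.85]    # impulsive bang from a failing one
base = rms(healthy)
print(is_anomalous(healthy, base), is_anomalous(clang, base))  # -> False True
```

The point of the sketch is that the symptom (a loud impulsive sound) is specified by a human up front, so no failure history is needed to start monitoring.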

How do you train AI to do this, and to do it accurately?

We tell the system what a person would be likely to see. For instance, let’s say we’re looking at some equipment, and the most likely break-down scenario is that it will rust. We then tell the system to look for rust or for changes in color. Then, if the system sees rust developing, it will tell us that there’s something wrong and it’s time to look at replacing or repairing the machine.
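A minimal sketch of that rust rule, assuming RGB camera frames and illustrative color thresholds (iOmniscient’s real detector is certainly more sophisticated): count the fraction of pixels that fall in a reddish-brown band and raise a flag once it grows past a limit.

```python
def rust_fraction(pixels):
    """Fraction of (r, g, b) pixels in a reddish-brown band (illustrative thresholds)."""
    def looks_rusty(r, g, b):
        return r > 120 and g < 100 and b < 80 and r > g > b
    return sum(1 for (r, g, b) in pixels if looks_rusty(r, g, b)) / len(pixels)

def needs_inspection(pixels, threshold=0.05):
    """Raise a maintenance flag once enough of the surface changes color."""
    return rust_fraction(pixels) > threshold

clean = [(140, 140, 150)] * 100                          # gray metal surface
rusting = [(140, 140, 150)] * 90 + [(160, 80, 40)] * 10  # 10% rust-colored
print(needs_inspection(clean), needs_inspection(rusting))  # -> False True
```

Because the rule encodes human knowledge of the symptom rather than learned failure examples, it works from day one on brand-new equipment.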

And intuitive AI doesn’t require massive amounts of data. We can train our system with maybe 10 examples of the data set, or even fewer. And because it requires so few data sets, it doesn’t need massive amounts of computing; it doesn’t need GPUs. We work purely on standard Intel CPUs, and we still achieve high accuracy.

We recently implemented a system for a driverless train. The customer wanted to make sure that nobody could be injured by walking in front of the train. That really requires just a simple intrusion system. In fact, camera companies provide intrusion systems embedded into their cameras. And the railway company had done that—had bought some cameras from a very reputable company to do the intrusion detection.

The only problem was that they were getting something like 200 false alarms per camera per day, which made the whole system unusable. So they set the criterion that they wanted no more than one false alarm across the entire network. We were able to achieve that for them, and we’ve been providing the safety system for their trains for the last five years.

Do your solutions require the installation of new hardware and devices?

We can work with anybody’s cameras, anybody’s microphones—of course, the cameras do have to be able to see what you want to be seen. Then we provide the intelligence. We can work with existing infrastructure for video, for sound.

Smell, however, is a very unique capability. Nobody makes the type of smell sensors that are required to detect industrial smells, so we have built our own e-Nose to provide to our customers. It’s a unique device with six or so sensors in it. There are sensors on the market, of course, that can detect single molecules. If you want to detect carbon monoxide, for example, you can buy a sensor to do that. But most industrial chemicals are much more complex. Even a cup of coffee has something like 400 different molecules in it.
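One simple way such a multi-sensor device could classify a complex smell (a hypothetical sketch, not the e-Nose’s actual method): treat the six sensor responses as a vector and match it against stored profiles by cosine similarity. The profile names and values below are made up for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sensor-response vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical six-sensor fingerprints for two known smells.
PROFILES = {
    "coffee": [0.9, 0.1, 0.4, 0.2, 0.7, 0.3],
    "solvent": [0.1, 0.8, 0.2, 0.9, 0.1, 0.6],
}

def classify(reading, profiles=PROFILES):
    """Return the stored smell most similar to a raw six-sensor reading."""
    return max(profiles, key=lambda name: cosine(reading, profiles[name]))

print(classify([0.85, 0.15, 0.35, 0.25, 0.65, 0.3]))  # -> coffee
```

Matching a whole response pattern rather than a single molecule is what lets a handful of broad sensors distinguish complex mixtures like the hundreds of molecules in a cup of coffee.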

Can you share any other use cases that demonstrate the iOmniscient solution in action?

I’ll give you one that demonstrates the real value of a system like this in terms of its speed. Because we are not labeling 50,000 objects, we can actually implement the system very quickly. We were invited into an airport to detect problems in their refuse rooms—the rooms under the airport where garbage from the airport itself and from the planes that land there is collected. This particular airport had 30 or 40 of them.

Sometimes, of course, garbage bags break and the bins overflow, and the airport wanted a way to make sure that those rooms were kept neat and tidy. So they decided to use artificial intelligence systems to do that. They invited something like eight companies to come in and do proofs of concept. They said, “Take four weeks to train your system, and then show us what you can do.”

After four weeks, nobody could do anything. So they said, “Take eight weeks.” Then they said, “Take twelve weeks.” And none of those companies could actually produce a system that had any level of accuracy, just because of the number of variables involved.

And then finally they found us, and they asked us, “Can you come and show us what you can do?” We sent in one of our engineers on a Tuesday afternoon, and on Thursday morning we were able to demonstrate the system with something like 100% accuracy. That is how fast the system can be implemented when you don’t have to go through 50,000 sets of data for training. You don’t need massive amounts of computing; you don’t need GPUs. And that’s the beauty of intuitive AI.

What is the value of the partnership with Intel and its technology?

We work exclusively with Intel and have been a partner with them for the last 23 years, with a very close and meaningful relationship. We can trust the equipment Intel generates; we understand how it works, and we know it will always work. It’s also backward compatible, which is important for us because customers buy products for the long term.

How has the idea of multisensory intuitive AI evolved at iOmniscient?

When we first started, there were a lot of people who used standard video analysis, video motion detection, and things like that to understand the environment. We developed technologies that worked in very difficult, crowded, and complex scenes, and that positioned us well in the market.

Today we can do much more than that. We do face recognition, number-plate recognition—which is all privacy protected. We do video-based, sound-based, and smell-based systems. The technology keeps evolving, and we try to stay at the forefront of that.

For instance, in the past, all such analytics required the sensor to be stationary: If you had a camera, it had to be stuck on a pole or a wall. But what happens when the camera itself is moving—if it’s a body-worn camera where the person is moving around or if it’s on a drone or on a robot that’s walking around? We have started evolving technologies that will work even on those sorts of moving cameras. We call it “wild AI.”

Another example is that we initially developed our smell technology for industrial applications—things like waste-management plants and airport toilets. But we have also discovered that we can use the same device to smell the breath of a person and diagnose early-stage lung cancer and breast cancer.

Now, that’s not a product we’ve released yet; we’re going through the clinical tests and clinical trials that one needs to go through to release it as a medical device. But that’s where the future is. It’s unpredictable. We wouldn’t have imagined 20 years ago that we’d be developing devices for cancer detection, but that’s where we are going.

Related Content

To learn more about multisensory AI, listen to Multisensory AI: The Future of Predictive Maintenance and read Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on X/Twitter at @iOmniscient1 and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Multisensory AI: The Future of Predictive Maintenance

Downtime is a costly killer. But traditional predictive maintenance methods often fall short. Discover how multisensory AI is used to uplevel equipment maintenance.

Multisensory AI uses sight, sound, and smell to accurately predict potential equipment failures, even with limited training data. This innovative approach can help businesses reduce downtime, improve efficiency, and save costs.

In this podcast, we explore how to successfully implement multisensory AI into your existing infrastructure and unlock its full potential.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guest: iOmniscient

Our guest this episode is Rustom Kanga, Co-Founder and CEO of iOmniscient, an AI-based video analytics solution provider. Rustom founded the company 23 years ago, before AI was “fashionable.” Today, he works with his team to offer smart automated solutions across industries around the world.

Podcast Topics

Rustom answers our questions about:

  • 2:36 – Limitations to traditional predictive maintenance
  • 4:17 – A multisensory and intuitive AI approach
  • 7:23 – Training AI to emulate human intelligence
  • 8:43 – Providing accurate and valuable results
  • 12:54 – Investing in a multisensory AI approach
  • 14:40 – How businesses leverage intuitive AI
  • 18:16 – Partnerships and technologies behind success
  • 19:36 – The future of multisensory and intuitive AI

Related Content

To learn more about multisensory AI, read Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on X/Twitter at @iOmniscient1 and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, edge, AI, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today I’m joined by Rustom Kanga from iOmniscient to talk about the future of predictive maintenance. Hi, Rustom. Thanks for joining us.

Rustom Kanga: Hello, Christina.

Christina Cardoza: Before we jump into the conversation, I'd love to get to know a little bit more about you and your company. So, what can you tell us about what you guys do there?

Rustom Kanga: I’m Rustom Kanga, I’m the Co-Founder and CEO of iOmniscient. We do autonomous, multisensory, AI-based analytics. Autonomous means there’s usually no human involvement, or very little human involvement. Multisensory refers to the fact that humans use their eyes, their ears, their nose, to understand their environment, and we do the same. We do video analysis, we do sound analysis, we do smell analysis, and with that we understand what’s happening in the environment.

And we’ve been doing this for the last 23 years, so we’ve been doing artificial intelligence long before it became fashionable, and hence we’ve developed a whole bunch of capabilities which go far beyond what is currently talked about in terms of AI. We’ve implemented our systems in about 70 countries around the world in a number of different industries. This is technology that goes across many industries and many areas of interest for our customers. Today we are going to, of course, talk about how this technology can be used for predictive and preventative maintenance.

Christina Cardoza: Absolutely. And I'm looking forward to digging in, especially when you talk about all these different industries you're working in—railroads, airports. It's extremely important that equipment doesn't go down, nothing breaks, that we can predict things and avoid any downtime. This is something I think all these industries have been striving for for quite some time, but it doesn't seem like we've completely achieved it; there are still accidents, and the unexpected still happens. So I'm curious, when it comes to detecting equipment failure and predictive maintenance, what have been the limitations of traditional approaches?

Rustom Kanga: Today, when people talk of artificial intelligence, they normally equate it to deep learning and machine learning technologies. And you know what that means, I’m sure. For example, if you want to detect a dog, you’d get 50,000 images of dogs, you’d label them, and you say, “This is a dog, that’s a dog, that’s a dog, that’s a dog.” And then you would train your system, and once you’ve trained your system the next time a dog comes along, you’d know it’s a dog. That’s how deep learning works.

The challenge with maintenance systems is that when you install some new equipment, you don’t have any history of how that equipment will break down or when it’ll break down. So the challenge you have is you don’t have any data for doing your deep learning. And so you need to be able to predict what’s going to happen without the data that you can use for deep learning and machine learning. And that’s where we use some of our other capabilities.

Christina Cardoza: Yeah, that image that you just described—that is how I often hear thought leaders talk about predictive maintenance: machine learning collecting all this data and detecting patterns. But, to your point, it goes beyond that. And if you're implementing new technology or new equipment, you find that you don't have that data and you don't have that pattern.

First, though, I want to talk about the multisensory approach that you brought up in your introduction. How does it address some of those challenges you just mentioned and bring a more natural, human-like inspection to predictive maintenance?

Rustom Kanga: Well, it doesn't involve human inspection. First of all, as we saw, you don't have any data, right, for predicting how the product will break down. Well, very often with new products you might have a meantime between failures of, say, 10 years. That means you have to wait 10 years before you actually know how or when or why it'll break down. So you don't have any data, which means you cannot do any deep learning.

So what are the alternatives? We have developed a capability called intuitive AI which uses some of the other aspects of how humans think. Artificial intelligence is all about emulating human intelligence, and humans don’t just use their memory function, which is essentially what deep learning attempts to replicate. Humans also use their logic function. They have deductive logic, inductive logic; they use intuition and creative capabilities and so on to make decisions on how the world works. So it’s very different to the way you’d expect a machine learning system to work.

So what we do is we use our abilities as a human to advise the system on what to look for, and then we use our multisensory capabilities to look for those symptoms. For instance, just as an example, if a conveyor belt has been put in place, has been installed, and we want to know if it is about to break down, what would you look for to predict that it’s not working well? You might listen to its sound, for instance; you might know that when it starts going clang, clang, clang, that something’s wrong in it. So we can use our ability to see the object, to hear it, to smell it, to tell us how it’s operating at any given time and whether it’s showing any of the symptoms that you’d expect it to show when it’s about to break down.

Christina Cardoza: That's amazing. And of course there are no humans involved, but you're adding in the human-like elements—the things somebody manually inspecting would look for: if anything's smoking, if they smell anything, if they hear any abnormal noises. So how do you train AI to detect these things when it's just artificial intelligence or a sensor on top of a system?

Rustom Kanga: Exactly how you said you do it: you tell the system what you're likely to see. For instance, let's say you're looking at some equipment, and the most likely scenario is that it's likely to rust, and if it rusts there's a propensity for it to break down. You then tell your system to look for rust, and over time it'll look for the changes in color. And if the system sees rust developing, it'll start telling you that there's something wrong with this equipment; it's time you looked at replacing it or repairing it or whatever.

Christina Cardoza: Great. Now I want to go back to training the AI and the data sets—like we talked about, how do you do this for new equipment? I think there's a misconception among a lot of providers out there that they need to do that extensive training that takes a long time; they need that data to uncover these patterns, to learn from them, to identify these abnormalities. So, how is your solution or your company able to do this with fewer data sets while ensuring that it is accurate and does provide value and benefits to the end user or organization?

Rustom Kanga: Well, as I said, the traditional approach is to do deep learning and machine learning, which requires massive data sets, and you just don’t have them in some practical situations. So you have to use other methods of human thinking to understand what is happening. And these are the methods which we call intuitive AI. They don’t require massive amounts of data; we can train our system with something like, maybe 10 examples of the data set or even less. And because you require so few data sets, you don’t need massive amounts of computing, you don’t need GPUs.

And so everything we do is done with very little training, with no GPUs. We work purely on the standard Intel CPUs, and we can still achieve accuracy. Let me give you an example of what I mean by achieving accuracy. We recently implemented a system for a driverless train system. They wanted to make sure that nobody walked in front of the train, because obviously it’s a driverless train and you have to stop it, and that requires just a simple intrusion system.

And there are hundreds of companies who do intrusion. In fact, camera companies provide intrusion systems as part of their—embedded into their cameras. And so the railway company we were talking to actually did that. They bought some cameras from a very reputable camera company and they could do the intrusion, the intrusion detection.

The only problem they had was they were getting something like 200 false alarms per camera per day, which made the whole system unusable. Then finally they set the criteria that they want no more than one false alarm across the entire network. And they found us, and they brought us in, and we could achieve that. And, in fact, with that particular train company we’ve been providing them with a safety system for their trains for the last five years.

So you can see that the techniques we use actually provide you with very high accuracy, much higher than you can get with some of these traditional approaches. In fact, with deep learning you have the significant issue that it has to keep learning continuously almost forever. For instance, you know the example I gave you of detecting dogs and recognizing dogs? You have 50,000 dogs, you train your system, you recognize the next dog that comes along; but if you haven't trained your system on a particular, unique type of dog, then the system may not recognize the dog and you have to retrain the system. And this type of training goes on and on and on—it can be a forever training. You don't necessarily require that in an intuitive-AI system, which is the type of technology we are talking about.

Christina Cardoza: Yeah, I could see this technology being useful in other scenarios too, beyond just different types of dogs. I know sometimes equipment moves around on a shop floor or things change, and if you move a camera and its positioning, usually you have to retrain the AI because that relationship has changed. So it sounds like your system would be able to continue to provide results without having to be completely retrained if you move things around.

In that railroad example that you gave, you mentioned how they installed cameras to do some of the things that they were looking to do. And I know a lot of times manufacturers' shops and railroad systems already have their cameras monitoring for safety and other things. Now, if they wanted to take advantage of your capabilities on top of their already existing infrastructure, is that something they would be able to do? Or does it require the installation of new hardware and devices?

Rustom Kanga: Well, in that example of the railway we use the existing cameras that they had put in in the first place. We can work with anybody’s cameras, anybody’s microphones. Of course the cameras are the eyes; we are only the brain. So the cameras have to be able to see what you want to see. We provide the intelligence, and we can work with existing infrastructure for video, for sound, for smell.

Smell is a very unique capability. Nobody makes the type of smell sensors that are required to actually smell industrial smells. So we have built our own e-Nose, which we provide our customers with. It's a unique device with something like six sensors in it. You do get sensors in the market, of course, for single molecules. So if you wanted to detect carbon monoxide, you can get a sensor for carbon monoxide.

But most industrial chemicals are much more complex. For instance, even a cup of coffee has something like 400 different molecules in it. And so to understand that this is coffee and not tea, you need a device like our e-Nose, which has multiple sensors in it. By understanding the pattern that is generated across all those sensors, we know that it is this particular product rather than something else.

Christina Cardoza: So I’m curious, I know we talked about the railroad example, but since your technology spans across all different types of industries, do you have any other use cases or customer examples that you can share with us?

Rustom Kanga: Of course. You know, we have something like 300 use cases that we’ve implemented across 30 different industries, and if you just look at predictive maintenance, it could be a conveyor belt, as I said, that is likely to break down, and you can understand whether it’s going to break down based on its sound. It might be a rubber belt used in an elevator; it might be products that might rust and you can detect the level of rusting just by watching it, by looking at it using a camera. You can use smell; you can use all these different senses to understand what is the current state of that product.

And in terms of examples across different industries, I’ll give you one which demonstrates the real value of a system like this in terms of its speed. Because you are not labeling 50,000 objects you can actually implement the system very quickly. We were invited into an airport to detect problems in their refuse rooms. Refuse rooms are the garbage rooms that they have under the airport. And this particular airport had 30 or 40 of them where the garbage from the airport and from the planes that land over there and so on—it’s all collected over there.

And of course when the garbage bags break and the bins overflow, you can have all sorts of other problems in those refuse rooms, so they wanted to keep these neat and tidy. And to make sure that they were neat and tidy, they decided to use artificial intelligence systems to do that. And they invited, I think it was about eight companies to come in and do POCs over there—proofs of concept. Now they said, “Take four weeks. Train your system and show us what you can do.”

And after four weeks nobody could do anything. So they said, “Take eight weeks.” Then they said, “Take twelve weeks and show us what you can do.” And none of those companies could actually produce a system that had any level of accuracy, just because of the number of variables involved. There are so many different things that can go wrong in that sort of environment.

And then finally they found us, and they asked us, “Can you come and show us what you can do?” So we sent in one of our engineers on a Tuesday afternoon, and on that Thursday morning we were able to demonstrate the system with something like 100% accuracy. That is how fast the system can be implemented, because you don't have to go through 50,000 data sets for training. You don't need massive amounts of computing, you don't need GPUs. And that's the beauty of intuitive AI.

Christina Cardoza: Yeah, that’s great. And you mentioned you’re also using Intel CPUs. I should mention, insight.tech and the “insight.tech Talk,” we are sponsored by Intel. So I’m curious, how do you work with Intel? And the value of that partnership and the technology in making some of these use cases and solutions successful.

Rustom Kanga: We've been a partner of Intel for the last 23 years, and we work exclusively with Intel, so we've had a very close and meaningful relationship with them over these years. And we find that the equipment they produce has the benefit that we can trust it, we know it'll always work, we understand how it works. It's always backward compatible, which is important for us because customers buy products for the long term. And because it delivers what we require, we do not need to use anybody else's GPUs, and so on.

Christina Cardoza: Yeah, that’s great. And I’m sure they’re always staying on top of the latest innovation, so it allows you to scale and provides that flexibility as multisensory AI continues to evolve. So, since you said in the beginning you guys started with AI before it was fashionable, I’m curious, how has it evolved—this idea of multisensory intuitive AI? How has it evolved since you’ve started, and where do you think it still has to go, and how will the company be a part of that future?

Rustom Kanga: Well, it’s been a very long journey. When we first started we focused on trying to do things that were different to what everybody else did. There were a lot of people who used standard video analysis, video motion detection, and things like that to understand the environment. And we developed technologies that worked in very difficult, crowded, and complex scenes that positioned us well in the market.

Today we can do much more than that. We can—we do face recognition, number-plate recognition—it’s all privacy protected. As I said, we do video-, sound-, and smell-based systems. Where are we going? The technology keeps evolving, and we try and stay at the forefront of that technology.

For instance, in the past all such analytics required the sensor to be stationary. For instance, if you had a camera, it had to be stuck on a pole or a wall somewhere. But what happens when the camera itself is moving? For instance, on a body-worn camera where the person is moving around or on a drone or on a robot that’s walking around. So we have started evolving technologies that’ll work even on those sorts of moving cameras, and we call that “wild AI.” It works in very complex scenes, in moving environments where the sensor itself is moving.

Another example: we initially developed our smell technology for industrial applications, for things like waste-management plants, for things like airport toilets. They clean the toilet every four hours, but it might start smelling after 20 minutes. So the toilet itself can say, “Hey, I'm Toilet Six, come back and clean me again.” It can be used in hospitals where a person might be incontinent and you can say to the nurse, “Please go and help the patient in room 24, address the smell.” And so on. It can be used for industrial applications of a number of types.

But we also discovered that we could use the same device to smell the breath of a person, and using the breath we can diagnose early-stage lung cancer and breast cancer. Now, that’s not a product we’ve released yet. It is—we are going through the clinical tests and clinical trials that one needs to go through to release this as a medical device, but that’s where the future is. It’s unpredictable. We wouldn’t have imagined 20 years ago that we’d be developing devices for cancer detection, but that’s where we are going.

Christina Cardoza: It’s amazing to see, and I can’t wait to see what else the company comes up with and how you guys continue to transform industries and the future. I want to thank you, Rustom, again for coming onto the podcast; it’s been a great conversation.

And thanks to our listeners. I invite all of our listeners to follow us along on insight.tech as we continue to cover partners like iOmniscient and what they’re doing in this space, as well as follow along with iOmniscient on their website and their social media accounts so that you can see and be a part of some of these technologies and evolutions that are happening. So thank you all again, and until next time this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.