AI PCs Put the Power of Edge AI in Your Hands

From the start, personal computers have offered exciting new ways to boost productivity, collaboration, and creativity. Over the past 40+ years, the exponential growth in the power and functionality of PCs has continued to dramatically change the way we work, create, entertain, and play.

Today, the AI PC is a transformational shift in personal computing. With AI-enhanced software and a new generation of computing power, this PC is built to be the digital productivity partner we can all benefit from.

Now, AI is no longer just for data scientists and developers or limited to the cloud. The path to democratizing AI is giving everyone from factory engineers to artists, teachers, and others the potential to benefit from the AI applications they need—when they need them.

The Market’s First AI PC: Intel Power and Performance for Any AI Workload

Intel leads the way with its AI PC, launched in September 2023. At its heart is the Intel® Core Ultra processor, designed to optimize the efficiency and performance of AI software. The processor pairs the CPU with a graphics processing unit (GPU) to handle heavy AI workloads and a neural processing unit (NPU) to run sustained workloads efficiently.

What makes the Intel AI PC user experience remarkable is the ability to automate tedious daily tasks. Imagine the time savings when you can write and send email with a one-sentence prompt, plan a meeting agenda, take notes, or automate image and video editing—all powered by AI.

Alongside AI PC capabilities, software delivers the functionality needed to develop and run these critical applications. Intel works with an ecosystem of more than 100 independent software vendors (ISVs), supporting their development of solutions that fully leverage the AI PC’s fast and intuitive capabilities.

Here is just a sample of how Intel partners around the world are innovating across a wide range of verticals and edge AI use cases.

Conversational AI Advances Staff and Customer Communications

One emerging area is conversational AI, which can transform the way businesses engage with their customers, offering personalized solutions while enhancing operational efficiencies, data analysis, and decision-making processes. Intel partners like Dhee.ai, with its conversational voice AI platform, make this possible across various industries. For example, the company empowers banking sales and service teams to better interact with customers in regional Indic languages. Dhee.ai uses AI, machine vision, and natural language processing (NLP) to facilitate seamless communication across different languages.

The Intel AI PC provides the processing power required to execute the compute-intensive tasks of speech recognition and synthesis. And the Intel® OpenVINO toolkit is key to optimizing the inference latency of Dhee.ai’s Whisper model for real-time conversations.
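
Dhee.ai’s pipeline isn’t public, but a minimal sketch of the general pattern (exporting a Whisper checkpoint to OpenVINO’s intermediate representation with the optimum-intel extension, then transcribing locally) might look like this. The model name and the silent stand-in audio are illustrative assumptions:

```python
# Hedged sketch: Whisper speech-to-text via OpenVINO and optimum-intel.
# Requires: pip install "optimum[openvino]"
import numpy as np
from optimum.intel import OVModelForSpeechSeq2Seq
from transformers import AutoProcessor

model_id = "openai/whisper-small"  # assumption; any Whisper checkpoint works

# export=True converts the PyTorch weights to OpenVINO IR on the fly
model = OVModelForSpeechSeq2Seq.from_pretrained(model_id, export=True)
processor = AutoProcessor.from_pretrained(model_id)

# Whisper expects 16 kHz mono audio; one second of silence stands in here
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```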

Other businesses that benefit from conversational AI solutions include entertainment venues and restaurants. One example is Vistry, Inc., which deploys ZenoChat, a gen AI chat assistant, across multiple use cases in these segments. Running its software on the Intel AI PC and using OpenVINO-optimized models ultimately enables the company to deliver better end-user experiences at the edge.

The Vistry Order Chatbot for restaurants is both a customer and employee assistant for placing and taking orders. The virtual assistant platform includes order tracking and tracing, predictive order make times, automated curbside check-in, and inquiry handling for customers and employees.

ZenoChat was recently deployed at the Formula 1 Circuit of the Americas event in Austin, Texas, to facilitate communication between employees and visitors. The edge AI platform helped staff handle hundreds of distinct queries in multiple languages over the course of the three-day event.

AI PCs Power Data Security in Healthcare

Edge AI also plays a big role in data security. Healthcare is a great example, where patients’ personal information can be at serious risk. Tausight, a leader in PHI (Protected Health Information) security intelligence, helps hospitals and clinics reduce cybersecurity incidents by simplifying the detection and management of PHI risk.

The company’s situational PHI awareness platform uses machine learning, federated learning, and NLP to streamline data protection. A custom NLP model is deployed directly on an Intel Core Ultra processor-powered PC for real-time detection, possible redaction, and data protection—ensuring compliance with privacy laws and regulations. Alongside enhanced security, the platform enables improved system performance, cost savings, and efficiency—the foundation for safe and secure medical facility operations.
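
Tausight’s model itself is proprietary, but deploying any pretrained model to the Core Ultra NPU follows a standard OpenVINO pattern: read the model, compile it for the NPU device, and run inference locally so PHI never leaves the machine. A minimal sketch, with hypothetical file and input names:

```python
# Hedged sketch: running a local model on an AI PC's NPU with OpenVINO.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g., ['CPU', 'GPU', 'NPU'] on an AI PC

model = core.read_model("phi_detector.xml")  # hypothetical OpenVINO IR file
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

# Illustrative input: a batch of 128 token IDs from an upstream tokenizer
token_ids = np.zeros((1, 128), dtype=np.int64)
scores = compiled([token_ids])[compiled.output(0)]  # per-token PHI scores
```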

Low-Code Platform Democratizes AI

As organizations look to streamline workflow and optimize operations, the AI PC also provides the powerful and secure platform they need. And low-code software takes it further by simplifying and shortening edge AI application development time. Intel partner Iterate.ai, an AI platform developer, makes it possible for organizations to build their own generative AI large language models (LLMs) from internal data and documents with its GenPilot platform.

Even for complex tasks like financial planning, logistics management, answering customer emails, and interpreting documents across databases, a low-code platform enables organizations to develop custom AI applications much faster and allows companies to select the models they prefer. Plus, low-code platforms like GenPilot make those applications more straightforward to maintain and customize to accommodate future use cases.

As software makers build in increasingly advanced features, the demand for more powerful hardware increases. And as hardware becomes more capable, there’s more room for software innovation. The AI PC is the future of computing, and that future is here today.

You’ll find AI PC creativity and content development, productivity, security, manageability, collaboration, and other solutions—all designed by Intel partners—on the Intel AI PC website.

Note: AI features may require software purchase, subscription or enablement by a software or platform provider, or may have specific configuration or compatibility requirements. Details at intel.com/AIPC. Results may vary.

 

Edited by Christina Cardoza, Editorial Director for insight.tech.

Generative AI Chatbots Simplify the Way We Work

Managers of today’s public-facing workforces are challenged to equip their staff with the complex and ever-changing information they need to serve visitors. And behind the scenes, it can be difficult to train skilled workers to execute complex procedures and follow compliance regulations. “Everyone’s role is increasingly complex,” says Atif Kureishy, Founder and CEO of Vistry, a leader in conversational AI. “Workers need to have access to real-time information, or be fluent and knowledgeable in a very specialized domain.”

Employers deal with their own issues, too, as they look for innovative ways to optimize labor spend in the face of economic headwinds and a complex operational landscape. Generative AI chat assistants can help solve these problems for employees and employers alike.

Generative AI Tools at the F1 COTA

Vistry’s AI Chat Staff Assist deployment at the Formula 1 (F1) Circuit of the Americas (COTA) in Austin, Texas, is a powerful example of the value of such solutions.

With 450,000 attendees and 10,000 staff members, the sheer scale of the COTA event was a challenge in itself. It was difficult to answer visitor questions on topics as varied as ticketing, transportation, schedules, and facilities. Working with COTA officials and their IT partners, Vistry customized the ZenoChat gen AI chat assistant, which staff members accessed via their mobile devices.

The generative AI model was trained using event-specific data to ensure accuracy, and was equipped with a multilingual user interface to facilitate communication between employees and visitors. The solution enabled real-time responses to queries—enhanced by third-party mapping software to help workers provide directions to guests and navigate the sprawling venue themselves.

The results pleased both COTA leadership and staff. Vistry’s AI platform was able to handle hundreds of distinct queries over the course of the three-day event. Even as questions became more frequent and complex, workers grew increasingly comfortable with the tool. The result was greater efficiency—and less stress than in previous years. As one staff member put it, the AI assistant “turned potential chaos into orchestrated excellence.” Event organizers, for their part, found that the AI chatbot increased employee preparedness and enhanced guest experiences.

AI Chat Use Cases in Life Sciences

AI chat solutions offer clear benefits in a wide range of vertical segments and use cases. In life sciences manufacturing, for example, customized AI assistants can support workers in laboratories or on the factory floor. These employees need real-time access to information—often stored in extensive, inaccessible documentation—to ensure that they follow the proper chemical manufacturing and control protocols and meet the compliance requirements of regulatory bodies.

A well-trained AI tool can help manufacturing, quality assurance, and R&D teams find answers to their bill of material (BOM) questions, explore and understand the dependencies between their raw materials suppliers and suppliers of other types of components and equipment, and access other detailed information they need to do their jobs.

Achieving compliance with REACH, OSHA SDS, and GHS regulatory frameworks is essential for life sciences firms to ensure safety and operational success. The ZenoChat platform offers a powerful solution by creating a knowledge graph of compliance information, automating documentation processes, and enhancing competitive analysis. As a result, life sciences firms can streamline compliance efforts, reduce errors, and stay ahead in a competitive industry while ensuring they meet all regulatory standards.

Of course, in industries where accuracy and precision are paramount, the risk of an “AI hallucination,” a failure mode in which generative AI tools present incorrect information with deceptive certainty, is a major concern. But Kureishy says it’s possible to improve the accuracy of AI chat assistants for such use cases. “Our models are based on a retrieval-augmented generation (RAG) architecture to ground their responses in a far more limited and trustworthy set of enterprise data and are combined with knowledge graphs to further improve the accuracy and relevance of replies.”

The result is an AI model that minimizes the risk of hallucinations found in other LLMs—and that can be made even more reliable via a system of checks and validations.
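
Vistry’s implementation isn’t public, but the core RAG loop Kureishy describes can be sketched in a few lines: retrieve the enterprise documents most relevant to a query, then prompt the LLM to answer only from them. This toy version substitutes bag-of-words cosine similarity for a real embedding model, and the documents are invented examples:

```python
# Toy retrieval-augmented generation (RAG) sketch; not Vistry's actual code.
from collections import Counter
import math

documents = [
    "Shuttle buses depart from Lot B every 15 minutes on race days.",
    "Lost items are held at the Guest Services desk in Grandstand 4.",
    "General admission gates open at 8am; paddock access requires a pass.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())  # stand-in for a real embedder

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def build_grounded_prompt(query: str, k: int = 2) -> str:
    q = embed(query)
    top = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
    # Grounding the LLM in retrieved enterprise data is what curbs hallucination
    return "Answer using only this context:\n" + "\n".join(top) + "\n\nQuestion: " + query

print(build_grounded_prompt("Where do the shuttle buses leave from?"))
```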

AI PCs and Software Toolkits Enable Edge Deployments

While RAG-based architectures improve accuracy, they still don’t address the other major concern of industrial enterprises: data security. For use cases that involve sensitive intellectual property, even the generally robust protections of a cloud-deployed model can constitute an unacceptable risk.

This is one reason Vistry makes it possible to run its AI chat tools entirely at the edge—a deployment mode that owes much to the company’s technology partnership with Intel.

“We were very excited to see how easy it was to deploy a RAG-based system at the edge running on an Intel AI PC,” says Kureishy. “Intel® Core Ultra processors are amazingly performant, even when working with GPU-intensive inferencing workloads that these AI models require.”

In addition, Vistry used the Intel® OpenVINO toolkit to optimize its AI models for edge deployment while still delivering the speed and accuracy required to preserve user experience.
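
Vistry’s models aren’t public, but a typical OpenVINO edge-optimization flow looks like the sketch below: convert the trained model to OpenVINO’s format, compress weights to FP16 to shrink the footprint, and compile for whatever accelerator the edge device offers. The file names are assumptions:

```python
# Hedged sketch of a common OpenVINO edge-optimization flow.
import openvino as ov

# convert_model accepts ONNX, PyTorch, and TensorFlow models
ov_model = ov.convert_model("chat_model.onnx")  # hypothetical file

# FP16 weight compression roughly halves the on-disk footprint
ov.save_model(ov_model, "chat_model.xml", compress_to_fp16=True)

# At run time, AUTO selects the best available device (CPU, GPU, or NPU)
core = ov.Core()
compiled = core.compile_model(ov_model, "AUTO")
```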

The ability to deploy AI chat assistants completely at the edge allows highly risk-averse users to take advantage of these solutions. And beyond that, it supports businesses that operate in remote locations, where connectivity is an issue—giving every enterprise a way to ensure continued service in “disconnected” mode in case of an IT outage.

Unlocking the Value of Unstructured Data

In the years ahead, more and more organizations are likely to turn to AI tools to help their employees increase efficiency and productivity—in part because there are simply so many sectors in which workers need fast, accurate answers to their questions.

“The expectation is for a person to be able to consume unstructured, document-oriented information readily, but when that information has proliferated and is settled in various content management systems, there’s a real gap: one we’ve all lived, personally and professionally,” says Kureishy. “AI assistants finally give employees easy, real-time access to all of that information—and that’s going to be incredibly valuable for a lot of businesses.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Smart City Software Suite for Urban Management

Cities across the globe grapple with challenges brought on by rapid population growth. This surge has led to social and economic imbalances, resource depletion, and strained public services. Citizens are increasingly burdened by inefficient healthcare, ineffective public safety, inadequate energy and water management, and poorly managed public transport systems.

Outdated and disjointed technology exacerbates these issues, making it difficult for urban administrators to maintain effective operations and respond to the needs of their citizens. Without accurate information about hazards, environmental conditions, and the state of infrastructure, handling emergencies and planning for the future becomes a daunting task.

Innovative technologies offer a solution to these challenges. By managing services through a comprehensive AI and IoT platform, city administrators can respond to issues in real time, improve coordination and efficiency, remotely monitor and maintain infrastructure, and gain valuable insights that support better decision-making. This approach not only enhances public safety and resource management but also fosters a more balanced and sustainable urban environment.

Streamline Operations with Smart City AI

Many of the problems cities face today arise from a lack of interconnected urban systems and an absence of digital technologies. City departments often operate in silos, without a centralized, data-driven model to support planning, decision-making, and efficient operations. “Siloed data makes it difficult to manage complex issues and respond to emergencies,” says Vivek Sharma Panathula, Business Management Lead at Trinity Mobility, a provider of smart city platforms and applications. Collaboration is essential, not only for managing emergency responses, but for ensuring that services reach where they’re needed and resources are effectively deployed.

Trinity’s Smart City Solution addresses these challenges through a unified city digital platform powered by IoT and AI technology, along with pre-integrated applications that form the foundation of a smart city. This intelligence is built through three layers: the platform, bundled applications, and various user personas, all of which integrate city operations into a cohesive system that drives digital transformation.

The Smart City Digital Platform can seamlessly connect to any sensor, device, or application system to collect, process, store, and analyze data, helping detect anomalies and enabling proactive responses. The pre-integrated bundled applications—such as the City Operations Centre, Mobile Workforce Management System, Citizen Engagement System, Open Data Portal, and Smart City Solutions for various departments—provide out-of-the-box support to kick-start and accelerate the smart city journey.

Building a Smart City with AI and IoT at Scale

As cities expand their infrastructure by connecting it to an IoT platform, they can manage services more efficiently and effectively. A prime example is the New Administrative Capital for Urban Development (ACUD) near Cairo, Egypt, where Trinity, in partnership with master systems integrator Honeywell, is rolling out its Smart City Software Suite to manage a wide range of services for more than 6 million people.

“ACUD selected Honeywell for developing an integrated Command and Control Center (CCC) to coordinate field-level sensors with departmental data,” explains Panathula. The city aims to streamline operations, enhance sustainability, and improve citizen services through this massive project.

Using Trinity’s platform, city administrators can optimize emergency response, monitor essential services like electricity and water, and ensure efficient resource management. The platform also allows departments to track their fleets and monitor drivers’ behaviors, while providing tools for citizens to interact directly with city services. Through a mobile app, residents can report issues, set up services, and pay fines, fostering a more engaged and responsive city government.

By analyzing data from these interactions and citizen feedback, the city can continuously improve its operations and policies, ensuring that the smart city evolves in line with the needs and expectations of its residents.

To manage large volumes of sensor data, a municipal IoT system must be fast and efficient. The Trinity platform uses high-performance Intel® Xeon® processors to analyze sensor data in near-real time and deliver quick responses. The Intel® OpenVINO toolkit eases the development of AI and IoT applications, while Intel® Software Guard Extensions (Intel® SGX) work behind the scenes to keep citizens’ personal information private and secure.

Digital Twins and Future Innovation

As cities embrace more connected services, IoT technology rapidly advances. For instance, Trinity is developing a digital twin platform to enhance the Misk City project in Riyadh, Saudi Arabia. This platform will enable designers, engineers, and construction teams—whether on-site or remote—to collaborate in real time using functional 3D models of the development. Once completed, the digital twin will continue to provide value by optimizing emergency response, guiding first responders to critical locations, and ensuring the safety of workers in case of an incident. It’s just one of many applications gaining traction as IoT solutions catch on throughout the world, says Panathula. Could the next smart city be your own?

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Accelerate Interventional Radiology System Development

Doctors have long relied on diagnostic radiology to identify and pinpoint patient disorders before treatment. Modern interventional radiology (IR) equipment allows them to obtain real-time medical images in the operating room as a procedure unfolds, informing their work and speeding progress.

Hospitals are eager to expand the use of IR, which can lower costs, reduce risks, improve patient outcomes, and shorten hospital stays. However, the makers of these systems find it tough to meet that demand. Building an IR machine that can rapidly analyze medical images and deliver on-the-spot guidance is technologically complex, and medical equipment requires lengthy certification reviews before it can be released.

Embedded computing OEMs help medical equipment providers overcome many hurdles. For example, companies like Siemens Healthineers, a leading innovator in healthcare tech; HY Medical, a developer of computer vision medical imaging systems; and digital surgery platform provider Caresyntax can obtain expert assistance with product design and technology selection, achieve certification faster, and deliver equipment to customers sooner.

By incorporating the right hardware into their equipment, they can ensure that hospitals and clinics will receive support for many years, sparing them the expense of costly upgrades and fixes.

Using AI in Medical Imaging

IR systems comprise a mix of hardware and software components that must work together seamlessly to provide near-real-time results and are complicated to design. And because the equipment is used for treating patients, it is classified as safety-critical and must meet exacting technical requirements to achieve certification. Bringing a new product to market often takes years.

Prodrive Technologies, a global OEM and manufacturer of embedded computing systems, shows why a partner with deep healthcare technology experience is essential for medical equipment builders. “Building, testing, and integrating complex systems is what we’ve been doing for the past 30 years,” says Bartosz Straszak, Sales Manager of Prodrive.

In addition to radiology technologies, the company also has expertise in technologies essential for medical machine operations, including motion control, which is needed to stabilize the C-arm that many X-ray and CT machines use during procedures. The C-arm swivels around the patient to capture real-time, high-resolution images from many angles, giving physicians a multidimensional view of the surgical area without having to move the patient. Prodrive also produces gradient amplifiers, which modulate the delivery of magnetic fields in MRI machines.

“From the point where an image is captured to the point where it’s processed, we have all the experience and components to help developers build their systems. When they come to us with custom requirements, we can incorporate them and provide a complete product,” Straszak says.

Prodrive also tests, validates, and certifies safety-critical products at its own facilities, potentially shaving years off development time. Though the company does not design software, it can help builders make decisions about deploying it effectively. For example, if a builder wants to use AI software to automatically annotate medical images in near-real time, Prodrive can help them select the best hardware platform for running the program efficiently.

Prodrive can also refer equipment builders to software partners, to assist with tasks such as training computer vision models to recognize medical images. “We help our customers by introducing them to the right software partners. It becomes a three-way partnership,” Straszak says.

Supporting Critical Systems with High-Performance Computing

Fast, reliable hardware lies at the heart of Prodrive IR systems. “Image processing is a very computationally intensive process, and we rely on Intel components as the base,” Straszak says.

Prodrive’s Zeus servers use 4th Gen Intel® Xeon® Scalable processors and 5th Gen Intel® Xeon® Scalable processors to process data-heavy images almost instantaneously. The company’s Poseidon industrial PCs use 13th Gen Intel® Core processors and 14th Gen Intel® Core processors, allowing medical staff to do real-time AI image analysis and editing—a capability recently made possible by improvements in Intel processors’ speed and efficiency.

“The latest generations of Intel Core processors are three or four times more powerful than those of five years ago. That allows builders to create solutions that were previously too expensive to be commercially viable,” Straszak explains.

Another boon for equipment makers—and their hospital customers—is the electronic hardware’s long-term support. “Intel provides extremely long life cycles for embedded computing equipment—up to 15 years. That means that even if product development takes two to three years, the equipment can still be manufactured unchanged for 12 years,” Straszak says.

Reliability and ongoing maintenance are especially strong selling points for hospital machines. Making even small changes to the hardware that’s used near patients can trigger the need for recertification, indefinitely delaying the deployment of sought-after equipment.

The Evolving IR Future

As IR deep-learning models gather more data from systems in operation, their accuracy and capabilities will continue to improve. In the future, generative AI may play a part in annotating diagrams for medical staff, creating summaries of surgical reports, and perhaps even more.

“In many cases, AI can see details that the human eye cannot, but it sometimes struggles to make accurate decisions,” Straszak says. “Generative AI may be able to explain AI decisions, which could then be verified by humans. We must be able to trust AI output to create new features.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Embedded Systems: Balancing Power, Performance, and AI

In the embedded systems industry, it’s all about finding the balance: edge versus cloud; software versus silicon; high performance versus speed; power consumption versus efficiency. And don’t forget cost! Many options are available to power many different applications in many industries, especially with edge and AI becoming more prevalent.

We asked Alex Wood, Global Marketing Director at Avnet and Tria Technologies (formerly Avnet Embedded), about all things embedded systems. He spoke about the wide range of industries that depend on embedded systems, the importance of AI to the whole industry, and finding the right balance of processor power to outcome (Video 1). Just because a company can have the newest, fastest processor, should it? And just because that company deploys it to create some shiny new technology or app, is it something its customers will want to use?

Video 1. Alex Wood from Tria Technologies discusses real-world edge AI applications and their demands. (Source: insight.tech)

What are the current technology trends in the embedded systems space?

I think we’re at a nexus point in the industry. With AI there’s a lot of emphasis on putting things into the cloud; and then there’s a lot of pushback from people who want to put things on the edge. And both of them have their own challenges and potential setbacks. Customers are saying to us, “We want to leverage this, but we’re not entirely sure how to leverage it.”

What are some of the challenges of driving AI at the edge?

I think power is the key thing—that’s going to be the make-or-break for AI. AI consumes a vast amount of data and is super power hungry. It’s making Bitcoin look almost power efficient right now. And a lot of businesses don’t realize how much power those applications consume at the edge. They’ve outsourced the demand to a data center so they don’t see the challenges firsthand.

So I think reducing the power requirements of performing these applications is going to be a key challenge. That’s going to determine whether or not AI sticks around in this hype cycle—depending on how you define AI and how it works. As these applications access and absorb large data models and process things in real time, they will all require more energy-efficient and more heat-efficient processing.

With #AI there’s a lot of emphasis on putting things into the #cloud; and then there’s a lot of pushback from people who want to put things on the #edge. And both of them have their own challenges and potential setbacks. @Avnet via @insightdottech

What sorts of applications are your customers building these days?

There’s loads of different things we’re working on with customers at the moment. One example is in new farming applications, where artificial intelligence is being used as an alternative to things like putting dangerous forever chemicals into the soil or just for more efficient farming.

You can train an AI robot to identify weeds in the field and to tell weeds and pests apart from crops and non-harmful animals. Otherwise a human has to walk through the fields taking photos of the different plants and then educating the people working in those fields. You can create an AI application that does the crop checking for you.

You want to be able to program the robot at the edge to be able to do that edge-based AI recognition; you don’t necessarily want to put all of that content into a data center. You don’t necessarily have a reliable cell data connection in that circumstance either. And vision is where the jump is in terms of the processing requirements—live-vision AI that is able to identify what it’s looking at as quickly as possible and then act on that identification in a short amount of time, instead of having to send signals back to a data center for crunching.

At the opposite end of the spectrum there are things like automatic lawnmowers for people at home so they can map out the best path around the lawn. One is a big, future-facing altruistic solution; the other one is a more practical, real-life solution. But it’s those practical challenges in the real world that really put the technology to the test.

What should users consider when it comes to high-performance processors?

A lot of our customers will have different tiers of their product that they’re creating for different markets. In mass-scale agriculture—say in America with the giant fields—they want to be able to do things at speed, and they’ll have a top-of-the-range solution to cover a huge amount of distance on a giant farm. They also have the ability and the money to invest in that. There will also be a slightly slower, slightly cheaper, mid-range application and then a lower-range option as well.

For me, the industry is driven forward by the actual application. I’m always reminded of the picture that does the rounds on the internet: it’s of a field and the manufactured path that leads around its outer corner. And then there’s a trodden path that goes diagonally across the field where people have just chosen to walk. It’s design versus user experience.

We’ve seen that in the AI/IoT space recently, where there was all this exciting talk about what was possible, but at the end of the day what was successful was defined by people actually using it and finding it useful. I recently upgraded my aging fridge to a semi-IoT model that tells me if the door is open or if the temperature is too high or too low. I don’t need one with a screen in it that gives me information about the weather—I’ve got a separate display in my kitchen for that—and I don’t need a camera in there. But I do like it if it warns me if the door’s been left open. Those real-life applications are what stick around.

How important is the processor-RAM combination?

It connects up with what I said before about power efficiency. If I’m building a gaming PC, I want to have a higher frame rate so videos will render faster. But the last time I upgraded my graphics card, I had to get a PSU that was twice the size of the previous one. I was pushing a thousand watts to run a proper PC rig when it used to be that 300 watts was a lot. There’s all of the innovation, the excitement, the things you could add. But then, realistically, you have to run it with a certain amount of power draw in order to get what you want. You’ve got to sacrifice something to get something else.

Think about an electric car: You add loads of bells and whistles to it, so it gets heavier and heavier, to the point that the range drops. And then if you want a long-range model, you’ve got to increase the aerodynamics, and that means stripping out things like power seats in order to reduce the weight. So you’ve got to find that middle space, that sweet spot in these sorts of applications.

For most customers, it’s not so much about getting a more powerful processor or the most powerful processor; it’s about balancing consumption, longevity, and capability that’s specific to the application. Of course, for other customers there is a marketing element to it: They want to buy the absolute top of the range, the flagship processor, when they might not need it. Sometimes, though, they actually do—it depends on the application. I would rather sit down with the customer and say, “Tell me what you’re actually building.” Rather than, “You need the top of the range. You need the i9 immediately.”

How does Avnet/Tria Technologies meet users’ range of requirements?

I think we’ve got a pretty good range, one that goes from tiny little low-power compute applications all the way up to the COM-HPCs with server-grade Intel processors in them. Those are designed for edge-based image processing and AI applications, but they’re larger as well. So you have to have that balance between size and power consumption and then what they’re capable of.

A lot of the larger modules, the COM-HPC modules, they’re motherboard-sized, which means that you’ve got to put them inside a dedicated case. You couldn’t just embed them directly into a product unless it was a really big product. Public transportation is one big-product thing we’re working on at the moment. For things like that, being able to take data from a huge number of sensors from a train or a train station, analyze it all, react to it all in real time—that pretty much requires an on-location server. Also, sometimes you can’t rely on the data network being reliable.

Can you talk about the partnership between Avnet/Tria and Intel?

One example of what we’re working on with Intel is cobotics—cooperative robotics—with one of our customers: building real-time image sensors into a cobotics environment so that a robot can operate safely in the same space as a human. If a human moves into the robot’s space, the robot arm stops moving; if the human picks something up, the robot knows where that thing is and can take it from the human again.

We demonstrated an early example of that at embedded world in Nuremberg this year. The image processing was built around a combination of Intel-based SMARC modules and then our Intel-based COM-HPC modules. Those two things communicate with each other to analyze the signals from the cameras, and then they communicate with the robot in real time as well.

The processor we use for our customers depends on the size and the shape of the module that it needs to go into. We typically offer the Intel Atom® and the Intel® Core series, and the Intel® Xeon® series at the server end. It’s really cool to see what the product team does, putting things into such a small space. I’ve been working with motherboards and processors for years, so to see this sort of computing application in such a small package with all the thermal management—it’s a fine art.

And then it’s a fascinating challenge for us to develop applications in the environment that the product’s going to be used in. Being able to deploy an Intel processor and its capabilities—and the new AI-based processes we’re working on as well—to bake those into a small product to use at the edge is pretty exciting.

How is the latest AI technology helping the embedded systems industry advance?

I was at a recent Intel AI event, and the applications around how AI can accelerate an application at the edge were fascinating. There were things like supermarket-checkout applications that automatically recognize the product you’re holding, as well as supermarket queue-management automation.

Dell was up on the stage at the event showing the laptops they’re going to be releasing with built-in AI applications—so it’s an AI device instead of a computing device and something that’s really leaning into that collaborative AI-application environment. Intel showed a case study video of scouting athletes in Africa for their future Olympic potential based on an image-processing platform. That was really cool and really captured my imagination.

I think that AI is at a nexus point at the moment, and I think edge computing is at a nexus point as well—being able to take AI applications away from the cloud and put them onto the device. It’s a really exciting time to be working in computing on a small-form factor with AI in this space.

Related Content

To learn more about the latest edge AI innovations, see what Intel partners around the globe are doing in their industries, and listen to our podcast Beyond the Hype: Real-World Edge AI Applications. For the latest innovations from Tria Technologies, follow them on LinkedIn.

This article was edited by Erin Noble, copy editor.

See the Bigger Picture with 3D LiDAR Applications

Video imaging has become a cornerstone of business operations, but its limitations are clear. Challenges like low light, obstructions, and tracking multiple targets hinder its effectiveness. Enter 3D LiDAR: a solution known for its unparalleled situational awareness and accuracy.

While 3D LiDAR isn’t a new technology, its capabilities are becoming more affordable and accessible to businesses. In this episode, we explore how to unlock the full potential of 3D LiDAR in your business. We discuss real-world applications, ways to overcome implementation hurdles, and how this innovative technology can drive your business forward.

Our Guest: Quanergy

Our guest this episode is Gerald Becker, VP Market Development and Alliances at Quanergy, an AI-powered 3D LiDAR solution provider. Prior to joining Quanergy, Gerald served as Senior Director of Sales and Business Development at the AI and computer vision apps company SAFR by RealNetworks. At Quanergy, he leads the identification and development of strategic channel partnerships in the security, smart city, and smart spaces markets.

Podcast Topics

Gerald answers our questions about:

  • 1:51 – Bringing 3D LiDAR beyond autonomous vehicles
  • 6:18 – Making 3D LiDAR more accessible to businesses
  • 7:42 – Implementing 3D solutions into existing infrastructure
  • 11:47 – Gaining actionable insights and decision-making
  • 14:48 – Real-world 3D LiDAR application results
  • 19:32 – The partnerships and technology behind the solutions
  • 22:01 – Emerging trends and technologies to look out for
  • 23:58 – Final thoughts and key takeaways

Related Content

For the latest innovations from Quanergy, follow them on Twitter at @quanergy and LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network-technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking to Quanergy’s Gerald Becker about advancements in 3D LiDAR. Hey, Gerald, thanks for joining us.

Gerald Becker: Thanks for having us.

Christina Cardoza: Before we jump into the conversation, if you could just give us a brief background of yourself and the company, that would be great.

Gerald Becker: So, a little bit about myself, Gerald Becker. I’ve been in the security space for a couple of decades now, a little over 20 years, and I’ve been in every facet of the industry, from end user to systems integrator, and most recently on the manufacturing side, which is where I really found my stride. I’ve been with Quanergy just shy of four years now.

Quanergy is a manufacturer of 3D LiDAR–sensing hardware, and we’re also a developer of software. We were founded in 2012, and we were one of the first LiDAR companies to actually come out in the space commercially, originally targeting autonomous vehicles, right? The holy grail. So we were one of the first companies to come out commercially and be able to offer a turnkey solution to various markets, which we’ll go into here in a bit.

So, really looking forward to this discussion. Thank you, Christina.

Christina Cardoza: Yeah, absolutely. And excited to jump in. You mentioned the company’s been around since 2012, so obviously LiDAR—3D LiDAR—it’s not a new technology, but I feel like it’s been gaining a lot more interest lately. And you said it started in automated driving, but now it’s spanning across different industries and different businesses.

So I’m just curious, if we could start off talking about what is 3D LiDAR exactly when we’re talking? How does it go beyond automated cars, and what are the pain points that businesses are trying to solve with it today?

Gerald Becker: This is the fun stuff, right? So, there’s a lot of applications for LiDAR. Predominantly everybody knows LiDAR being used for automotive and robotics and stuff like that. Also terrestrial mapping. So, putting these on drones and mapping environments to understand is there a pyramid hidden behind these rainforests, and stuff like that. A lot of cool applications that have been out there for years and years. So LiDAR is absolutely not a new technology. It’s been around for decades—very, very long time. It’s not until, I would say, the past 10 years that we’ve really started going beyond the comfort zone of what LiDAR can do.

In my role within the organization, I head up the physical-security, smart space, and smart city market sectors. And with that being said, there’s so much applicability as far as what you could do with 3D LiDAR in those three markets, because they’ve always been confined to a 2D space—like what we’re seeing on this camera. In those spaces they’ve always predominantly used, like, radar, camera, other types of IoT sensors that have always either been 1D or 2D technologies.

But now, with the advent of 3D technologies and the integration ecosystem that we’ve developed in the past few years, we now provide so much more flexibility to see beyond the norm, see beyond two dimensions, see beyond what’s been the common custom of sensing in this space.

So, for security we’re doing some very, very big things. In security, because they’ve predominantly been using radar, cameras, and video analytics, 3D sensing is now able to provide additional capabilities: we provide depth, we provide volume, and even more so in 360, with centimeter-level accuracy. Now, what that does for security applications is increase the TCO advantage compared to legacy technologies while decreasing the number of false alarms, so teams can actually track and see what is real and what isn’t, right?

So in these legacy technologies, anytime that there’s a movement or an analytic tracks a potential breach or something like that, it automatically starts triggering events and sends them to the alert office to say, “Hey, there’s an alarm, there’s an alarm!” That’s a big problem when there’s thousands and thousands of alarms coming in, because the AI or the analytic, the intelligent video, doesn’t understand how to decipher, “Hey, that’s just an animal walking by. It’s not a perpetrator coming up to the fence.”

So with our sensors we’re able to provide 98% detection, tracking, and classification accuracy in 3D spaces. When we marry up with other technologies—such as a PTZ camera—where a camera may be focused on one specific zone, our 3D LiDAR sensor sees the whole space. When we detect an object, we tell the camera, “Hey, camera: move over here and keep tracking this object in this space.” Once again, we’re centimeter-level accurate. We’re able to slew-to-cue cameras and provide that intelligence to security operations.
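
The slew-to-cue handoff Becker describes reduces to converting a LiDAR track’s 3D position into pan and tilt angles for the camera. A simplified geometric sketch, assuming the camera sits at the origin of the track’s coordinate frame (real systems also calibrate mounting offsets and zoom):

```python
# Simplified slew-to-cue sketch: aim a PTZ camera at a LiDAR-tracked object.
import math

def slew_to_cue(x: float, y: float, z: float) -> tuple[float, float]:
    """Return (pan, tilt) in degrees for a target at (x, y, z) meters."""
    pan = math.degrees(math.atan2(y, x))                  # horizontal bearing
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))  # elevation angle
    return pan, tilt

# Object tracked 40 m out, 10 m to the left, 2 m below camera height
pan, tilt = slew_to_cue(40.0, -10.0, -2.0)
print(f"pan={pan:.1f} deg, tilt={tilt:.1f} deg")  # command sent to the camera
```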

From the flow-management side, which is more on the business-intelligence side, we’re able to provide a deeper understanding of what’s going on within spaces such as retail. We can understand where consumers are going through their journey, what path they’re taking, what products they’re touching, the queue lines—how long are they? And we do it much more easily than you could with traditional approaches, like camera-based or stereoscopic systems. We’re able to eliminate up to seven cameras with one sensor of ours and still give you quite a bit of coverage in that space—once again, in 3D.

So instead of sticking a camera here, here, here and stitching them all together, you put one LiDAR sensor that gives you full 360, and you’re able to see that whole space and how people interact in it—whether it’s at a theme park, to understand what a person is doing in line, or at a museum, to see how they’re interacting with a digital exhibit. We’re able to provide so many cool outcomes that you’ve just never been able to get with 2D-sensing technology. So when you ask me what is new and what you can do with 3D: we’ve barely started tapping into the capabilities.

Christina Cardoza: Yeah, and everything that you’re saying, obviously that depth of dimension and that 360 view—that is something that’s going to benefit businesses and that you really want to strive for to get the complete picture. But, like you said, it’s not something that we’ve seen businesses really be utilizing until now.

So, what’s been happening in this space? Has there been any recent advancements or innovations that make this a little bit more accessible to these types of businesses?

Gerald Becker: I think the biggest wall to adoption was the ecosystem of technology integrations, right? So as I stated, a lot of these companies have predominantly been going after automotive—the holy grail—and that’s typically been to OEMs—people that take the sensor, develop custom integrations, and stick it into the hood or fender of a vehicle.

Now, that’s not what we’ve done. We’ve pivoted, and we’ve gone after a different market, where we’ve aligned with the who’s who of physical security: integration-management platforms, video management software solutions, cameras, business intelligence, and physical-access control systems. They’ve integrated our sensors into their platforms to provide all these event-to-action workflows, all these different outcomes that have just not been available in the past, right?

So this is opening up a whole new level of understanding and all-new capabilities to solve old problems, but even more so, new problems. What we’ve seen, now that we’ve got the integrations with all these tier-one partners in these spaces, is that end customers and end users can now explore how to solve old problems in different ways and get higher levels of accuracy than they’ve ever been able to before.

Christina Cardoza: Now, you’ve mentioned a lot of different technologies, and these companies who have been doing 2D sensing with their cameras and their other sensors, how can they now leverage the existing infrastructure that they have and add 3D LiDAR on top of it or work with Quanergy with their existing infrastructure? Or does it take a little bit more investment in hardware and tooling to be able to integrate some of these and get the benefits?

Gerald Becker: Yes and no. As I stated in the previous question, we do have a large ecosystem of technology partners that we’ve integrated with. I would say that nine times out of ten, off the shelf, we can integrate with a lot of the stuff that’s already out there. But we’re very fluid in how we work with partners. You can integrate with us directly through your camera, through a VMS platform, through our open API, or through third-party GPIO boxes—basically nothing more than an Ethernet box that we can push a command to directly to activate a siren or an alarm or whatever it may be.
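
None of Quanergy’s actual interfaces appear below, but the event-to-action pattern Becker describes can be sketched generically: receive a detection event, apply a rule, and trigger a downstream action such as a relay or a VMS bookmark. All field names and the relay URL are hypothetical:

```python
# Generic event-to-action sketch; payload fields and endpoints are invented.
import json

def handle_detection_event(event: dict) -> None:
    # A VMS or GPIO integration would subscribe to events like this one
    if event["class"] == "person" and event["confidence"] > 0.9:
        # A real deployment might hit a network relay here, e.g.:
        # urllib.request.urlopen("http://10.0.0.50/relay/1/on", timeout=2)
        print(f"ALERT: {event['class']} detected in zone {event['zone']}")

handle_detection_event(json.loads(
    '{"zone": "perimeter-east", "class": "person", "confidence": 0.98}'
))
```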

So the other side of it too is that we’re not trying to go completely greenfield, right? I’m not trying to discount a lot of the technologies out there, but I will say a layered approach is probably your best route, because there’s not a single technology in the world that can solve all use cases. If someone sells you on that, please turn around and run, because it just can’t be done. But when you put best-of-breed solutions together in your ecosystem or your deployment, you’re going to get the best outcome from every sensor.

Case in point with cameras. We don’t see like cameras do; we don’t capture any personally identifiable information. When I explain what LiDAR sees, I always revert back to my favorite movie of all time, The Matrix. Remember when Neo saw the ones and zeros dropping from the sky when he saw Agent Smith down the hall? So that’s how we see. We don’t see like cameras do, where I could tell, Christina, you have glasses and a white blouse on, or that I have a black polo shirt on. We can’t see that. To us everything looks like a 3D silhouette with depth and volume in 360.

Now, that’s where we then partner with 2D-imaging technologies, such as cameras from your Boschs, your Axis, your FLIRs, your Hanwhas—big companies that help us see. When we do need to identify—hey, there’s a bad actor with a black polo that’s potentially going to break through this fence—that camera helps us decipher that. But when you need to actually detect, track, and classify—when you marry those technologies—that’s when you open up new outcomes that you can’t get with just a camera.

So for instance, when you use, let’s say, the traditional pan-tilt-zoom auto tracking that’s embedded on a camera, it’ll put a bounding box around the person and track that person in the scene. The issue with traditional 2D technology and embedded auto tracking is that when that person goes behind an object or into another area, the camera doesn’t know what’s happening; it doesn’t see what’s going on in that environment.

But if you have enough of our lasers shooting throughout the space and we’re seeing up and down aisles, halls, parking spaces—whatever that obstruction may be—we’re able to accurately detect the object, and we tell the camera, “Hey camera, stay focused on this wall, because we know the person is behind the wall.” Then when the person comes from behind the wall and into the view of the camera, we’re still telling the camera keep tracking that person: that’s Mr. Bad Guy. So we go from wall to guy with a black shirt on, and we’re tracking him all throughout.

That’s the beautiful thing about the solution too: we provide a mesh architecture, right? So, unlike having to stitch multiple technologies together and track from scene to scene to scene, if you have enough LiDARs in a space, as long as the lasers overlap with one another, it creates this massive digital twin. So you could literally zoom in and pan around and track all throughout: up and down corridors, up and down hallways, the other sides of walls, around a tree, around whatever it may be. That’s the power of our mesh architecture: it gives you a flexibility that you’ve just never had with other technologies.

Christina Cardoza: I love this whole idea of partnering with other organizations and experts in this space and being able to get the best outcomes from each sensor and utilize all this technology. But how do you make sure that, now you have this all together, it’s not information overload? That you’re getting data that makes sense and that you can make actions on and do decisions?

Gerald Becker: We’re working with a global data-center company who came to us with a very specific problem. They told us that within a 33-week period of testing at one of their sites, they generated over 178,000 alarms. Now, this is by definition needle-in-the-haystack when I tell you that only two of those were real alarms. So, when you think of the operation to acknowledge an alarm within a security practice, it’s like: click, next, review. That isn’t it? Delete. Click, next, review, delete. Try doing that 178,000 times to find the one time when that disgruntled employee, fired for smoking or doing something he shouldn’t have been doing at the property, comes back with a USB drive, plugs into the network, and takes down a billion-dollar organization, right?

They knew they had a problem. So, in that respect, they tested everything under the sun: AI, radar, fence-detection technology, underground cable, everything. They finally landed on our solution. They did a shootout, testing one of their best sites against our site over the same timeframe. Their best site came up with 22,000 alarms; our site generated five actual alarms. And—again, I get goosebumps when I tell you this—they told us that saved them 3,600 hours of pointless investigation work that they can reallocate to other capital and operational expenses. “We’re buying more solutions from you guys”—more LiDARs from us, right? There’s just so much that they’re able to see.

Now, the idea is that we dramatically decrease the operational load of those legacy technologies and make operators aware only of what’s important to them, right? So that was a key value proposition there. But even more so, tying into all those other technologies made it more effective. When we did track those five alarms, we actually cued the camera to decipher: is that a good guy or a bad guy? Is that a real alarm? Absolutely. So we’re able to decrease the operational expense of someone having to click, next, review; click, next, review thousands and thousands of times, so they only work on what’s important. There are so many different positive outcomes that I could go on and on.

Christina Cardoza: That’s great. And I’m sure when you’re looking at hundreds of thousands of different reviews, you can make mistakes. You’re probably just going through the motions, and something could be happening that you’re just like: all right, click next, click next; I just want to get through all of these alarms and alerts. So that’s great that you guys are able to pinpoint exactly what’s happening.

You’ve talked a lot about infrastructure surveillance, and you talked about the customer behavior within shopping aisles and things like that. I’m curious if you could provide us with—if you have any more customer use cases or examples of how you’ve helped somebody: what the problem that business was having, and what the result was as the result of using the 3D LiDAR and working with Quanergy?

Gerald Becker: We deal with various markets. In fact, one of our bigger markets is the flow management, smart space, and smart city market. We just did a webinar with one of our customers, YVR—Vancouver International Airport—where they talked about their application of LiDAR, and how LiDAR was able to give them the accuracy levels they needed to better manage the guest journey—that curb-to-gate experience across airside and landside operations. But even more so, it’s how to get the flow of people in, through, and out to their final destination.

There’s a lot of bottlenecks, a lot of choke points as you get dropped off by your family, by taxi, or by Uber; as you go to check in, get your ticket; as you go through CATSA or TSA to go through security. Then finally as you go to duty-free or a restaurant to get your food. Then finally when you get to the boarding gates, right? There’s a lot of areas where there’s choke points that create friction as far as the experience and the journey that one takes throughout that environment.

Now, as I mentioned earlier, I don’t want to talk down on other sensing technologies, but let’s just say in this environment we were able to replace up to seven cameras in that environment with one LiDAR sensor. And unlike cameras in that space that had to be overhead looking straight down, giving them a limited field of view, we gave them so much coverage. One of our long-range sensors alone could do 140 meters in diameter of continuous detection, tracking, and classification. That’s equivalent to about three US football fields side by side, right? So that’s quite a bit of coverage you can do.

Now, when you look at the TCO advantage that we provide the airports, the data centers, the theme parks, the casinos, the ports—the list goes on and on—we dramatically decrease the overall cost of the deployment. When you look at it at a high level—I always use an analogy I heard when I was very young from more senior sales guys—it’s the whole iceberg theory, right? You can’t look at just the tip of the iceberg and compare sensor to sensor on cost. A camera may be only a few hundred dollars while a LiDAR may be a few thousand, plus software, et cetera, et cetera.

But the underlying cost is beneath the iceberg, right? What’s it going to take to install these seven to eight devices on this side versus one device? You look at labor; you look at the cost of conduit, cable, licensing, the maintenance that’s required to deploy that. So that’s when it becomes really cost effective, when you understand the complexity of installing legacy technology versus new technology in that area. Hence why Vancouver decided to start deploying. They’ve got over 28 sensors in one terminal; they’re expanding to other terminals now. So there’s quite a bit of growth there that we’re doing with that airport, but we’ve got over 22 international airports that we’re currently deployed in.

Now here’s another interesting one as well. So, here in the States, in Florida, there are a lot of drawbridges that go up and down all day. And they’re susceptible to liability issues where people or vehicles may fall into the waterways, and unfortunately there have been fatalities, which is a horrible thing. So they did initial tests with our LiDAR solutions, using LiDAR on both sides of the bridges to basically track if an object comes into the scene—in this case a person or a vehicle. And if that person or vehicle comes into the scene, hold the bridge from going up and notify the bridge tender in the kiosk: “Do not let the bridge up.” Which ultimately would bring down the liability concerns that they had in that area.

Now, having come out of that POC with confidence and very high success, they’re deploying these across several bridges in Florida. So when you look up at a drawbridge now in Florida, you’ll see our sensors deployed. That’s helping bring down the liability concerns and the potential for fatalities, or, God forbid, a vehicle falling into the waterway, which could happen quite a bit.

Christina Cardoza: Yeah, and I’m sure that not only benefits the operator who’s operating those drawbridges, but also the comfort of the people driving over those bridges. My husband absolutely hates driving over bridges; that’s one of his biggest fears. So I’ll have to let him know next time we’re in Florida that he has nothing to worry about, that there’s 3D LiDAR, and explain all that. I’ll have him listen to this podcast on the—

Gerald Becker: For sure, for sure.

Christina Cardoza: —drive over there. But I’m curious, because you mentioned this whole ecosystem of partners that you’re able to work with to do all of this stuff. So when you’re talking about some of these examples—and I should mention that insight.tech Talk and insight.tech as a whole are sponsored by Intel—I’m curious how partnerships, especially the Intel partnership and Intel technology, help you be successful in these use cases and in these customer examples?

Gerald Becker: Spot on. So let me start off by saying this: unlike the herd of LiDAR companies that are heavily focused on GPU processing and have a ton of data they need to process, we’re a little bit different. Our sensors are purpose-built for flow management and security applications. They don’t need to go into a fender of a vehicle and shoot tons of lasers all over the place and push a ton of data through the pipe as far as throughput requirements for the sensor. Because our sensors are purpose-built, we have the best angular resolution as far as capturing objects within the space. But ultimately we have a CPU-based architecture, which means it’s more cost effective and highly scalable; and even more so, as we align with Intel we provide the best-of-breed solution out there for cost, accuracy, and deployment capabilities in the space.

That’s where we stand apart from a lot of the other Tom, Dick, and Harrys in LiDAR: it really is a solution you can take off the shelf now and deploy. There is no custom integration you’re going to need to do for six months to a year to get it where you need it. As I explained earlier, there are four ways to work with us: at the camera level, at the VMS level, at our API, or through a third-party GPI or Ethernet box.

And then with our partnership with Intel we come to find new use cases on a daily basis—I just finished a call with the retail team literally 30 minutes ago where we were exploring brick-and-mortar and warehouse automation and stuff like that, where we could provide 3D sensing beyond the traditional way of looking at those types of spaces with other sensors. So there’s so much to unfold there, but even more so, the partnership with Intel makes it valuable for us as we continue to scale and grow in this space.

Christina Cardoza: That’s really exciting, especially with all these different industries you’ve been talking about. We’ve been writing on insight.tech a lot about how these industries are using computer vision, AI, and other automated technologies to improve their operations, efficiencies, and workflows, but I’m excited to see how 3D LiDAR is going to come into the fold and how it’s going to transform these industries even further.

So I’m curious, since we talked in the beginning about how we’ve really only hit the beginning of the use cases, or where we could go with this: How do you anticipate this space evolving? Are there any emerging trends or technologies that you see coming out that you’re excited about?

Gerald Becker: There are quite a few use cases already that we’ve tapped, but there’s so much more that’s still yet to be explored, right? So at the very beginning I talked a little bit about orchestration, and how we’re able to marry multiple sensors to create different outcomes. That’s going to continue to grow and expand with additional sensor integrations. So we integrate with license plate recognition: if there’s a hit, boom, we can then continue to track within a parking lot.

But then there’s the advent of AI: what’s going on with large language models and all the other stuff that’s coming out. And then cloud, right? So there’s just so much there that just hasn’t been touched. From the AI side there’s a ton of work being done right now in computer vision: understanding much more about what’s being captured within the scene to understand more generalities that can create different outcomes and tell a different story that ultimately gets you to the end result. Is it a good guy? Is it a bad guy? Is it a good workflow or is it not?

I think that there’s so much more that can be done with LiDAR as we marry it with other AI technologies that will provide these additional outcomes that are just not being done yet. So we’re still in very early stages, I would say, for LiDAR in the AI arena; but as it pertains to a lot of the physical-security applications and BI stuff like that, it’s already been proven and deployed globally with quite a few different customers around the world. So, definitely excited about that, but there’s just so much more to peel back as far as what we do with cloud, with AI. That’s really just a massive opportunity in this space.

Christina Cardoza: Yeah, I’m excited to see where else this goes, and I encourage all of our listeners to follow along as Quanergy leads this space, to see what else you guys come up with and how else you’re transforming our industries.

Before we go, Gerald, is there anything else that you wanted to add? Any final thoughts or key takeaways you wanted to leave our listeners with?

Gerald Becker: I’ve always been kind of the guy who adopts new platforms only once I hear from other people; I’ll be the last one to create a new social media account, and I’ll wait to see what everyone thinks and stuff like that. Similarly, with LiDAR, some people may be a little nervous adopting new technology, even more so going with something out of their comfort zone. But I think now more than at any other time is the time to start testing.

We’re past that early phase, the kick-the-tires phase. There are so many deployments, so many reference accounts, so many people who are now talking about the value: how this has improved their workflows, provided additional value, decreased their false alarms, and improved their operational effectiveness.

I think now more than ever is the time to act and start testing, start asking the questions: What can LiDAR do for me that I haven’t been able to do before? How can I use LiDAR in my current operations or my current deployments in ways I’ve just never been able to with these other technologies? And look at your existing use cases or your existing business cases and ask: If I had depth, if I had volume, if I had centimeter-level accuracy, how could that improve my day-to-day workflow, my job, and provide more value to the organization as a whole?

So I would say, if that’s where you’re at now, reach out to me. You can find me on LinkedIn, Gerald Becker, or reach out to me directly by email, gerald.becker@quanergy.com. I’d love to have a chat with you; even if it’s a 10- or 15-minute conversation, I’m sure it will lead to a lot more fruitful discussion after that.

Christina Cardoza: Yeah, absolutely. And we’ll make sure to link out to your LinkedIn and the company’s accounts, so that if anybody listening wants to get in touch or learn more about this 3D LiDAR space, we’ll make it easy for you to access.

So, just want to thank you again, Gerald, for joining us today. And thank you to our listeners. Until next time, this has been the “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Data Observability Keeps Fintech Operations Up to Speed

In a world that runs on data, ensuring it reaches the right recipient at the right time is key for businesses of all sizes. To gain optimal network performance and business function, enterprises need to observe, identify, and monitor the health of every flow traversing their infrastructure.

“Data observability is about ensuring the health, reliability, and quality of your data systems, including data pipelines, databases, and data lakes. It involves monitoring data quality, lineage, performance, and usage to proactively identify and resolve issues, so that the data your business relies on is accurate and trustworthy,” says Matt Dangerfield, Chief Technical Officer at Telesoft Technologies Ltd., a global provider of fintech, cybersecurity, and government infrastructure solutions.

Data observability is particularly critical in the world of high-speed financial trading, where terabytes of data continuously flow through technology stacks. Even a single missed data packet could jeopardize a deal worth millions.

“In financial institutions, we’re offering complete data observability to improve end-customer experience, identify network issues, and ensure regulatory compliance and governance. Our offerings seamlessly integrate with existing infrastructure to provide a comprehensive, orchestrated solution,” says Jenna Smith, Head of Product Management at Telesoft.

Solving Fintech Observability Challenges

One of the most significant challenges in fintech is ensuring that data keeps pace with the speed of business.

“A lot of hedge funds and high-frequency algorithmic trading rely on making decisions in nanoseconds,” Dangerfield explains. “The immense volume of data generated by market participants only adds to the challenge. With petabytes of data moving within a 24-hour period, it’s crucial to not only process this data quickly but also extract actionable insights using the right technology.”

Telesoft provides that “right technology”—a comprehensive suite for complete data observability.

To gather network metrics, Telesoft deploys flow probes, which ingest, analyze, and timestamp every packet on the wire, extracting network telemetry about the flow data—including sender, receiver, data volume, and any potential issues, such as dropped packets or delays. The technology monitors and alerts on the detection of microbursts, sudden spikes in network traffic that can overwhelm routers, causing bottlenecks. For fintech entities distributing market data, the probe monitors the sequence of critical data packets, identifying gaps that indicate missing packets. “Every client must receive every packet; missing even one could mean missing out on critical trades,” says Smith.
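
To make the mechanics concrete, here is a minimal sketch of the two checks described above, sequence-gap detection and microburst detection, written against a hypothetical per-packet record format; it illustrates the concept and is not Telesoft’s implementation:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class PacketRecord:
    """Hypothetical telemetry a flow probe might emit per packet."""
    timestamp_ns: int  # wire timestamp in nanoseconds
    flow_id: str       # sender/receiver/port tuple, pre-hashed
    seq: int           # market-data sequence number
    size_bytes: int

def find_sequence_gaps(records):
    """Report missing sequence numbers per flow (possible lost packets)."""
    gaps = defaultdict(list)
    last_seq = {}
    for r in sorted(records, key=lambda r: (r.flow_id, r.seq)):
        prev = last_seq.get(r.flow_id)
        if prev is not None and r.seq > prev + 1:
            gaps[r.flow_id].append((prev + 1, r.seq - 1))  # inclusive range of missing seqs
        last_seq[r.flow_id] = r.seq
    return dict(gaps)

def detect_microbursts(records, window_ns=1_000_000, threshold_bytes=1_250_000):
    """Flag 1 ms windows whose byte count exceeds a threshold.
    1,250,000 bytes in 1 ms corresponds to a fully saturated 10 Gbps link."""
    buckets = defaultdict(int)
    for r in records:
        buckets[r.timestamp_ns // window_ns] += r.size_bytes
    return [w for w, total in sorted(buckets.items()) if total > threshold_bytes]
```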

Telesoft offers a downstream packet capture device that performs full, unsampled recording of the network traffic, which enables customers to fulfill regulatory compliance requirements and provide evidence of fairness in price data delivery. Each data packet is timestamped to establish provenance and provide proof of dispatch. Such records are vital for resolving disputes. For instance, if two clients are disconnected, timestamped data can help financial institutions determine whether the fault lies with the broker, the exchange, or the client. The institutions value this automated data capture for evidence and reporting; it significantly reduces the time their analysts spend on investigations.
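
Telesoft’s capture device is purpose-built hardware; purely to illustrate the idea of timestamped, full-fidelity capture, the open-source Scapy library can record packets with their arrival times in a few lines (the interface name is a placeholder, and capturing requires administrator privileges):

```python
# Concept sketch only: record traffic with per-packet timestamps for later evidence.
from scapy.all import sniff, wrpcap

packets = sniff(iface="eth0", count=1000)       # capture 1,000 packets off the wire
for pkt in packets[:5]:
    print(f"{pkt.time:.9f}  {len(pkt)} bytes")  # pkt.time is the capture timestamp
wrpcap("evidence.pcap", packets)                # timestamps are preserved in the file
```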

#AI and #MachineLearning are key elements of the observability platform—automatically analyzing, predicting, and alerting on potential #network issues before they occur. @Telesoft_Tech via @insightdottech

To provide a comprehensive level of observability, Telesoft provides a data lake that stores data captured from probes deployed across the network, ingests additional network telemetry such as log files from core infrastructure, and enriches the data with additional context. Having such a data lake facilitates the final layer: AI and machine learning are key elements of the observability platform—automatically analyzing, predicting, and alerting on potential network issues before they occur.

The Telesoft platform runs on the latest Intel CPUs and uses the power of Intel FPGA technology to deliver exceedingly fast and dense solutions. The company’s PCIe interface cards are designed and manufactured in-house, giving it complete control over the core technology that underpins its products.

Sustainable computing is also a key priority for Telesoft. “We’re helping our customers reduce their data centers’ operational costs and power consumption by collapsing five racks’ worth of financial technology into a single rack through engineering,” Dangerfield says. Intel technology helps make this possible.

Use Cases for Data Observability: Capacity Planning and Customer Experience

Capacity planning is an important task for financial institutions, ensuring that network infrastructure can handle current and future trading volumes while maintaining optimal performance and minimizing downtime. Institutions must have confidence that trading surges during market events can be accommodated.

“Bandwidth utilization of each network link is monitored and baselined by our solution. Machine Learning and AI technology tracks this utilization over time and can perform predictive forecasting of expected future throughput requirements, alerting network administrators before the event occurs,” Smith explains. “If a link is becoming saturated with traffic, the addition of microbursts within that traffic can cause network infrastructure to become overwhelmed, buffers to overrun, and ultimately packets to be dropped. Dropped packets can equate to missed trade opportunities for the clients.”
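
As a simplified illustration of that kind of forecasting (not Telesoft’s actual model), a linear trend fitted to observed peak utilization can already flag a link that is on course to saturate; all figures below are invented:

```python
import numpy as np

def forecast_utilization(daily_peak_gbps, link_capacity_gbps,
                         horizon_days=30, alert_fraction=0.8):
    """Fit a linear trend to daily peak utilization and warn if the link is
    projected to cross a fraction of capacity within the horizon."""
    days = np.arange(len(daily_peak_gbps))
    slope, intercept = np.polyfit(days, daily_peak_gbps, 1)
    projected = intercept + slope * (len(daily_peak_gbps) + horizon_days)
    threshold = alert_fraction * link_capacity_gbps
    if projected >= threshold:
        return f"ALERT: projected {projected:.1f} Gbps exceeds {threshold:.1f} Gbps"
    return f"OK: projected {projected:.1f} Gbps"

# Invented example: a 10 Gbps link with steadily rising daily peaks
print(forecast_utilization([5.1, 5.4, 5.2, 5.8, 6.1, 6.3, 6.6], 10.0))
```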

Enabling a financial institution to predict, investigate, and remediate potential network issues before they start improves customer satisfaction and retention, attracts new customers, and drives business in a competitive market.

The Future of AI in Financial Services

Beyond enhancing data observability, Dangerfield is enthusiastic about the “raw power of knowledge” that AI and ML can bring to financial markets. Traditionally, hedging and market futures have been based on educated guesses—how factors like heatwaves and supply chain disruptions will impact prices. But AI and ML add a layer of intelligence, identifying patterns in data that lead to more accurate forecasts.

No matter what the future holds for AI in financial services, its foundation will be built on data observability. “Ensuring robust observability keeps the technology infrastructure running smoothly, which is exactly what high-stakes fintech markets demand,” says Smith.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Retail AI Unlocks the Power of Visual Data

Cameras are everywhere in shops, department stores, and other retail locations—offering retailers a potential gold mine of visual data to use in AI applications. By feeding camera data into computer vision-powered analytics systems, retail businesses can optimize operations, gain deeper insights into their customers, and make better strategic decisions.

“The use cases in retail are truly extensive,” says Pranita Palekar, CEO and Co-Founder at Aurify, an AI and video analytics systems specialist for retail and other sectors. “Computer vision can be used to create detailed profiles of customers, analyze in-store shopper behavior, manage employees, monitor shelves, prevent loss, and support smart digital signage.”

Of course, it can be challenging to analyze and operationalize massive amounts of raw video data, especially if real-time processing or extensive, multi-location deployments are required. But powerful edge devices and mature AI model deployment tools make it possible to get computer vision solutions into stores—and innovative retailers already take advantage of the opportunity.

In-Store Video Analytics Deliver Business Outcomes

Take, for example, two Aurify retail business implementations in India.

In one deployment, a leading shop-in-shop retailer with 150+ locations wanted to gain greater insight into its customers to improve marketing and sales efforts. Leadership believed that current business processes were inaccurate and inefficient due to reliance on staff members manually counting and observing shoppers—but they worried that a high-tech alternative might be cost-prohibitive.

In a second implementation, a nationwide fashion chain faced similar challenges on an even larger scale. The management team was concerned that it lacked centralized visibility into its network of 700+ stores, resulting in an inability to make timely, data-driven decisions about operational and sales strategies.

Aurify developed customized solutions for both companies based on its StoreScript AI video analytics platform for retail. Existing CCTV infrastructure was used to collect data, which was then analyzed to provide a clear picture of customer demographics and real-time foot traffic in stores. At the fashion retailer, point-of-sale (POS) monitoring and queue management were included to help streamline operations, manage inventory, and gain additional insights into customer buying behaviors. The newly available data led to a major shift in merchandising strategy, resulting in significant sales growth.

For both businesses, the result was a completely automated video analytics system that eliminated cumbersome manual processes and delivered the desired insights—all with minimal capital expenditure.

Other customers have also benefited by implementing use cases such as calculating dwell time compared to conversions, group counting, heat map generation, auto-tracking of operational hours, and tracking the number of customers in the store at any given time.
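
As a small illustration of one of these use cases, dwell time can be computed from the timestamped zone visits a people-tracking model produces; the record format below is hypothetical, not Aurify’s:

```python
from collections import defaultdict

def dwell_times(track_events):
    """track_events: (person_id, zone, timestamp_s) tuples from a people tracker.
    Returns seconds between each person's first and last sighting per zone."""
    spans = defaultdict(lambda: [float("inf"), float("-inf")])
    for person_id, zone, ts in track_events:
        first, last = spans[(person_id, zone)]
        spans[(person_id, zone)] = [min(first, ts), max(last, ts)]
    return {key: last - first for key, (first, last) in spans.items()}

events = [("p1", "aisle_3", 100.0), ("p1", "aisle_3", 145.0),
          ("p2", "aisle_3", 101.0), ("p2", "aisle_3", 103.0)]
print(dwell_times(events))  # {('p1', 'aisle_3'): 45.0, ('p2', 'aisle_3'): 2.0}
```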

In the cost-sensitive #retail sector, decision-makers are always on the lookout for solutions that can be implemented quickly and efficiently. @AurifySystems via @insightdottech

Flexible Tech Stack Means Retrofits, Not Rip-and-Replace

In the cost-sensitive retail sector, decision-makers are always on the lookout for solutions that can be implemented quickly and efficiently. For this reason, a flexible computing platform is key. The Aurify StoreScript solution is camera brand agnostic and can also use video data from different camera types like HD or IP. In this regard, the company’s technology partnership with Intel has been crucial.

“Our AI video analytics solutions are based on Intel processors, which deliver excellent performance and stability for computer vision workloads at the edge,” says Rishi Palekar, Managing Director and Co-Founder. “This allows us to use raw camera data from existing CCTV sources, minimizing hardware costs for our customers and speeding deployment.”

Aurify uses the Intel® OpenVINO toolkit extensively to optimize, customize, and deploy deep learning models from the edge to the cloud. This enables StoreScript to be adapted to multiple use cases—and support on-premises, cloud, or hybrid deployment models.
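
Aurify’s pipeline itself is proprietary, but the basic OpenVINO deployment flow it builds on looks roughly like this (the model file and input shape are placeholders for any person-detection model converted to OpenVINO IR format):

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("person-detection.xml")   # placeholder IR model
compiled = core.compile_model(model, "CPU")       # or "GPU" on supported hardware

frame = np.zeros((1, 3, 320, 544), dtype=np.float32)  # stand-in for a camera frame
detections = compiled([frame])[compiled.output(0)]    # run inference at the edge
print(detections.shape)  # raw detections feed counting, dwell-time, and queue logic
```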

The upshot is that retailers can implement and customize an in-store AI analytics solution to suit their unique needs—without having to make massive investments in new infrastructure.

Beyond Retail: AI Video Analytics in Diverse Sectors

The flexibility of retail AI platforms will no doubt make them attractive to numerous retail businesses. But it also means that these solutions can be adapted to other sectors as well.

Aurify is already developing video analytics solutions for a wide variety of industries. In manufacturing, they can be used to analyze video data from cameras trained on industrial equipment to detect abnormal vibrations and enable predictive maintenance. Building and construction firms can use AI video analytics to perform automated quality control, ensure that workers follow proper safety procedures, and detect hazards and accidents in real time. And smart cities can use computer vision at the edge for traffic management, public safety, and critical infrastructure monitoring.

“Advances in AI, on both the software and the processing side, make it possible for all kinds of industries to deploy robust, scalable computer vision solutions,” says Palekar. “This will unlock tangible benefits for enterprises in many sectors in the form of increased growth, reduced spending, and greater profitability.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Going Green: Occupant Count-Based Demand Control Ventilation

Imagine trying to plan a dinner party without knowing how many guests will arrive; it would be impossible to prepare properly. This is the daily challenge faced by facilities managers implementing demand control ventilation (DCV) without access to real-time occupant-count (RTO) data.

Demand control ventilation optimizes HVAC usage for energy efficiency based on a building’s current air quality and temperature needs. But older approaches to DCV usually rely on CO2 sensors—which lack the granularity of data needed for optimum efficiency.

“CO2-based systems can determine when a space is in ‘occupied mode,’ meaning that someone is in the room, but they don’t have access to real-time occupant count data,” explains David Whalley, co-founder and CEO of Feedback Solutions, an energy efficiency specialist that offers occupant count-based DCV solutions. “Because of this, they’re forced to default to a level of ventilation that approaches 100% of system capacity, even if there are only a handful of people present. For example, we have worked with clients with office areas built for 4,000 employees that at times see occupant levels dip below 300 people.”

This kind of over-ventilation is costly. But it also hampers greenhouse gas (GHG) reduction efforts at universities, government facilities, commercial office buildings, and other large-scale venues where sustainability is a high priority—and in many cases, a compliance requirement.

But now flexible edge computing platforms enable occupant count-based demand control ventilation solutions. Far more effective than older systems that use CO2 sensors, occupant count-based DCV is already achieving impressive real-world results. The increased energy efficiency comes from the reduced kWh consumed by the ventilation fans, along with thermal savings from heating or cooling a smaller volume of outside air.

Realizing Green Building Benefits

Case in point: the Feedback Solutions deployment with the New York State Energy Research and Development Authority (NYSERDA) at New York University (NYU).

Globally, building operations account for 28% of GHG emissions, but in New York, where extremes of temperature and older facilities are common, that number is even higher. Both the State of New York and NYU’s leadership were understandably concerned about energy efficiency on campus.

Far more effective than older systems that use CO2 sensors, occupant count-based DCV is achieving impressive real-world results. Feedback Solutions via @insightdottech

Working with NYU, Feedback Solutions engineers installed an occupant count-based DCV system in the College of Dentistry at the Washington Square campus in Manhattan. People-counting sensors monitored the exact occupant counts of lecture halls and other large rooms in real time, with the data processed at the edge using Feedback Solutions software running on an Intel device. The resulting information was then sent to the university’s building management system (BMS) via the Building Automation Control Network (BACnet) protocol so the level of ventilation could be adjusted automatically based on actual current demand.

This resulted in a significantly more efficient HVAC strategy compared to the previous solution, which required ventilation equipment to run at more than 80% capacity when the system was in occupied mode. The new occupant count-based DCV solution was able to maintain air quality and temperature set points in lightly used rooms, or rooms with fluctuating occupant levels, at an average of as little as 30-40% capacity, resulting in an overall GHG emission reduction of 18%. Feedback’s system enables ventilation rates to rise and fall with the population of an HVAC zone automatically, in the background, without manual intervention.
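
A back-of-the-envelope sketch shows why occupant counts drive such large savings. Using the ASHRAE 62.1 form for breathing-zone outdoor air, Vbz = Rp × Pz + Ra × Az (the office-default rates below are illustrative; actual rates vary by space type and local code, and this is not Feedback Solutions’ control logic):

```python
def required_outdoor_air_cfm(occupants, area_ft2,
                             cfm_per_person=5.0, cfm_per_ft2=0.06):
    """Breathing-zone outdoor air: Vbz = Rp*Pz + Ra*Az (ASHRAE 62.1 office defaults)."""
    return cfm_per_person * occupants + cfm_per_ft2 * area_ft2

def fan_fraction(occupants, area_ft2, design_capacity_cfm):
    """Fraction of design airflow needed right now; a DCV system would push this
    setpoint to the BMS (the BACnet write itself is omitted here)."""
    need = required_outdoor_air_cfm(occupants, area_ft2)
    return max(0.0, min(1.0, need / design_capacity_cfm))

# A 10,000 sq ft zone designed for 400 occupants, currently holding 60 people
design = required_outdoor_air_cfm(400, 10_000)    # 2,600 cfm at full design
print(f"{fan_fraction(60, 10_000, design):.0%}")  # ~35% of design airflow
```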

The university was so pleased with the results that it rolled out the technology to 15 other buildings on campus—a decision made even easier by the financial incentives the new solution delivered. Beyond direct OPEX savings, NYU also offset penalties related to New York Local Law 97, a municipal sustainable-building mandate. In addition, the university was able to shorten the payback period on its investment by taking advantage of incentive programs implemented by local power companies.

“Local utility providers in New York and many other sustainability-focused regions offer some extremely generous green-building incentives,” says Whalley. “Those incentives can cut an already fast three-year payback in half.”

Future-Proofing Sustainable Buildings

The ability to convert aging infrastructure into something greener attracts large organizations and governments around the world. It’s also of great interest to utility providers that need to find ways to reduce the load from existing building stock to enable electrification of new construction.

But a major stumbling block when undertaking retrofits is that each facility will have its own existing BMS solution—as well as its unique needs and concerns. To address this challenge, solutions providers turn to flexible designs that can be adapted to different kinds of buildings.

Feedback Solutions, for example, offers a sensor- and BMS-agnostic software platform. If an end user has unique sensor requirements, or runs multiple BMS solutions in its IT environment, it’s still straightforward to help the building operator and energy team implement a DCV system that will suit their needs (Figure 1).

Chart of Feedback Solutions’ demand control ventilation architecture
Figure 1. A flexible DCV solution architecture that is sensor- and BMS-agnostic, designed to optimize energy consumption while enhancing occupant comfort. (Source: Feedback Solutions)

The company’s technology partnership with Intel has played an integral role in developing such a versatile DCV platform.

“The Intel edge device we use is powerful and highly flexible,” says Whalley. “It allows us to offer many different configurations for our end users, from architectures that send all usage data to the cloud to solutions that are entirely on-premises.”

Creating a Holistic Data Analytics Strategy

Occupant count-based DCV systems are crucial in their own right. But access to real-time occupant data from buildings has far-reaching implications, making these solutions part of a much larger story.

When organizations know how their spaces are being used, tremendous value can be unlocked. Offices and universities can rationalize their post-Covid real estate footprints. Facility management teams can schedule more effectively, reducing operating hours at underused buildings and allocating cleaning and maintenance staff more logically. In the long term, it’s possible to make data-driven decisions about repurposing or consolidating buildings based on actual usage patterns.

This isn’t just a case of adding value in disparate areas. When building occupancy data is treated as a common fabric, it enables an integrated approach to solving some of the toughest problems of the coming decades.

“By breaking down data silos, it’s going to be possible to implement far more sophisticated optimization strategies,” says Whalley. “We see ourselves as part of a future in which the world meets its energy efficiency, space utilization, and sustainability challenges through holistic solutions.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Bosch Digital Twin Industries: Advancing Industrial AI

Manufacturers, energy companies, and other enterprises dependent on heavy equipment do everything they can to keep their expensive machines up and running. Many would like to use IoT and predictive AI analytics to ward off trouble before it leads to more serious problems.

But applying AI analytics to industrial equipment is not easy to do. Large businesses have thousands of sensors in machines operating in plants across the globe, all of them rapidly generating performance data in a dozen different formats. Just collecting this information can be a nightmare, and it is often filled with errors, omissions, and inconsistencies. Predictive-analytics models must have reliable data to produce good results. If the data is wrong, incomplete, or too slow to arrive, the models may fail—leading to costly breakdowns.

Modern digital-twin solutions can overcome these problems, quickly cleaning and validating machine data before subjecting it to AI analysis. Digital twins can provide companies an accurate dashboard replica of machine operations everywhere—and send alerts that help solve problems before they get out of hand.

#DigitalTwins can provide companies an accurate dashboard replica of #machine operations everywhere—and send alerts that help solve problems before they get out of hand. @prescientPDI via @insightdottech

Harnessing Machine Data for AI Predictive Maintenance

An industrial digital twin requires several technologies to function together like clockwork. To help one of its manufacturing customers predict machine behavior, German engineering technology company Bosch GmbH began working on a digital twin solution, using its expertise in industrial machinery to create AI algorithms that can spot significant deviations in pressure, temperature, vibration, and other important metrics.
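
Bosch’s models are proprietary, but a rolling z-score gives a feel for what deviation-spotting on a metric like vibration or pressure involves: flag any reading that falls far outside the recent baseline (the window size and threshold below are arbitrary choices):

```python
import statistics

def flag_deviations(samples, window=50, z_threshold=4.0):
    """Flag readings that deviate sharply from the trailing baseline.
    A simple stand-in for the deviation detection described above."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        std = statistics.stdev(baseline)
        if std > 0 and abs(samples[i] - mean) / std > z_threshold:
            alerts.append((i, samples[i]))  # (sample index, anomalous value)
    return alerts
```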

But Bosch Digital Twin Industries realized it needed help to develop another crucial part of the solution—corralling the company’s vast array of machine sensor data and preparing it for AI use. A customer recommendation led Bosch to Prescient Devices, Inc., a firm that specializes in data engineering and IoT solutions.

“AI is critically dependent on data quality—if it’s bad, the AI outcome is going to be bad,” says Prescient Devices CEO Andy Wang.

Industrial data is notoriously challenging to manage, in part because the sheer number of machines and sensors provides more opportunities for error. Sensors can become disconnected or turn off unexpectedly, or a network can go down, creating information gaps that give AI algorithms a false picture of operations. And faulty sensors sometimes send duplicate data.

“You have to correct for these problems for the data quality to be high,” Wang says. “And the corrections must be accomplished quickly, on large data sets that follow different protocols and are transmitted at high speed. Our platform supports a very high-speed data rate. We were able to collect the customer’s high-speed sensor data, clean it, format it, and deliver it to Bosch in time to meet their time-to-market goals.”
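
Two of the error classes Wang describes, duplicate readings and silent gaps, can be illustrated in a few lines of Python; the reading format is hypothetical, and this is not Prescient’s platform:

```python
def clean_readings(readings, expected_interval_s=1.0):
    """readings: (sensor_id, timestamp_s, value) tuples.
    Drops exact duplicates and reports spans where a sensor went silent."""
    deduped = sorted(set(readings), key=lambda r: (r[0], r[1]))
    gaps, prev = [], {}
    for sensor_id, ts, _ in deduped:
        if sensor_id in prev and ts - prev[sensor_id] > 2 * expected_interval_s:
            gaps.append((sensor_id, prev[sensor_id], ts))  # silent span to investigate
        prev[sensor_id] = ts
    return deduped, gaps

readings = [("vib-01", 0.0, 0.12), ("vib-01", 0.0, 0.12),  # exact duplicate
            ("vib-01", 1.0, 0.13), ("vib-01", 5.0, 0.55)]  # 4-second gap
cleaned, gaps = clean_readings(readings)
print(len(cleaned), gaps)  # 3 [('vib-01', 1.0, 5.0)]
```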

The two companies continued honing the solution, which was integrated into the Bosch Digital Twin IAPM, or integrated asset performance management system. It is now used by companies in many industries to monitor machines made by Bosch and other manufacturers. Access to timely, accurate machine data enables industrial businesses to stop potential problems before they happen.

“The data may tell you there’s a small machine component that’s getting old and not working properly. You could solve the problem by replacing it for $1,000,” Wang says. But without such advance knowledge, the degrading part could lead to a cascading set of failures.

For example, if the component burns through, it can damage the next component, which can damage a bigger component. Eventually the engine can get damaged. When a million-dollar machine breaks down, it can cost thousands or even millions of dollars to fix.

Worse yet, a defective machine can cause the entire production line to shut down, costing factories enormous amounts of time and money. “If machines go down unexpectedly, they can take multiple days to fix. With predictive AI analytics, managers can fix them during preplanned maintenance windows, so the production line would never go down,” Wang says.

Implementing Digital Twins for Manufacturing

Companies can obtain the Bosch Digital Twin IAPM by purchasing a starter kit containing sensors, an on-premises industrial PC, and a sensor master to transfer the sensor data to the computer for processing before it is sent to the Bosch cloud for AI analysis.

Prescient’s software is installed on the Intel-powered computer to automatically recognize different sensor types and quickly clean and validate their incoming data. Intel is known for its reliable and long-lifetime processors—a key value for businesses with equipment in remote areas.

“For example, one of Bosch’s customers is an oil and gas pipeline company with computers deployed in locations that are difficult to access. Technicians have to apply for permission to enter and schedule appointments weeks in advance,” Wang says.

The Digital Twin IAPM also allows companies to reduce the amount of data they send to the cloud, transferring only the kinds of information they deem useful. That eases cloud data ingestion problems and saves money.

For companies that prefer not to use the cloud, a newer version of the solution—the Bosch IAPM Digital Twin in-a-box—is like having a data center at the edge. It runs the Bosch AI model on-premises, using a high-performance computer that contains both Intel CPU and GPU processors for advanced AI analytics.

“Many companies do not want to ship their data to the cloud for security and privacy reasons, and running AI directly at the plant is also less expensive. This solution is gaining a lot of traction from customers across the globe,” Wang says.

Prescient’s software can also save money—and time—for builders of AI-enabled machines. “The majority of data scientists spend only about 20% of their time building and working with AI models. They spend the other 80% preparing data to go into the models,” Wang says. “We have the technology to prepare data very quickly, speeding their production of AI solutions.”

Improving Operations with Industrial AI

Whether they operate in the cloud or at the plant, industrial digital twins create an indelible record of machine performance. By analyzing this information, companies can adjust machine settings to changing conditions and make other tweaks to optimize their processes. Historical data can also help them predict spending on equipment and repairs, and make informed decisions about vendors and service providers. These capabilities can give companies an important competitive advantage, Wang believes.

“I predict that in the near future, every company that has large, expensive physical assets will be using a digital twin solution,” says Wang.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.