No-Code AI Platform Drives Mining Safety

Preventing incidents in a mining environment involves many moving parts. First there’s the sensory overload: Drilling and ore-hauling operations are loud, and underground operations are often dimly lit. Different kinds of vehicles move around at varying speeds, there are no traffic lights, and it’s difficult to gain a comprehensive view of the surroundings. Coupled with long hours on constantly changing shifts, these conditions are ripe for worker safety to be compromised.

Fortunately, the mining industry can address the challenge in hazardous environments, whether above or below ground, with computer vision AI solutions. Kelvin Aongola, CEO and Founder of LabelFuse, a no-code platform for machine learning and computer vision solutions, says incident prevention is a high priority in the mining industry, but there is no standardized way of approaching the challenge.

“Companies are looking for cost-effective ways to address the problem,” Aongola says. “Our Advanced Driver Assistance System (ADAS)—designed for the mining and long-distance trucking sectors—essentially serves as an incident prevention platform that leverages existing CCTV cameras to capture an accurate picture of driver fatigue and working conditions on the ground.”

Computer Vision AI Detects Fatigue

In the mining use case, the environment is very loud, with large vehicles surrounded by smaller ones. “If you’re all the way on top as a driver, your view can be completely blocked,” Aongola says. Traditionally, accident prevention has relied on rudimentary measures aimed simply at keeping the driver awake.

The computer vision solution captures visual cues of tiredness—droopy eyes, blinking—that might be easy for humans to miss, and sends prompts to the driver. The program also places the driver in context, understanding what’s happening in the environment around the vehicle to better predict the possibility of an adverse outcome. “We also stream these activities to a control center so if the driver has ignored all alerts, then the control center can take charge,” Aongola says. The data can also help verify insurance claims in case of incidents.
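LabelFuse doesn’t publish its algorithm, but a common way to catch the visual cues described above is the eye aspect ratio (EAR): the ratio of vertical to horizontal distances between eye landmarks, which collapses when the eyes close. This minimal sketch assumes landmarks arrive from some upstream face-landmark detector; the coordinates and thresholds below are invented for illustration:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered as in the
    common 68-point facial landmark scheme."""
    d = math.dist
    # two vertical eyelid distances over the horizontal eye width
    return (d(eye[1], eye[5]) + d(eye[2], eye[4])) / (2.0 * d(eye[0], eye[3]))

class DrowsinessMonitor:
    """Fires an alert when the EAR stays below a threshold for several
    consecutive frames (thresholds here are illustrative)."""
    def __init__(self, ear_threshold=0.21, consec_frames=15):
        self.ear_threshold = ear_threshold
        self.consec_frames = consec_frames  # ~0.5 s at 30 fps
        self.closed_count = 0

    def update(self, ear):
        if ear < self.ear_threshold:
            self.closed_count += 1
        else:
            self.closed_count = 0  # eyes reopened; reset the streak
        return self.closed_count >= self.consec_frames

# Invented landmark sets: a wide-open eye and a nearly shut one.
open_eye = [(0, 2), (2, 3), (4, 3), (6, 2), (4, 1), (2, 1)]
closed_eye = [(0, 2), (2, 2.1), (4, 2.1), (6, 2), (4, 1.9), (2, 1.9)]

monitor = DrowsinessMonitor()
alerted = any(monitor.update(eye_aspect_ratio(closed_eye)) for _ in range(20))
```

In a live deployment, per-frame EAR values would stream in from video, and a triggered alert would prompt the driver before escalating to the control center.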

Given that the AI algorithms scan the human face for signs of fatigue and distraction, privacy concerns understandably surface. But LabelFuse follows data privacy legislation and does not store personal data in the cloud, where the chances of compromise might be higher, Aongola says. The company also stores only metadata, kept on-premises for no more than a few months.

While incident prevention is the current use case in the mining industry, the LabelFuse solution is equipped to lift a bigger load, Aongola says. The system can work well with ADAS and expand to autonomous driving use cases in the future. “There are possibilities to go beyond what we’re offering with the current setup,” Aongola says.

“While incident prevention is the current use case in the #mining industry, the LabelFuse solution is equipped to lift a bigger load” – Kelvin Aongola, LabelFuse via @insightdottech

The Desire for No-Code Solutions

Companies that don’t have the right AI expertise struggle with implementation. “If you see how computer vision is deployed, especially at the edge, most companies do a small proof of concept, but they are challenged to scale it up as a production-ready solution,” Aongola says. “They either struggle with fine-tuning their models or in figuring out how to use the right edge device to deploy their ideas.”

Enterprises that want to deploy AI-driven solutions are keen to work with no-code solutions so they can focus on their primary value proposition without becoming AI-first companies. No-code solutions democratize access to software because they enable even those without specialized programming skills to develop workable solutions for problems. Pre-built components and drag-and-drop functionality enable professionals to build capabilities without getting mired deep in programming fundamentals.

LabelFuse fills this need through its no-code platform that allows domain experts to simply log in and pick a model specific to a business’s operational needs.

The Intel Advantage for Edge Computing

LabelFuse relies on Intel technology for a number of reasons, including a reasonable cost. “When you’re speaking to your client, it’s easier to close that deal because the price point doesn’t require them to go through a complicated approval process; they can make a decision right then and there,” Aongola says.

Storing data in the cloud is challenging, so high-powered edge processing helps cut costs and latency. Powered by 13th Generation Intel® Core™ processors, the Intel® NUC delivers the compute performance needed. The device’s compact form factor and easy installation make it a great fit for vehicles with tight spaces. And the NUC can be placed in a ruggedized enclosure for mining’s harsh environments. The well-recognized brand name is another significant plus, Aongola says, as the “technology has been validated, you’re not using a no-name device to help solve a problem.”

Wider Adoption of Computer Vision AI

Although LabelFuse has found ready implementation of its incident prevention platform in mining, use cases extend beyond the sector. Any industry where worker attention might flag in busy environments, such as manufacturing, field services, or retail, can benefit from these computer vision AI solutions.

The way computer vision works is changing, Aongola says. People want solutions you can talk to, like ChatGPT equivalents for visual data. LabelFuse integrates such generative AI into edge offerings and already sees significant traction in that domain.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Bringing Digital Experiences to the Physical World

Physical workspaces—once defined by static layouts, fixed equipment, and limited adaptability—are undergoing a transformative shift. As remote work becomes more integrated, the concept of the workplace is expanding beyond physical office walls to embrace online modes, creating seamless digital experiences within hybrid environments.

This shift toward inclusivity and flexibility necessitates a reimagining of traditional workflows and communication channels. Leading this paradigm shift is Q-SYS, a division of QSC, a provider of advanced audio, video, and control systems. The company is pioneering the concept of “high-impact spaces,” engineered not just for their physical attributes but for their potential to enhance collaboration and productivity.

Christopher Jaynes, Senior Vice President of Software Technologies at Q-SYS, explains: “It’s all focused on the outcome of the space. Previously, we talked about spaces in terms of their physical dimensions—like a huddle room or a conference room. Today, what’s more significant is the intended impact of these collaborative spaces. High-impact spaces are designed with this goal in mind, aiming to transform how we interact and collaborate in our work environments.” (Video 1)

Video 1. Christopher Jaynes from Q-SYS explains the importance of high-impact spaces in collaborative and hybrid environments. (Source: insight.tech)

Redefining Hybrid Environments

Q-SYS has developed a sophisticated suite of technologies, including the Q-SYS VisionSuite, to create high-impact spaces that transform meeting rooms and collaborative spaces. This suite incorporates tools like template-based configurations, biometrics, and kinesthetic sensors to significantly improve user interaction and engagement within these spaces.

Leveraging the power of AI computer vision technology, the Q-SYS VisionSuite equips these high-impact spaces with advanced control systems capable of anticipating and adapting to the needs of participants. This adaptive technology provides personalized updates and interactions, tailored to the dynamics of each meeting.

“AI in these spaces includes computer vision, real-time audio processing, sophisticated control and actuation systems, and even kinematics and robotics,” says Jaynes.

Historically, such advanced interactions were deemed too complex and prohibitively expensive within the AV industry. Outfitting a space with these technologies could ratchet expenses up by as much as $500,000. Today, AI has upended the cost calculations. “With AI control systems and generative models, we have democratized these capabilities, significantly reducing costs and making sophisticated hybrid meeting environments accessible to a broader range of users,” says Jaynes.

Technology Powering Collaborative Spaces

Audio AI plays a starring role in high-impact, collaborative spaces. AI can not only identify speakers and automatically transcribe their dialogues but also adjust the room’s acoustics depending on the type of meeting.

A standout feature of Q-SYS is its multi-zone audio capability. This ensures that clear, crisp sound reaches every participant, regardless of whether they are in a physical or hybrid environment.

The system can also enhance the meeting’s dynamics to ensure that when a remote attendee speaks from a particular direction, the sound emanates from that same location within the room. This directional audio feature creates an immersive experience, mirroring the natural flow of a face-to-face meeting and focusing attention on the speaker.

“Today, what’s more significant is the intended impact of these collaborative spaces. High-impact spaces are designed with this goal in mind.” @QSYS_AVC via @insightdottech

Additionally, as the name implies, the VisionSuite leverages advanced computer vision. Here, it offers a multi-camera director experience, which automatically controls cameras and other sensory inputs to enrich the collaborative environment. This ensures that video distribution is handled intelligently, maintaining engagement by smoothly transitioning focus between speakers and presentations.

In a meeting space equipped with multiple cameras, the system uses proximity sensors to detect when a participant unmutes to speak. The cameras then automatically focus on the active speaker to enhance the clarity and impact of their contribution.
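A sketch of that director policy, assuming a simple mapping from seating zones to cameras (the mapping, names, and trigger events are invented for illustration; Q-SYS’s actual implementation is far richer):

```python
class AutoDirector:
    """Cuts to the camera covering whichever participant just unmuted,
    and falls back to a wide room shot when everyone is muted."""
    def __init__(self, zone_to_camera, wide_shot="room_wide"):
        self.zone_to_camera = zone_to_camera
        self.wide_shot = wide_shot
        self.active_camera = wide_shot

    def on_unmute(self, participant_zone):
        # Unknown zones (e.g., remote attendees) keep the wide shot.
        self.active_camera = self.zone_to_camera.get(participant_zone, self.wide_shot)
        return self.active_camera

    def on_all_muted(self):
        self.active_camera = self.wide_shot
        return self.active_camera

# Hypothetical two-camera room:
director = AutoDirector({"zone_a": "cam_1", "zone_b": "cam_2"})
director.on_unmute("zone_b")   # active camera is now cam_2
```

The same event-driven pattern extends naturally to the other cues described here, such as switching the ambient lights on mute-state changes.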

The system extends to intuitive visual cues as well. For instance, ambient room lights turn red when microphones are muted and switch to green when the microphones are active.

For added security and privacy, the cameras automatically turn away from the participants and face the walls whenever video is turned off. This ensures that privacy is maintained, reinforcing security without manual intervention.

Another element is room automation, which significantly enhances the functionality and adaptability of workspaces. AI systems can intelligently adjust lighting and temperature settings, allowing these spaces to effortlessly transform to accommodate everything from intimate brainstorming sessions to extensive presentations.

Room automation AI can even help workers manage busy schedules. “Imagine you were running late to a meeting,” suggests Jaynes. “The AI, already aware of your delay, would greet you at the door, inform you that the meeting has been in session for 10 minutes, and direct you to available seating. To further enhance your integration into the meeting, it would automatically send an email summary of what has occurred prior to your arrival, enabling you to quickly engage and contribute effectively.”

Standardized Hardware Drives Digital Experiences

To make all this possible, Q-SYS leverages the robust capabilities of Intel® processors. “Q-SYS is built on the power of Intel processing, which allows us to build flexible AV systems and leverage advanced AI algorithms,” explains Jaynes.

This strategic use of Intel processors circumvents the constraints of the specialized hardware associated with traditional AV equipment. The Q-SYS approach is heavily software-driven, allowing standardized hardware to flexibly adapt to a variety of functions—providing a longer hardware lifecycle.

“It’s exciting for us, for sure; it’s a great partnership. We align our roadmaps to ensure that we can deliver the right software updates on these platforms efficiently,” Jaynes adds.

As we move toward a future where collaborative spaces and hybrid environments are increasingly defined by their adaptability and responsiveness, Jaynes believes AI is poised to reshape the way we interact and communicate in professional settings. With solutions like Q-SYS, these interactions will be more inclusive, engaging, and effective—and, quite possibly, enjoyable.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Unlock Customer-Facing Edge AI with Workload Consolidation

The way consumers and businesses interact today has changed. “In the post-pandemic era, there is an emphasis on minimizing physical contact and streamlining customer service,” explains Jarry Chang, General Manager of Product Center at DFI, a global leader in embedded motherboards and industrial computers.

As a result, there has been growing demand for integration of edge AI applications in the retail space. For instance, AI-powered self-service kiosks and check-in solutions can help reduce physical interactions and wait times by allowing customers to complete transactions on their own. These solutions can also analyze customer behavior and preferences in real time, allowing retailers to offer personalized experiences that enhance customer satisfaction and loyalty while driving up sales.

“These requirements are driving a shift towards edge AI, where processing occurs closer to the data source, reducing latency and enhancing privacy,” says Chang. “This change is driven by the need for real-time decision-making and the growing volume of data generated at the edge.”

Spurring AI Evolution at the Edge

But the problem is that businesses often struggle to find the best approach to deploying edge AI applications around their existing infrastructure and processes.

While edge AI can dramatically reduce the load on networks and data centers, it also can create new burdens locally, where resources are already constrained. The question arises: How can edge AI be deployed without adding costs and complexity?

Workload consolidation is one way these challenges can be addressed—by enabling a single hardware platform to incorporate AI alongside other functionality. The result is multifunction edge devices “capable of running multiple concurrent workloads with limited resources through features such as resource partitioning, isolation, and remote management,” Chang explains.
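As a toy illustration of the resource-partitioning idea (not DFI’s actual mechanism), a consolidation plan might carve a fixed pool of CPU cores into dedicated, disjoint slices per workload:

```python
def partition_cores(total_cores, workloads):
    """workloads: mapping of workload name -> number of cores requested.
    Returns a plan mapping each workload to a disjoint list of core IDs,
    or raises if the hardware is oversubscribed."""
    if sum(workloads.values()) > total_cores:
        raise ValueError("oversubscribed: requests exceed available cores")
    plan, next_core = {}, 0
    for name, need in workloads.items():
        plan[name] = list(range(next_core, next_core + need))
        next_core += need
    return plan

# Hypothetical consolidation of four functions onto one 8-core board.
plan = partition_cores(8, {"ai_inference": 4, "cms": 2,
                           "ev_control": 1, "payments": 1})
```

In practice a hypervisor enforces such a plan through CPU affinity and memory isolation; this snippet only computes the assignment.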

DFI recently showcased the possibilities of workload consolidation at embedded world 2024 with a demo that combined an EV charger with an informational kiosk (Video 1). The kiosk element used biometrics, speech recognition, and an integrated chatbot to recommend nearby shopping and dining opportunities that drivers could enjoy while their vehicle recharges. Once the driver walks away, the screen launches into a digital signage mode, displaying enticing advertising for nearby businesses.

Video 1. DFI showcases the possibilities of workload consolidation at embedded world 2024. (Source: insight.tech)

The DFI RPS630 Industrial motherboard leverages hardware virtualization support in 13th Gen Intel® Core™ processors to seamlessly consolidate AI functions alongside a content management system, EV charger controls, and payment processing. Meanwhile, an Intel® Arc™ GPU is used to provide power- and cost-efficient acceleration for AI components.

DFI also uses the Intel® OpenVINO™ toolkit for GPU optimization to reduce its AI memory footprint, allowing it to run complex large language models in less than 6 GB of memory. Moreover, by offloading complex AI tasks at the edge to the Intel Arc GPU, DFI was able to support multiple AI workloads while simultaneously reducing response time by 66%.

“These #EdgeAI use cases will all require workload consolidation platforms to enable real-time processing of customer #data and efficient operations” – Jarry Chang, @DFI_Embedded via @insightdottech

Charging into the Future of Intelligent Systems

DFI’s workload consolidation technology extends well beyond EV charging applications. The platform integrates its industrial-grade products with software and AI solutions from partners—targeting the global self-service industry for applications in retail, healthcare, transportation, smart factory, hospitality, and beyond.

By integrating a hypervisor, DFI consolidated all of one client’s workloads onto a single industrial PC. The system partitions resources so that multiple OS platforms can run concurrently.

“These edge AI use cases will all require workload consolidation platforms to enable real-time processing of customer data and efficient operations,” says Chang. “And as more industries and organizations adopt the technology, we expect to see another evolution.”

“The integration of edge AI with workload consolidation platforms is crucial in the deeper development of edge computing,” he continues. “There is no doubt in my mind that as hardware, software, and other technology around edge AI continue to develop, workload consolidation will become more mainstream—ultimately unlocking the next generation of intelligent edge computing applications.”

The Value of Collaboration at the Edge

Edge AI represents an immense opportunity for many industries. Chang explains that so far, we’ve really just started to scratch the surface. By pairing efficient acceleration with the right workload consolidation platform, we can start to explore what the technology can really achieve.

DFI’s partnership with Intel gives an insight into what’s necessary to support this continued advancement: collaboration. Modern edge AI applications demand a multidisciplinary approach that combines hardware, software, AI, and industry expertise.

“Embedded virtualization requires strong partnerships in hardware and software,” explains Chang. “Developing and deploying workload consolidation technology demands significant research and development resources. By partnering with other companies such as virtual integration software vendors, we can significantly reduce both development time and time-to-market.”

“And through strong partnerships such as what DFI has with Intel, we’re able to explore and develop new technologies that help define the future of edge computing,” he concludes. “We’re proud of what we’ve achieved together so far. And we’re enthusiastic at the prospect of further collaboration with Intel on workload consolidation, AI, and a great deal more.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Edge AI Detects Driver Distractions, Improves Safety

Every driver knows how hard it is to keep their eyes on the road when tired—and how easy it is to become distracted by a text message, radio dial, or cup of steaming hot coffee. For professionals, who spend far more hours behind the wheel than the rest of us, staying focused while driving is even more of a challenge.

But now, emerging Advanced Driver Assistance Systems (ADAS) based on edge AI and computer vision help solve the problem of fatigued and distracted driving in ways that traditional solutions cannot. That’s good news for everyone—and a relief for fleet management, logistics, and ride-hailing businesses.

“Distracted and fatigued driving are major concerns for enterprise safety officers,” says Srini Chilukuri, Founder and CEO of TensorGo Software Pvt Ltd., a platform-as-a-service provider focused on computer vision and deep learning solutions. “ADAS solutions use edge AI to improve on older safety systems, offering real-time monitoring, analysis, and alerts to help drivers to focus.”

And while deploying AI solutions at the edge is challenging, partnerships between computer vision specialists and hardware manufacturers help get these innovative systems into commercial vehicles and on the road.

Edge AI on a Raspberry Pi

Case in point is TensorGo’s work with Intel on its Advanced Driver Attention Metrics (ADAMS) solution. The ADAS system design is elegantly straightforward: It comprises a compact camera, an edge computing device, and computer vision algorithms that monitor for risky driving.

ADAMS runs three separate AI behavioral detection algorithms concurrently:

  • Drowsiness detection analyzes the driver’s face for signs of sleepiness, such as frequent yawning or closing eyes.
  • Head pose picks up on distracted driving by identifying instances of drivers looking away from the road, such as adjusting the navigation system or reaching for a dropped item.
  • Object detection spots when a person is glancing at a distraction such as a cell phone.

If any of the algorithms detect a problem, the system immediately alerts the driver via their mobile device and then sends a second alert to a company safety official as well.
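That escalation flow can be sketched as a small state machine. The recipients, grace period, and send() stub are assumptions for illustration; the article doesn’t detail the actual protocol:

```python
import time

class AlertEscalator:
    """First alert the driver; if the risk persists past a grace period,
    escalate to a company safety official."""
    def __init__(self, grace_seconds=10, clock=time.monotonic):
        self.grace = grace_seconds
        self.clock = clock          # injectable for testing
        self.first_alert_at = None
        self.log = []

    def send(self, recipient, message):
        self.log.append((recipient, message))  # stand-in for push/SMS

    def on_risk_detected(self, event):
        now = self.clock()
        if self.first_alert_at is None:
            self.first_alert_at = now
            self.send("driver", f"Attention: {event}")
        elif now - self.first_alert_at >= self.grace:
            self.send("safety_official", f"Driver unresponsive: {event}")

    def on_risk_cleared(self):
        self.first_alert_at = None  # driver responded; reset escalation

# Simulated timeline with a fake clock:
t = [0.0]
esc = AlertEscalator(grace_seconds=10, clock=lambda: t[0])
esc.on_risk_detected("drowsiness")   # alerts the driver
t[0] = 12.0
esc.on_risk_detected("drowsiness")   # still risky: escalates
```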

Although the basic system architecture was established in the product development phase, bringing a working version of ADAMS to market presented challenges. The proof-of-concept ran on a bulky edge device that ultimately proved too inefficient and inflexible to turn into a viable product. TensorGo’s engineers wanted to migrate their system to a compact and energy-efficient 32-bit Raspberry Pi edge device and a Raspberry Pi camera. But it wasn’t clear how it would be possible to run multiple AI algorithms on a smaller edge device without overtaxing the processor.

Working with Intel, the TensorGo team overcame their engineering challenges. They used the Intel® OpenVINO™ toolkit to optimize and accelerate the AI algorithms to run efficiently on the compact Raspberry Pi device. Intel architects also suggested a strategy of processing fewer frames of camera video data than in the original prototype. This approach provided more than enough data for high-precision computer vision analysis—while also reducing the burden on the processor, thus improving ADAMS’ overall performance and stability.
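The frame-reduction strategy is easy to picture: analyze every Nth frame rather than all of them. The numbers below are illustrative; the article doesn’t state the actual sampling rate:

```python
def sampled_frames(total_frames, stride):
    """Indices of the frames actually sent to the AI pipeline when
    sampling every `stride`-th frame."""
    return list(range(0, total_frames, stride))

def inference_load_reduction(stride):
    """Fraction of per-frame inference work saved by the sampling."""
    return 1.0 - 1.0 / stride

# One second of 30 fps video, analyzing every 5th frame:
work = sampled_frames(30, 5)          # 6 inferences instead of 30
saving = inference_load_reduction(5)  # 80% less inference work
```

Because drowsiness and distraction unfold over seconds, a handful of inferences per second still provides ample signal for the detection algorithms.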

Case Study Shows Improved Safety—and Cost Savings

TensorGo’s deployment with a large trucking and delivery company with operations in the Middle East demonstrates the capabilities of ADAS systems in real-world scenarios.

The company was facing an increasing number of accidents across their fleet of more than 500 trucks—with driver distraction and fatigue being identified as the main cause. Management could not accept the safety risk to drivers and the general public. They were also concerned about operational efficiency issues due to vehicle downtime and liability costs. Despite implementing driver training programs, the problem persisted.

Working with TensorGo, the company deployed an ADAMS system in every vehicle in their fleet. Within six months, the results were conclusive—the edge AI approach was a resounding success. The company saw a 32% reduction in distraction-related incidents and a 27% decrease in fatigue-related accidents. The driver attention system had also helped improve on-time delivery rates by 18%, leading to an estimated cost savings of more than $1.5 million.

“ADAS systems like ADAMS are a game changer for enterprise safety officials,” says Chilukuri. “They improve safety outcomes and positively impact the bottom line, solving key safety challenges and helping to overcome adoption barriers.”

“By combining powerful #safety and cost savings benefits, #ADAS solutions are an attractive option for #FleetManagement companies” – TensorGo via @insightdottech

The Future of Transportation Safety and Beyond

By combining powerful safety and cost savings benefits, ADAS solutions are an attractive option for fleet management companies, leading to an increased uptake of these systems over the coming years.

TensorGo is preparing for this future with plans to introduce more features to its existing solution. The company is looking at ways to add a GSM module to ADAMS so that alerts can be emitted directly from the edge device rather than the driver’s phone. The engineering team is also exploring how to incorporate AI collision detection models into their solution to alert drivers to potential road hazards.

Beyond ADAS systems, the solution’s underlying technology can support other use cases. The core software and computer vision technology used in ADAMS can be adapted to applications including workplace safety, assisted living monitoring, and industrial operations.

“AI and computer vision at the edge will play a transformative role in transportation, logistics, and other sectors over the coming years,” says Chilukuri. “Real-time monitoring and analysis will improve safety and efficiency across the board, and we aim to be a key player in that transformation.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Digitizing Physical Retail with Autonomous Satellite Stores

Physical retailers have long lagged behind their digital counterparts in areas like customer experience and cost efficiency. Where e-commerce can offer personalized experiences, frictionless purchasing, and automated inventory management, physical stores often struggle with inefficient manual processes as well as disconnected customer experiences.

But thanks to today’s advanced AI and computer vision capabilities, physical retail is now becoming more digital. For example, autonomous retail store solution provider Cloudpick bridges the gap between physical and digital retail with its autonomous satellite stores, offering a cashier-less micro-retail experience.

“Retailers are eager to bring their online business intelligence into the offline world, but traditional store formats with high rental costs and inflexible layouts make that very difficult,” explains Mark Perry, Cloudpick’s Head of International Business Development. “Our satellite stores let them expand sales channels at minimal risk.”

Cloudpick bridges the gap between physical and #digital #retail with its #autonomous satellite stores, offering a cashier-less micro-retail experience. @CloudpickTech via @insightdottech

Expanding Retail’s Reach with Satellite Stores

Unlike traditional brick-and-mortar stores or supermarkets, Cloudpick’s satellite stores focus on the “micro market,” meaning they are small, flexible, and movable. These AI-powered micro-markets allow brands to tap into the burgeoning “micro-retail,” or “pop-up,” trend in a cost-effective manner.

These tiny stores are gaining popularity because they can be deployed in unconventional locations like corporate office lobbies, hotel entrances, and university campuses. This introduces new potential revenue streams for retailers in high-footfall areas while providing convenience to customers, according to Perry.

But finding a good location for these small stores can be a risky process. Despite their diminutive dimensions, setting up a traditional pop-up store is an expensive, time-consuming process—one that often suffers from unexpected delays and costs. Worse, if sales are disappointing, relocating the store can be difficult if not impossible. Traditional retail setups require lengthy leases and substantial upfront capital, making pivoting practically impossible.

The Plug-and-Play Autonomous Store

That’s where Cloudpick’s off-the-shelf model comes in. Cloudpick provisions a complete, pre-integrated hardware and software package that includes everything from the shelving infrastructure and refrigeration units to the cameras and edge AI systems. It operates as a plug-and-play solution that retailers can customize with their branding and product assortment. Everything is standardized and pre-configured, keeping customers’ total costs predictable.

Customers simply select their desired satellite store dimensions and Cloudpick handles the rest through an on-site installation team. Thanks to modular construction, a satellite store can be set up in less than eight hours and redeployed in a new location within half a day, according to Perry.

Moreover, the ability to rapidly disassemble and redeploy satellite stores reduces the risk of selecting a poor location. If a particular spot underperforms, Cloudpick can move the satellite store to another area, almost like relocating a food truck.

This unique flexibility allows retailers to experiment with locations in a low-risk manner while capitalizing on emerging customer micro-markets and high-traffic zones.

This format also offers a strategic benefit to traditional retailers: not only convenience store operators but also large franchises like Walmart or Les Mousquetaires that want to penetrate new markets and create brand awareness in urban areas.

The Cloudpick solution’s pre-configured format is built on standardization, which allows both new market entrants and existing retailers to capture previously untouched locations. “An example of an existing convenience chain playing this game is Zabka in Poland, which has rapidly launched 60 Nano stores over the course of two years,” says Perry. The retailer aims to roll out its stores rapidly in high-traffic urban locations. This density of stores within a small radius also makes supply chain management more effective.

AI Delivers an Enjoyable, Cost-Effective Consumer Experience

Once deployed, these satellite, micro-retail stores provide an AI-powered user experience. Customers can enter the store, scan a QR code, or swipe a card. While they shop, Cloudpick keeps track of the items they’ve picked up, using a combination of cameras and weight sensors in the shelves.

Perry explains that this multimodal sensing approach increases accuracy and can determine whether a customer picked up three candy bars or just one. Additionally, it gives retailers virtually unlimited flexibility in the stock they can carry, allowing shoppers to enjoy a broad selection of items that can be easily updated to keep up with their shopping preferences.
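A simplified look at how such fusion could work (the tolerance and trust rules here are assumptions, not Cloudpick’s actual logic): trust the shelf’s weight delta when it cleanly matches a whole number of units, and fall back to the camera estimate otherwise:

```python
def infer_quantity(vision_count, weight_delta_g, unit_weight_g, tolerance=0.15):
    """Reconcile a camera-based item count with a shelf weight change."""
    if unit_weight_g <= 0:
        raise ValueError("unit weight must be positive")
    units = weight_delta_g / unit_weight_g
    nearest = round(units)
    # A clean multiple of the unit weight is strong evidence; trust it.
    if nearest > 0 and abs(units - nearest) <= tolerance:
        return nearest
    return vision_count  # ambiguous weight: defer to the camera

# A 135 g drop on a shelf of 45 g candy bars implies three bars,
# even if the camera only registered one grab.
taken = infer_quantity(vision_count=1, weight_delta_g=135, unit_weight_g=45)
```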

To check out, the customer simply walks out of the store—no cashier required, and no need to scan items. This is possible through Cloudpick’s AI system, which processes unified data to map product movements and ownership to specific customers, automatically checking out those individuals through an app as they exit.

With built-in mechanisms for coping with occlusions, crowd detection, and multi-camera syncing, Perry says Cloudpick’s satellite stores maintain a 98.5% accuracy rate for checkout recognition and billing despite the complexity of the autonomous shopping experience.

Maximizing ROI for a Satellite Store

By providing a cashier-less experience, retailers need staff only to visit the store and replenish stock. These on-site visits can be optimized by a smart inventory management system that helps minimize product waste, overstock, and out-of-stock situations.
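Such a system typically reduces to reorder-point logic. This simplified sketch (SKUs, sales rates, and thresholds all invented, not Cloudpick’s algorithm) flags items whose projected stock will fall below a safety level before the next scheduled visit:

```python
def restock_list(inventory, days_until_visit, safety_stock=5):
    """inventory: SKU -> (units on hand, average units sold per day).
    Returns the SKUs to replenish on the next visit."""
    return [
        sku
        for sku, (on_hand, daily_rate) in inventory.items()
        if on_hand - daily_rate * days_until_visit < safety_stock
    ]

# Two days until the next resupply run:
todo = restock_list({"cola": (30, 12), "chips": (10, 4)}, days_until_visit=2)
# cola projects to 6 units (above the safety stock); chips to 2 (below).
```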

The computer vision and AI back end also analyze shopping patterns, demographic details like age and gender, and customer traffic flows. This provides retailers with data insights similar to online retail’s user analytics and remarketing capabilities—but in physical locations.

The platform is designed to bring the data-driven profiling and marketing precision of e-commerce into brick-and-mortar retail. “Retailers can integrate our APIs to optimize product assortments, layouts, pricing strategies, and promotions based on real-world shopper behaviors,” says Perry.

All of this is made possible by Intel technology. Perry explains that high-performance, power-efficient Intel® processors are key to running Cloudpick’s computer vision models for object recognition, customer tracking, and checkout automation. What’s more, tools like the Intel® Distribution of OpenVINO™ toolkit enable Cloudpick to constantly evolve its offerings.

The Future of AI-powered Satellite Stores

Between autonomous operations, data-driven inventory optimization, and minimal real estate footprint, Cloudpick’s satellite stores provide retailers with an affordable, future-proof roadmap to micro-retail. Future integrations could include interactive digital signage for personalized promotions and immersive product storytelling, Perry envisions.

Satellite stores are just the beginning of AI’s transformation of how we buy in the physical world of retail.

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Edge AI: Addressing Industrial Cybersecurity Challenges

Cyber threats in the industrial sector are a growing problem—and there are no quick fixes.

Several factors contribute to this challenge. The rise of the Industrial Internet of Things (IIoT) has connected all kinds of manufacturing equipment, control systems, and sensors to the network for the first time—greatly expanding the attack surface available to malicious actors. In addition, operational technology (OT) assets often rely on proprietary data transfer protocols and unpatched legacy operating systems, making them harder to secure than standard IT systems. And, like businesses in almost every other sector, manufacturers face a shortage of skilled security personnel, making it difficult for their IT and cybersecurity teams to cope with the increasing volume of threats.

In this difficult landscape, manufacturers require innovative solutions to address their ongoing OT security issues—and the application of artificial intelligence (AI) shows promise. But AI-based solutions can be challenging in themselves to implement in industrial settings.

“To apply AI effectively to industrial cybersecurity, you need high-performance edge computing capabilities to manage the intensive inferencing workloads,” says Tiana Shao, Product Marketing at AEWIN Technologies, a networking and edge computing provider with a wide range of solutions for the industrial sector. “Industrial environments also have unusually demanding requirements for scalability, flexibility, and ruggedness.”

The good news for the sector is that companies like AEWIN have now begun to offer edge hardware appliances that make it far easier for system integrators (SIs) and manufacturers to deploy AI-enabled cybersecurity solutions in factories. Based on next-generation processors and advanced software technologies, these solutions help security teams wield AI more effectively in the fight against cyber threat actors.

Beyond Automation: AI in Industrial Cybersecurity

While AI is not a “magic bullet” for industrial cybersecurity, it does introduce a new element to cybersecurity solutions: the ability to learn.

“AI in cybersecurity goes beyond mere security automation, because over time it can develop an understanding of what constitutes ‘normal’ user behavior and network activity,” says Shao. “AI can be used to analyze massive data sets in order to identify trends, flag risks, and detect anomalous events more effectively.”

That unique capability offers security teams some significant advantages. It gives them a better chance of detecting certain kinds of malicious activity that a legacy approach might miss. Establishing a baseline of “normal” activity also makes it possible to reduce the number of time-consuming false positive alerts.
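As a toy illustration of this baselining idea (not any vendor’s actual algorithm), a detector can learn the mean and spread of a metric such as per-minute packet counts from historical data, then flag values that deviate sharply from that learned normal:

```python
import statistics

def build_baseline(samples):
    """Learn 'normal' from historical observations."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Baseline from synthetic per-minute packet counts
normal_traffic = [100, 98, 103, 101, 99, 102, 100, 97]
baseline = build_baseline(normal_traffic)

print(is_anomalous(101, baseline))   # False: within normal variation
print(is_anomalous(450, baseline))   # True: worth an analyst's attention
```

Because the threshold is learned rather than hand-written, routine fluctuations stay below it, which is how baselining cuts down on false positives.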

“To apply #AI effectively to industrial #cybersecurity, you need high-performance #edge computing capabilities to manage the intensive inferencing workloads.” – Tiana Shao, @IPC_aewin via @insightdottech

Perhaps most important, by searching for threats through deviations from expected behavior—rather than relying solely on rule-based approaches that attempt to match system activity or files to known threats—AI-assisted security tools can help security teams detect new and emerging cyber threats with greater accuracy.

Industrial Cybersecurity: It Takes a Team

AEWIN’s experience with an OT system integrator in the United States is a good demonstration of this.

The SI wanted to offer manufacturers a better way to detect sophisticated cybercriminal activity and speed response times, but this was difficult to accomplish using traditional methodologies. Newer threats, especially those that work by abusing or mimicking legitimate system operations, were simply getting lost in the “noise” of routine system activity, and thus overlooked.

Working with AEWIN, the SI developed a security solution that leveraged AI to analyze system behavior and learn what constituted “normal” so that deviations could be spotted more easily. The SI also used AI to help orchestrate the response across multiple controls and integrate new threat intelligence dynamically to improve defenses.

The result was an enhanced cybersecurity solution that could learn from historical data, identify patterns of activity, and detect cyberattacks that were being missed by traditional tools—while also responding to threats more quickly and becoming even more effective over time.

AEWIN’s experience highlights the benefits of partnerships between cybersecurity specialists and hardware providers—a phenomenon mirrored by AEWIN’s own experience with Intel as a technology partner.

In developing its SCB-1942 edge hardware appliance, the company worked with Intel to develop a powerful, flexible computing platform capable of handling the rigorous demands of AI in industrial cybersecurity. The device was constructed atop Intel® Xeon® Scalable processors, which offer up to 64 CPU cores and increased PCIe lanes for greater expandability.

The underlying hardware is further augmented by Intel’s range of AI accelerators. This includes Intel® Advanced Matrix Extensions (Intel® AMX), which improve deep-learning training and inferencing, and Intel® Advanced Vector Extensions 512 (Intel® AVX-512), a set of instructions that helps boost the performance of the machine learning workloads used for intelligent cyber threat detection.

“Our relationship with Intel gave us extensive technical support and early access to advanced processors, helping us bring a scalable, high-performance edge computing solution to market faster,” says Shao. “Intel processors deliver remarkable performance and can meet the demanding workloads required to use AI to analyze network traffic in real time, perform deep packet inspection, and apply security policies automatically.”

Toward Secure Digital Transformation in Manufacturing

As more and more manufacturers embrace digital transformation, cyber threats in the industrial sector are expected to increase—and cybercriminals will develop new attacks as well. Luckily, AI can help skilled security practitioners respond to evolving threats more quickly and effectively than ever before—while purpose-built hardware appliances can help security teams deploy their AI tools in manufacturing settings more easily.

“We believe that the use of AI in industrial cybersecurity is only going to increase in the coming years,” says Shao. “Our mission is to support our customers by providing reliable, scalable, cutting-edge systems for this fast-growing market.”


This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Secure Access Service Edge Protects the Network Edge

Enterprises can’t protect their assets if they don’t know what or where they are. This problem is becoming even more pressing with the growing number of IoT devices. When devices connect to an enterprise’s network, it is hard to tell the good ones from the bad and quickly sort out authorized users from intruders.

Fortunately, companies increasingly understand the importance of the network edge. Realizing that the point of entry can be varied—from an industrial IoT-based sensor to an employee’s mobile phone—they sift through points of contact to classify and fingerprint all devices trying to gain access.

Devices need to be identified and classified by type, by risk, and as sanctioned versus unsanctioned. “Sanctioned devices must pass through risk and security posture assessments,” says Dogu Narin, Vice President of Versa, a leading Secure Access Service Edge (SASE) provider. “Such a slice-and-dice methodology of granting access simplifies security while also keeping it agile.”

Unified Platform for Secure Access Service Edge (SASE)

“The SASE framework for data security accounts for the way we work today, especially with the growth of SaaS programs resulting in the ‘cloudification’ of everything,” Narin says. “Whether you’re working from home, the office, or traveling, you should be able to use the networking and security functions in a consistent way and as a service, which is the primary driver for SASE.”

Too often, checking for security robustness involves a piecemeal approach with separate operating systems for SD-WAN products, firewalls, switches, routers, and more. In many cases, these functionalities are separate and work in isolation. “It’s like needing to speak multiple languages. If one moment you need to speak English, another moment German, French, Spanish…it can get pretty complicated,” Narin says.

Worse, a lack of industry standards for device classification makes the problem even more challenging. A firewall device might label something as a social media application, whereas an SD-WAN device might find it to be something else. Such complications mean security protocols must be repeated over and over again, leading to bottlenecks in network traffic.

The Versa Universal SASE Platform stands on the SASE framework and consolidates multiple security and networking capabilities like fingerprinting, classification, risk assessment, and security posture assessment into a single solution.

Because the Versa SASE solution natively supports all protocols, it provides key advantages, including single-pass packet processing for decreased latency and complexity. “With the Versa OS, all the protocols and device policies are baked in, and popular IoT protocols are recognized,” Narin says.

The network administrator can focus on setting and applying policies to devices instead of having to start from scratch in identifying every entry point into the network. And administrators can carry over the Versa software to different environments. “You can deploy across the network and use only one language, one classification method, one policy engine, and one management console to achieve what you want to achieve,” Narin says.
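The “one classification method, one policy engine” idea can be sketched in a few lines. The fingerprints and policy entries below are hypothetical; a production SASE platform classifies far more richly, but the structure—classify once, then consult a single shared policy table—is the point:

```python
# Illustrative single-pass model: each device is fingerprinted once,
# and one shared policy table drives every enforcement point, instead
# of each appliance re-classifying traffic with its own taxonomy.

FINGERPRINTS = {               # hypothetical signature -> device type
    "mqtt": "iot_sensor",
    "rdp":  "workstation",
}

POLICY = {                     # one policy engine for all device types
    ("iot_sensor", "sanctioned"):   "allow_restricted",
    ("iot_sensor", "unsanctioned"): "quarantine",
    ("workstation", "sanctioned"):  "allow_full",
}

def admit(protocol, sanctioned):
    device_type = FINGERPRINTS.get(protocol, "unknown")
    status = "sanctioned" if sanctioned else "unsanctioned"
    return POLICY.get((device_type, status), "deny")

print(admit("mqtt", True))    # allow_restricted
print(admit("mqtt", False))   # quarantine
print(admit("telnet", True))  # deny: unknown fingerprint
```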

AI in the SASE Framework

The glut of data flowing into enterprise systems makes infosec especially suited to AI. Versa uses AI to isolate sophisticated zero-day malware attacks, where threat actors take advantage of vulnerabilities before developers have had a chance to identify and address them. Its malware analysis and detection mechanisms scan for data leakage to ensure that sensitive data does not get routed to the cloud.

AI is also useful for User and Entity Behavior Analytics (UEBA), which develops a baseline for an individual’s or application’s data usage to find behavioral anomalies. When IoT devices come into play, threat actors can masquerade as legitimate devices by taking on different identities, or unauthorized IoT sensors can start talking to one another. “AI helps us find these base patterns in mountains of data,” Narin says.
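The scenario of unauthorized sensors talking to one another suggests one simple UEBA-style check, shown here as a purely illustrative sketch with invented device names: baseline the communication pairs seen during normal operation, then flag conversations that have never occurred before.

```python
def learn_pairs(flows):
    """Baseline: the set of (source, destination) pairs seen in training."""
    return {(src, dst) for src, dst in flows}

def flag_new_pairs(flows, baseline):
    """UEBA-style check: report device conversations never seen before."""
    return [(src, dst) for src, dst in flows if (src, dst) not in baseline]

# Historical flows establish what "normal" communication looks like
history = [("sensor-a", "gateway"), ("sensor-b", "gateway")]
baseline = learn_pairs(history)

# Today a sensor starts talking directly to a peer for the first time
today = [("sensor-a", "gateway"), ("sensor-a", "sensor-b")]
print(flag_new_pairs(today, baseline))  # [('sensor-a', 'sensor-b')]
```

Real UEBA systems model far more dimensions (volume, timing, payload type), but the principle is the same: deviations from a learned baseline surface the masquerading devices.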

“You can deploy across the #network and use only one language, one classification method, one policy engine, and one #management console to achieve what you want to achieve” — Dogu Narin, @versanetworks via @insightdottech

Underlying Tech and Partnership

Versa uses processors and hardware offload engines from leading chip vendors. Its software is based on Intel’s open-source DPDK (Data Plane Development Kit) to optimize data packet processing.

“DPDK technology uses different low-level and pattern-matching libraries and other software functions to accelerate the processing of security and packet forwarding, extracting maximum processing power and achieving the lowest latency on a given hardware platform, like a branch appliance or data center device. It enables us to onboard and offer new appliances quickly, without per-appliance custom software development,” Narin says. “And we also use Intel’s high-level software libraries for a variety of reasons, including regex and other pattern-matching purposes. It’s a broad scope of partnership and leverage between the two companies.”
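DPDK itself is a C framework tied to specific network hardware, but the pattern-matching idea Narin describes, scanning each payload once against many signatures in the style of libraries such as Hyperscan, can be approximated in a few lines. The signatures below are invented examples:

```python
import re

# Hypothetical threat signatures; real systems compile thousands of
# these into a single automaton so each payload is scanned only once.
SIGNATURES = {
    "sql_injection": r"union\s+select",
    "path_traversal": r"\.\./\.\./",
}

# Combine signatures into one alternation with named groups, so a
# single pass over the payload checks every rule at once.
SCANNER = re.compile(
    "|".join(f"(?P<{name}>{pat})" for name, pat in SIGNATURES.items()),
    re.IGNORECASE,
)

def inspect(payload):
    """Return the name of the first matching signature, or None."""
    match = SCANNER.search(payload)
    return match.lastgroup if match else None

print(inspect("GET /?q=1 UNION SELECT password"))  # sql_injection
print(inspect("GET /../../etc/passwd"))            # path_traversal
print(inspect("GET /index.html"))                  # None
```

Compiling the rules once and scanning once per payload is what keeps deep packet inspection from becoming the latency bottleneck Narin alludes to.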

Versa leverages the “force multiplier” effect that service providers deliver to scale their base of customers. A good partner network with companies that understand the sophisticated technologies that Versa delivers has been a key go-to-market strategy.

The Evolution of Data Security

As adoption of the cloud increases, and with the growing use of proprietary generative-AI models, Narin expects data sovereignty to play a greater role in data security.

“You’re going to see wider use of AI-based solutions, whether it’s in the detection of problems, analyzing large data, or how we apply tools and systems,” Narin says.

Operating and deploying networks are becoming more complex, and hackers also use AI to increase the sophistication of their attacks. In turn, the infosec community will respond by developing more complex mechanisms to detect and eliminate AI-originated attacks.

The future is about improving the customer experience, which demands a solution that interconnects applications and data through a “traffic engineered cloud fabric” for seamless quality without congestion. Such a fabric runs across the globe and connects SASE gateways to sites and users and cloud-based applications. It’s the best of both worlds: SASE-based security and a stellar user experience.


This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Radiology Transforms Cancer Diagnostics

Radiologists spend an inordinate amount of time looking at scans to diagnose conditions, but pioneering solutions are ushering in a new frontier in cancer diagnostic imaging. AI in radiology is emerging at a particularly critical time as health systems face a shortage of radiologists, leading to higher workloads that increase the risk of errors.

The growing volume of scans that doctors must interpret only compounds these challenges. One study demonstrates the risks this shortfall poses as scans grow in volume: When doctors have 50% less time to read radiology exams, the error rate goes up 16%.

Siemens Healthineers, a leading innovator in the healthcare tech industry, developed its AI-Rad Companion platform to increase diagnostic accuracy and reduce operational burdens for radiologists. The solution demonstrates the impact AI can have across the healthcare continuum, and how this transformative technology can serve as a second set of eyes and ears for doctors to support better healthcare outcomes.

The company uses AI-powered, cloud-based augmented workflows to optimize repetitive tasks for radiologists. AI-Rad Companion leverages deep-learning algorithms to deliver insights that support clinical decision-making—acting as an assistant to help radiologists make a more accurate diagnosis.

Harnessing the Full Power of AI in Radiology

Though it will take time to address workforce shortages in radiology, AI can help close this gap, says Ivo Driesser, global marketing manager for artificial intelligence at Siemens Healthineers.

“That’s why we said at Siemens Healthineers, ‘Why don’t we start using AI to take away the burden for radiologists of repetitive tasks like measuring lesions, the time-consuming process of looking for lesions in the lung for cancer, or measuring the amount of calcification in the heart?’ All these manual steps that doctors are doing can more easily be done by AI,” Driesser says.

AI-Rad Companion is designed to balance #automation and accuracy for doctors, while offering powerful decision support. @SiemensHealth via @insightdottech

AI-Rad Companion is designed to balance automation and accuracy for doctors, while offering powerful decision support. The solution is unobtrusive: It seamlessly integrates into radiologists’ standard workflow, connecting to a hospital’s existing system virtually via the cloud or physically using an edge device. The solution—powered by Intel® Core processors and the Intel® Distribution of OpenVINO™ toolkit—deploys deep-learning models that improve image recognition and processes anonymized DICOM data from CT devices. It then uses AI-driven algorithms to surface clinical insights for radiologists. AI-Rad Companion highlights lesions on medical images, streamlines the measurement of lesions to save doctors time, and in some cases helps radiologists uncover secondary conditions or pathologies the naked eye may have missed.

“We cannot say, ‘This patient has lung cancer and needs that treatment.’ It’s always a doctor who needs to do this, but we can guide the eyes of the radiologist,” Driesser says. 

Modernizing Diagnostic Imaging Delivers Better Outcomes

AI-Rad Companion has five powerful extensions: interpreting images from chest CTs, chest X-rays, and brain scans; aiding prostate assessments; and contouring organs for radiation therapy planning.

With the heart and large vessels, for example, AI-Rad Companion Chest CT can help doctors measure the diameter of the aorta. Using clinical guidelines, the tool then can alert doctors if there’s an abnormality on the scan that warrants further investigation. For chest CTs, AI-Rad Companion examines lung lesions and delivers AI-enhanced results next to standard CT data to help doctors diagnose conditions such as emphysema and lung cancer.

Some healthcare providers use AI-Rad Companion to increase their efficiency and diagnostic accuracy. Diagnostikum Linz, a radiology and imaging clinic in Austria, has leveraged the solution for chest CTs. AI-Rad Companion Chest CT is embedded within the image value chain. It applies deep-learning algorithms to DICOM data to calculate results that are then pushed to the radiologist’s reading environment for interpretation. The solution also has specific deep-learning algorithms that healthcare institutions can use for aorta assessments, so patients who need to undergo both heart and chest examinations can do so at one time.

AI-Rad Companion offers powerful 3D images and visualizations to advance the diagnostic process and reduce manual work for radiologists. With the solution’s AI-enhanced workflows, radiologists at Diagnostikum Linz have increased their efficiency by 50%, since it now takes fewer mouse clicks to access and interpret scans. They no longer have to manually measure lesions. The AI-enabled method used to calculate the diameter of lesions is the same every time, which not only saves time but also facilitates standardization that drives greater accuracy.
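Siemens Healthineers has not disclosed its measurement algorithm, but the value of an always-identical automated measurement can be illustrated with a minimal sketch: given the boundary points of a segmented lesion, one deterministic definition of its diameter is the maximum pairwise distance between boundary points, computed the same way every time (all values below are synthetic):

```python
from itertools import combinations
from math import dist

def lesion_diameter(boundary_points, mm_per_pixel=1.0):
    """Longest distance between any two boundary points of a
    segmented lesion: a deterministic, repeatable measurement,
    unlike manual caliper placement."""
    return max(dist(a, b) for a, b in combinations(boundary_points, 2)) * mm_per_pixel

# Boundary pixels of a hypothetical segmented lesion, as (x, y)
points = [(0, 0), (3, 4), (1, 1), (2, 0)]
print(lesion_diameter(points, mm_per_pixel=0.5))  # 2.5
```

Because the computation has no human variability, serial scans of the same patient can be compared with confidence, which is the standardization benefit described above.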

The Medical University of South Carolina (MUSC) has also used AI-Rad Companion Chest CT to reduce interpretation times for scans by 22%. MUSC has increased provider efficiency thanks to the solution’s AI-enhanced, post-processing, automated quantification of structures in the chest, and automated segmentation of the heart and coronary arteries. Having AI at the fingertips of radiologists allows for faster outcomes.

The Future of AI in Radiology

Radiologists are dedicated to giving patients the answers they need. Their work informs subsequent treatment, potentially enabling health systems to save more lives and deliver better outcomes. They currently grapple with manual processes that slow down interpretation times, but AI can help them optimize their workflow without compromising accuracy.

AI-Rad Companion demonstrates how AI can be a powerful enabler for healthcare providers, serving as an attuned clinical assistant rather than the final decision-maker in the diagnostic imaging process. In this way, AI-Rad Companion allows radiologists to focus less on tedious tasks, and instead use their deep clinical knowledge to drive impact where it matters most—delivering the best possible patient care.


This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Intel Insider: AI, New Products, and the Industry Ecosystem

Every industry is embracing digital transformation to boost efficiency and productivity and to enhance customer experiences. This was evident at embedded world 2024, where Dan Rodriguez, Corporate Vice President & GM of Network and Edge Solutions Group at Intel, talked about how AI is emerging as a crucial element in this transformation and the vital role of Intel’s global ecosystem. Below is an excerpt from the discussion with Dan.

Let’s start with the momentum you’re seeing with AI and the opportunities it brings to enterprises.

As we engage with our customers to solve their business problems, AI is in nearly every conversation. And for good reason. AI has emerged as a top workload at the edge and is one of the industry’s biggest opportunities. The center of gravity for AI is rapidly shifting from training in large data centers and clouds to inference in the real world—in retail stores, manufacturing environments, and healthcare settings, for example.

We’re also starting to see momentum for Gen AI-based hybrid deployments for things like fast-food ordering kiosks, employee training, and medical use cases.

What are the biggest challenges our industry faces when it comes to developing and deploying AI-powered solutions?

First is the unique nature of AI at the edge. It needs to run with existing software on heterogeneous hardware, often in low-power, price-sensitive, and space-constrained environments.

Development complexity is also a challenge. There is a massive amount of data, generated both by machines and people, and a multitude of nodes, operating systems, operating environments, and protocols.

In addition, there is operational complexity in deployments that needs to be considered. A “rip and replace” approach is not always an option since there is legacy equipment and infrastructure. At the edge, AI is rarely a greenfield deployment, but more often it’s about adding AI to existing workloads and use cases within an overall system. It’s typically not feasible to add racks of GPUs in a factory or retail store. Instead, AI needs to be embedded in the workflow, not as an adjacent capability.

To address these challenges, the industry needs the collective experience and technical know-how of our entire ecosystem.

How do Intel technology and products help your partners and end customers overcome these challenges?

With Intel’s edge footprint, our focus is to drive AI everywhere—any application and any use case—in a way that lets our customers maintain some of the same platforms and software they already use, for a much better TCO.

We remain committed to delivering a strong portfolio of technologies that partners can build on, and reference solutions that can get them to market faster. Intel technologies are broadly deployed across the network and edge today and we continue to expand our portfolio.

When you step back and think about applications at the edge, beyond just AI, they require other processing types beyond inferencing—including media functionality. So we have designed our products to excel in a balanced AI pipeline, which results in optimized end-to-end use case performance.

At embedded world 2024 we launched many new products that are well positioned to handle complex requirements at the edge, where customers are doing a diverse set of workloads—often in the same pipeline—to get the business outcomes they desire. This includes Intel’s new series of edge-optimized Intel® Core Ultra, Intel® Core, Intel Atom® processors, and discrete Intel® Arc GPUs for the edge. These products are designed to power AI-enabled edge devices in a variety of industries like retail, healthcare, industrial, automotive, defense, and aerospace.

Beyond delivering great technology, it’s also really important that we offer ways to help all of us innovate and develop faster. We also announced the Intel® Tiber Edge Platform this year, which is a software platform that makes it easy for enterprises to develop, deploy, run, and manage edge infrastructure and the applications that run on it.

Now pulling this together into industry-specific solutions is really key for our customers. We are building on top of the Intel Tiber Edge Platform and launching AI suites for various industries to help our customers more quickly design, develop, and scale AI capabilities into their solutions at the edge. The AI suites include partner hardware and reference implementations, software toolkits, and application frameworks, all in support of our market-ready, open ecosystem. We’ll offer the AI suites initially for manufacturing, media and entertainment, life sciences, and visual analytics for government, focused on safety and security use cases, with more to come. In addition to these AI suites, we also work with our ecosystem to offer Intel® IoT Market Ready Solutions to drive solution scale. We’ve introduced over 500 solutions over the past five years, many of which incorporate AI capabilities.

Why is the Intel partner ecosystem so essential to advancing AI solutions, and how do you support this ecosystem?

There is so much that our ecosystem can do together on the edge and to drive AI everywhere. This builds on a journey that we have been on together for the past couple of decades in the embedded space, and then with the emergence of IoT and digital transformation across many industries. This is also true in telecommunications where we collectively led the transition to NFV (network functions virtualization) and are supporting 5G rollouts. Our broad, open ecosystem has laid the foundation for the transition to software-defined infrastructure across the network and edge, which has brought more agility and efficiency to our customers, and now we are accelerating AI innovations.

Through the Intel® Partner Alliance, we have been dedicated to supporting partners through advanced training tools, networking opportunities, and marketing resources. We recently announced exciting updates to help both our partners and end users more quickly develop and implement AI solutions from edge to cloud. This includes the expansion of the Intel Partner Alliance AI Accelerator initiative, increasing Intel’s investment to support AI partners that are building Generative AI as well as AI PC applications. This expansion adds over a thousand edge AI applications developed by our ISV partners to the already existing AI Accelerator initiative.

(Read more about the Intel Partner Alliance Accelerators in the LinkedIn blog written by Trevor Vickers, VP, GM Global Partners & Support at Intel.)

I’m also really excited about the Industry Solution Builders initiative, a new ecosystem initiative as part of Intel Partner Alliance, with the goal of innovating and transforming industries. This initiative brings a new focus to orchestrating industry transformation at scale with a global ecosystem. Through Industry Solution Builders, Intel will offer activation zones with benefits like technical training and participation in industry standards bodies and coalitions to support end users as they transform their infrastructure. We have these zones today for the Manufacturing, Telecommunications, and Government sectors (the Government zone includes smart city use cases), and soon we will be offering one for Energy, with more to come in 2024.

With all of this, we’re excited about the opportunities in front of us, and how we can collaborate and innovate with the ecosystem to drive transformation and enable our customers to achieve better business outcomes.


Edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Analytics Increase the Value of Video Cameras

It’s not your imagination: From airports and city streets to shopping centers, arenas, and museums, CCTV cameras are ubiquitous. To maintain safety and security, municipalities and businesses are investing more in video equipment every year.

But some organizations are starting to question how much those extra dollars are doing for them. Certainly, the cameras are invaluable for capturing information—whether it’s a highway traffic jam or a jar of spaghetti sauce spilling across a grocery floor, their all-seeing eyes never miss a problem.

That’s not always true for staff monitoring the video feeds, though. Studies have shown their detection ability decreases by 15% after just 30 minutes. After that, reaction time slows and errors increase. Adding more cameras compounds the problem.

Today’s AI analytics software can close these gaps. AI-powered video instantly makes sense of incoming video feeds, sending alerts in near real time to stop problems before they get out of hand.

And analytics doesn’t just improve security; it adds business value. AI algorithms can observe customer behavior, revealing which promotions work and which don’t, which experiences capture people’s attention the best, and where bottlenecks cause frustration. These insights can help marketers, retailers, and facilities managers improve their service and draw more customers, ensuring that their investments in video technology are worthwhile.

Today’s #AI analytics #software can close these gaps. AI-powered #video instantly makes sense of incoming video feeds, sending alerts in near real time to stop problems before they get out of hand. @AerVision via @insightdottech

Improving Safety and Efficiency

Desire for greater video capabilities is widespread, says Abbas Bigdeli, CEO of video analytics company AerVision Technologies. “We’re definitely seeing the trend of organizations wanting to get more out of their video infrastructure. They want more precise data, better security, and better productivity.”

To improve incident detection and increase efficiency, AerVision developed AerWatch, an AI video analytics solution that companies can customize to recognize and respond to specific types of risks—and opportunities.

For example, at large retail or grocery stores, AerWatch can recognize personal belongings or merchandise a customer has left behind in a shopping cart, directing staff to the lost items so they can set them aside for easy pickup. The system can also send alerts about hazards that could cause a slip-and-fall accident.

At museums and theme parks, AerWatch can detect a lost, distressed child and inform a manager of their current location and the point where they likely became separated from parents. After business hours, algorithms can alert security guards if someone attempts unauthorized entry or starts to spray graffiti on a wall.

In some cases, timely intervention may save lives. For example, in busy public venues, AerWatch has been used to alert personnel if a person is loitering, repeatedly pacing back and forth, or trying to climb over a security rail—behaviors that may signal an intention to cause self-harm. First responders are trained to discourage people from impulsive behavior and help them obtain the support they need, a task made easier when AerWatch sends an initial alarm that draws attention to these scenarios.

Gaining Customer Insights from AI Analytics

In addition to improving safety, organizations use video analytics to understand their customers better. For example, a museum in Australia that has 400 video cameras uses AerWatch both for security and for counting visitors. Analytics measure how long visitors spend at each exhibit, a proxy for engagement. Reviewing this information helps staff plan content that sparks audience interest.
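A dwell-time metric like this reduces to simple aggregation once a tracking pipeline emits per-visitor entry and exit times. A minimal sketch, with invented exhibit names and synthetic timestamps:

```python
def dwell_times(detections):
    """Sum per-exhibit dwell time from (visitor, exhibit, t_enter, t_exit)
    intervals produced by a camera-tracking pipeline."""
    totals = {}
    for _visitor, exhibit, t_enter, t_exit in detections:
        totals[exhibit] = totals.get(exhibit, 0) + (t_exit - t_enter)
    return totals

# Synthetic tracking output; times in seconds
detections = [
    ("v1", "dinosaurs", 0, 180),
    ("v2", "dinosaurs", 30, 90),
    ("v1", "minerals", 200, 230),
]
print(dwell_times(detections))
# {'dinosaurs': 240, 'minerals': 30}
```

The engineering effort lives in the tracking stage; once intervals exist, engagement reporting is straightforward bookkeeping like this.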

Algorithms also count the number of visitors using wheelchairs or pushing children in strollers. “If the museum wants to make access more accommodating, they will have data to back up that decision,” Bigdeli says.

Airports, stores, hotels, and banks also use AerWatch to see where service improvements are needed. AI can measure how long people have to wait for an airline ticket, an elevator, a clerk, or an ATM.

Shopping centers like to track visitors during special events or promotions. Some hire consultants to gauge success, but they can’t always provide a comprehensive picture. “With AI analytics, you get much more granular data at a fraction of the cost,” Bigdeli says.

Building Effective Algorithms

For its analytics solutions, AerVision creates pre-trained machine learning models using the Intel® OpenVINO™ toolkit to streamline edge AI development. The pre-trained models are sufficient for some customers, Bigdeli says. For those who want more fine-tuning, AerVision uses OpenVINO to build custom solutions with them.

All solutions use Intel® processor-based hardware, which handles heavy video loads quickly and efficiently, while also enhancing data protection and privacy. AerVision software does not retain personally identifiable information and complies with all regulations in customers’ jurisdictions, Bigdeli says. In addition, the company provides tools to help companies apply their own privacy and access policies.

Expanding the Potential of Video Analytics

While AerWatch is its primary product, AerVision has developed solutions for other use cases, including AerMeal, which measures caloric intake for hospital and care-home patients at risk for malnutrition. Sports teams can also use it to ensure that athletes consume recommended amounts of protein.

With an eye toward the future, AerVision is experimenting with generative AI. One potential solution sorts through vast volumes of video data to create customized reports for different teams. “The report that goes to the security director might be different from the report that goes to the marketing director or the facility manager,” Bigdeli says.

Another generative AI project aims to speed model training. For example, an airline that wants to make sure a food cart gets loaded onto the plane may try to obtain images of the cart from existing video footage. But that’s not as easy as it sounds, Bigdeli says. Hunting for specific images is time-consuming, and the airline may find only one or two views of the cart. Generative AI can synthesize views from other angles, allowing an AI model to learn faster and achieve more accurate results.

Solutions like these are just scratching the surface of AI’s possibilities for video. “As processing power increases, more people are starting to take advantage of edge AI solutions,” Bigdeli says. “Whether companies want to improve energy management, optimize their use of space, or provide better customer service, they can find ways to improve efficiency and productivity with video analytics.”

This article was edited by Georganne Benesch, Editorial Director for insight.tech.