Bolster Perimeter Protection with Video Analytics

A sheep straying into a guarded facility could just be an animal that is lost or separated from its herd. But there have been very real cases of people dressed like sheep walking into critical infrastructure facilities and stealing equipment.

A human can easily tell the actual animal from a fake. But a tired human can slip up. While critical infrastructure facilities might be fully equipped with video cameras, watching hours upon hours of footage can be mind-numbing. In such instances, humans can make expensive mistakes.

It’s why perimeter protection using video analytics and a network of cameras is a job that’s ripe for automation. “Using computer vision and applying analytics makes a lot of sense because machines can analyze video 24 hours a day without getting bored or losing attention, especially in areas where nothing happens most of the time,” says Eduardo Cermeño, CEO of Vaelsys, a company that specializes in AI vision solutions.

Security Automation for Intruder Detection

Vaelsys offers Deepwall, a perimeter protection solution that provides intruder detection, which is more advanced than simply identifying and detecting humans. “We can detect people that are in places where they should not be and we’re very accurate at doing so. We can detect not only walking but also running and crawling,” Cermeño says.

And yes, Deepwall can deal with humans disguised as sheep. “It’s not just about humans or human behavior, it’s about the behavior of suspicious elements. When human intelligence tries to trick artificial intelligence, you need something beyond person recognition; we detect people that don’t want to be detected. We analyze behavior, we analyze how elements are moving, how critical an area is, a lot of information goes into our algorithm,” Cermeño says. Sometimes an umbrella moves around in strange ways and in unexpected places, and Deepwall flags that suspicious activity.

The Deepwall algorithm is a potent combination of deep neural networks integrated with computer vision and applied to a network of cameras. The cameras can be standard definition (SD), high-definition (HD), or thermal. The thermal-imaging version of the solution is called Deepwall Thermai. The kind of camera used depends on the distance that needs patrolling. “If you’re talking about perimeter protection, the farther out you’re able to detect, the better,” Cermeño says. Vision cameras remain accurate out to about 80 meters, while thermal equivalents can cover several hundred meters.

Perimeter Protection for Widespread Operations

The combination of video analytics and camera imaging is particularly attractive in remote and expansive locations. For example, solar farm operators face theft of resources like copper, a common component of photovoltaic panels.

When such incidents happen, it’s not just the loss of copper that’s a problem but also the downtime during which the farm does not generate electricity. “To prevent such incidents, you place cameras on the perimeter, connect those cameras to our solution. And the Deepwall system will analyze the video feed. When it detects an intruder, it’s going to generate an alarm and call the monitoring station to take action,” Cermeño says.
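
To make that flow concrete, here is a minimal, hypothetical sketch of such a perimeter-monitoring loop. It is not Vaelsys code: the camera URL, restricted-zone polygon, detector, and alarm hook are all stand-ins for illustration.

```python
# Hypothetical perimeter-monitoring loop; Deepwall's actual implementation is
# proprietary. The camera URL, zone polygon, detector, and alarm are placeholders.
import cv2
import numpy as np

# Restricted area expressed as a pixel polygon in the camera frame (illustrative).
RESTRICTED_ZONE = np.array([[100, 200], [600, 200], [600, 470], [100, 470]], dtype=np.float32)

def detect_people(frame):
    """Placeholder for a person detector (e.g., a DNN returning bounding boxes)."""
    return []  # list of (x, y, w, h) boxes

def raise_alarm(box):
    """Placeholder: notify a monitoring station (HTTP call, SNMP trap, etc.)."""
    print(f"Intruder candidate at {box}")

cap = cv2.VideoCapture("rtsp://camera.example/perimeter1")  # hypothetical stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_people(frame):
        # Use the point where the person touches the ground to test the zone.
        foot_point = (float(x + w / 2), float(y + h))
        if cv2.pointPolygonTest(RESTRICTED_ZONE, foot_point, False) >= 0:
            raise_alarm((x, y, w, h))
cap.release()
```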

High Performance and Low-Power Computing

Vaelsys works with an extensive range of Intel® Core processors to accommodate banks of cameras. One of the advantages of Intel CPUs is that they deliver the necessary processing power without relying on energy-consuming GPUs. That energy efficiency saves money and helps companies achieve sustainability goals.

Plus, the Intel® OpenVINO™ toolkit helps companies like Vaelsys test-drive AI and computer vision solutions, Cermeño says.
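
As a rough illustration of what OpenVINO inference on an Intel CPU looks like, the sketch below loads a detection model and runs a single frame through it. The model file name and input shape are hypothetical placeholders, not Vaelsys assets.

```python
# Minimal OpenVINO inference sketch (hypothetical model and input shape).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")           # placeholder IR model
compiled = core.compile_model(model, device_name="CPU")   # CPU only, no GPU required
output_layer = compiled.output(0)

frame = np.random.rand(1, 3, 320, 544).astype(np.float32)  # stand-in for a preprocessed frame
detections = compiled([frame])[output_layer]
print(detections.shape)
```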

Use Cases for Video Analytics

Beyond perimeter protection, Cermeño sees video analytics as a powerful security solution, able to detect people or vehicles in restricted areas, but also as a perfect support tool for safety supervision. Computer vision can help detect someone who has fallen or a person not wearing the proper safety gear.

For its part, Vaelsys hopes that the robust shell it has created with the Deepwall solution—metadata generation for video and video optimizations for the Intel platform—can readily transfer to a variety of computer vision applications.

Instead of recognizing intruders, for example, solutions could detect ambulances. Companies interested in a specific object could piggyback on the Vaelsys computer vision platform V4 and plug in a recognition engine for their particular use case. Then “you’ve got a complete solution that’s going to be able to work with any camera on the market and that’s easy to integrate with any software,” Cermeño says. The process can work with vision analytics developed in-house by Vaelsys or with third-party implementations.

The packaged solution Vaelsys delivers stems from a market need to turn a proprietary AI model into a viable implementation. After all, simply having a model is not enough; you need to pair it with a web interface and integrate it with a bank of CCTV cameras, Cermeño says.

Such a plug-and-play approach to computer vision and object detection dramatically reduces the cost of product development. And that’s a good thing, whether video analytics technology ensures worker safety by detecting helmets or protects perimeters by detecting fake sheep.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Empowering Sustainable Smart Cities with Edge AI

Today’s cities face ongoing issues related to pollution, waste management, and energy consumption. But as smart technology and solutions get more integrated into urban environments, efforts are being made to promote sustainability. Axiomtek, for example, uses artificial intelligence and edge computing in recycling and waste bins to enhance efficiency and accuracy.

In this podcast, we explore sustainable smart city efforts, examine real-world use cases, and discuss the necessary infrastructure to achieve sustainability goals.

Listen Here

Apple Podcasts      Spotify      Amazon Music

Our Guest: Axiomtek

Our guests this episode are Jody Cheng, Product Solution Manager, and Manny Hicaro, Application Engineer Supervisor, at Axiomtek, an industrial PC field expert. At the company, Jody is responsible for IoT and edge AI solutions while Manny’s work revolves around enhancing the hardware and software performance for clients.

Podcast Topics

Jody and Manny answer our questions about:

  • 3:16 – Driving factors of sustainable smart cities
  • 5:16 – Latest innovations for sustainable smart cities
  • 9:15 – How edge AI contributes to sustainable solutions
  • 12:46 – Necessary smart city infrastructure
  • 14:57 – Examples of AI-powered sustainability efforts
  • 17:56 – The value of Intel and its technologies
  • 19:36 – Future edge AI smart city implementations
  • 21:49 – Final thoughts and key takeaways

Related Content

To learn more about the latest sustainable smart city efforts, check out our smart cities page.  For the latest innovations from Axiomtek, follow them on Twitter/X at @Axiomtek and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” formerly known as “IoT Chat,” but with the same high-quality conversations around the Internet of Things, technology trends, and the latest innovations you’ve come to know and love. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about smart cities and how to make them more sustainable, with industry leaders from Axiomtek. So as always, before we get started, let’s get to know our guests. We have Manny and Jody from Axiomtek. Manny, what can you tell us about yourself: what you do at the company, and the company itself?

Manny Hicaro: Hi, my name is Manny Hicaro, and I’m an Application Engineer Supervisor at Axiomtek. We’re a computer-manufacturing company and a leader in industrial computers and embedded systems. Axiomtek provides customizable, robust solutions for smart cities, manufacturing, and other critical industrial applications.

My role primarily at Axiomtek revolves around enhancing the performance of hardware and software systems for our clients. I work closely with various teams—from sales to product managers and technical support—to ensure that our customizable solutions are perfectly tailored to meet the specific needs of our customers. This involves everything from conducting benchmarks and compatibility tests to working with our customers directly to meet their project goals and requirements.

Our goal at Axiomtek is not just to provide cutting-edge technology, but it’s also to ensure that these technologies are accessible and effective, enabling our clients to achieve their business and sustainability goals more efficiently.

Jody Cheng: Hi. My name is Jody Cheng and I’m the Product Solution Manager at Axiomtek. I’m responsible for IoT and AI edge solutions, and I’ve been with the company for a little bit over seven years now. Axiomtek has been a key player in industrial computer manufacturing for over 30 years, with our expertise in technology and experience serving industrial customers in the markets like automation, energy, medical, transportation, gaming, and retail.

We gained our market know-how, and we’re now ready to bring our values even closer to the application level. We hope through working with our customers and our eco partners we could bring more valuable solutions to the market, help customers solve the problems, and make the changes with all the emerging and exciting technologies.

Christina Cardoza: Great stuff. I’m looking forward to digging into some of these things. Like you said, Jody, you’re looking to help customers and businesses improve their operations or meet some of the challenges that they’re facing. Sustainability has been a major trend among different organizations and different industries. At insight.tech, we’ve been writing about how manufacturing companies, medical companies—how they can all become more sustainable in their operations.

So, smart cities are one of the big factors in being able to make sustainability efforts happen. We focus a lot on inside the factories or inside the business, but there’s a lot outside in the real world that we can be doing to make our efforts more sustainable. I’m curious—want to start the conversation—why or how is sustainability an impact or a goal of smart cities? Why has it become a major trend in these areas?

Jody Cheng: So, there are a few different factors that have been driving this trend. First off, more and more people are becoming aware of climate change and its effects. That has really pushed cities to be more proactive about reducing their carbon footprints. Since urban areas contribute a lot to greenhouse gas emissions, and the global population is becoming increasingly urbanized, the pressure is on cities to tackle these environmental issues.

So, basically, sustainability is the goal. It’s about keeping economic growth on track while minimizing environmental impact. And, I share this information—according to a study done by the World Bank, 56% of the world population lives in the cities, which is about 4.4 billion people. So, by 2050 it’s expected that seven out of ten people will be city dwellers.

So that’s why cities that focus on adopting practices to combat climate change and improve air quality will boost the overall quality of life for their residents. At Axiomtek, we’re really excited to see this trend towards sustainability growing. It’s all about making sure that our environment stays healthy and thrives in the long run.

Christina Cardoza: Absolutely. And when you consider the amount of people you mentioned living in cities, it becomes that much more of a priority. I just had a conversation around buildings, making buildings smart—they give off a lot of the pollution, about 40% of carbon emissions—and how we can make those more sustainable.

But when it comes to a city, it’s all these different things and all these different people that are stakeholders or really can make a change in making it more sustainable in addition to the buildings. I’m curious, what are some solutions or technologies that can help smart cities become more sustainable?

Jody Cheng: There are a lot of solutions that we can talk about here. Sustainable, smart cities are all about planning technology with environmental responsibility for a thriving urban future. It’s about creating cities that work smarter, not just harder, for both people and the planet.

This is achieved by using smart grid, IoT, data collection—along with innovations in buildings, transportation, and maybe resource management. Smart cities rely on Internet of Things—think of a network of sensors around the city tracking air quality, water usage, and traffic. This data helps city managers optimize resources and reduce waste.

For example, smart grids can balance energy demands, cutting down on fossil fuels. Let’s say traffic lights can adjust based on real-time traffic, easing congestion and lowering emissions. And that’s just the start. The possibilities are endless. Edge computing and AI make things even better by processing data right where it’s collected. This means, like, quicker decisions and more efficient operations, making smart cities even smarter.

Let’s talk about smart grid first. Smart grids are the foundation for sustainable smart cities: an advanced electric grid uses monitoring tools to efficiently manage electricity use. So this helps integrate renewable resources, like solar, and reduces the reliance on fossil fuels.

Another big driver of sustainability is smart buildings and homes. You talked about that earlier. These places have energy-efficient systems like automated lighting that adjusts based on occupancy, or HVAC systems that optimize heating and cooling. Plus, LEED (Leadership in Energy and Environmental Design)-certified buildings, which follow strict sustainability standards, are becoming more common.

Another topic we can talk about is transportation. Transportation is also another key focus for cities aiming to reduce emissions. Cities are developing EV-charging infrastructure and promoting public transit systems with passenger tracking and traffic optimization. These innovations help cut emissions, ease congestion, and offer eco-friendly travel options for residents.

So, last, I’d like to mention resource management. Resource management can be one of the most crucial aspects of sustainability. Managing resources like water and waste is key to sustainability. Automated waste collection and waste-to-energy conversion help reduce landfill use and promote recycling. Also for water: smart integration and advanced treatment processes optimize usage and cut down waste. These are some of the solutions or areas I personally see that will contribute to our society’s sustainability going forward.

Christina Cardoza: I love all those examples. Like I mentioned, we talked about buildings, and that’s just one area, but when you look at it from a smart city perspective, it really is an end-to-end solution that you guys are providing to really make these efforts happen in conjunction with one another and not have these improvements and enhancements happening in silos.

One thing you mentioned was, of course, AI and edge computing. That’s going to be extremely important, especially to all this data collection and all the benchmarking that you were talking about in the beginning, Manny. I’m curious, obviously AI—you’re getting all of this data, and AI is going to be important to be making sense, to be making these predictions, to be measuring it up against these benchmarks. And edge computing is making sure that the data is happening in real time so that we can make quick, informed decisions. From your perspective, what is the role of AI and edge computing contributing to the sustainable solutions beyond some of the benefits that I just mentioned?

Jody Cheng: The advancement of AI and edge computing has elevated these sustainable solutions to a level that we haven’t seen before, with processes becoming more efficient, faster, and autonomous. Both AI and edge computing are transformative technologies that, when they’re combined, significantly enhance the sustainability of the smart cities. So maybe we can break this down and discuss each part of the edge computing and AI, and how they combine into this great form of computing.

So, in general terms, AI has the ability to analyze vast amounts of data collected from various sensors and can build off established algorithms. Through these advanced algorithms and data analysis, AI holds the ability to enable energy-efficiency improvements in all city environments, from buildings and factories to transportation systems and more. This can all happen simultaneously while managing renewable energy integration into the grid.

On the other hand, edge computing is a distributed system that brings computation and data storage closer to where data is generated, improving response times and saving bandwidth. So instead of relying solely on centralized data processing—like we do in cloud computing—edge computing processes data locally on devices such as IoT sensors, routers, or gateways. This proximity to the data source allows for quick decision-making and reduces the need for data transmission to centralized servers—enhancing the responsiveness of the system while also cutting down on the energy consumption and resources associated with transferring large amounts of data.

So, when combined, edge computing with AI brings together real-time data analysis and decision-making at the network’s edge, minimizing latency and bandwidth constraints. This decentralized approach enhances system responsiveness, reduces network congestion, and cuts costs. For city operations this can optimize everything from traffic flow to energy-consumption patterns, reducing energy waste and increasing overall efficiency.

In smart cities, AI-driven insights support proactive and adaptive urban planning. Real-time data on metrics like air quality, traffic congestion, waste management, and energy usage allows city managers to make informed decisions, optimize resource utilization, and minimize environmental impact. This synergy between AI and edge computing helps with traffic engineering and other sustainable urban initiatives, ultimately improving quality of life while addressing environmental challenges.

Christina Cardoza: Great. So, I keep coming back to this building example, just because it’s easier to visualize how you can make improvements in a building. You mentioned the lighting, the occupancy—there’s different infrastructure already involved in there. When we’re looking at an entire city and all of these different things you can be doing from the building, from the transportation, from all of these different areas, I’m curious what type of investment and infrastructure is necessary to make sustainable smart cities possible?

Manny Hicaro: Creating smart cities isn’t cheap. It does need a big investment in advanced hardware, and it also requires a strong network infrastructure. We’re talking about setting up powerful GPU-based edge computers that can handle a variety of processors. These machines are essential for real-time data processing and AI tasks.

We also need to integrate IoT sensors and cameras throughout the city. These devices are key for real-time monitoring and data collection. They gather information on things like traffic, public safety, and environmental conditions—and then process it locally to enable quick decision-making. On top of that, smart grids and intelligent transportation systems help optimize energy distribution and traffic flow, making the city more efficient overall.

But it’s not just about the devices. Having a solid network infrastructure is crucial too. This ensures secure and high-speed data transmission across the city systems. A network of edge data centers strategically placed around the city can boost the efficiency and reliability of this edge computing. These centers reduce latency and speed up the processing of real-time information, plus they provide redundancy, support the quick deployment of new applications, and improve disaster-recovery capabilities while contributing to sustainability goals.

However, implementing these technologies in existing urban infrastructures isn’t easy. It comes with a high initial cost and the challenge of ensuring compatibility with older legacy systems. This is where collaboration between the public and private sectors becomes crucial. These partnerships help align technology deployments and public policies, making sure that the solutions are sustainable and effective in the long run.

Christina Cardoza: And then in addition to these types of partnerships, I’m sure there’s technology partnerships going on behind the scenes as well. But before we get there, I want to talk about some use cases or customer examples that you guys may have. We’ve been talking a lot about the technology, the solutions, what can be done, what are some areas that we can be implementing this technology. But to really paint a picture for our listeners, do you have any customer examples or use cases that you can share with us that highlight the effectiveness of the technology we’ve been talking about?

Manny Hicaro: Absolutely. One of our standout projects involves an AI-enabled recycling bin that has pretty much transformed waste management in several urban areas so far. These bins use advanced AI algorithms to efficiently sort through the recyclable materials, which significantly cuts down the frequency of waste collection. With the help of a high-performance processing system, these bins can quickly and accurately execute tasks, making recycling programs much more effective.

Here’s how it works. People can easily deposit recyclable waste, like plastic bottles and cans, into these bins. The system then autonomously sorts through those materials, ensuring that they are correctly categorized for optimal resource recovery. This boosts sorting accuracy and overall recycling rates. Plus, the system maximizes the physical storage space by automatically compressing papers and bottles, which reduces the need for frequent cleanup visits and improves labor efficiency in waste collection.

These AI-powered recycling bins also take a proactive approach to maintenance. They can notify cleaners through remote management when they need attention. The sensors continuously monitor the fill levels in real time and send notifications when the bin is nearly full. This pretty much streamlines operations and eliminates the need for constant manual monitoring. This results in improved accuracy, higher recycling rates, and greater operational efficiency.

Additionally, cities like Baltimore are making strides with similar initiatives. They’re investing $15 million to deploy 4,000 smart trash receptacles across the city. These solar-powered, Wi-Fi-enabled trash cans work much like our solution, transmitting information about fill levels to optimize collection schedules and enhance overall efficiency.

Christina Cardoza: I love that example. Anytime we can automate and take manual tasks out of the equation, that makes it a lot more accurate. Recycling and being sustainable is a priority for me as a citizen. But when I come to a recycling can and there’s paper, plastic, metal, and there are different pictures of what constitutes waste and what doesn’t, it gets confusing, and sometimes I’m worried that I put it in the wrong bin. So, it’s great that these recycling bins can do the work for you so that we can start making these efforts and be more sustainable, be more accurate, and not have to worry about relying on error-prone processes. So that’s awesome to hear.

I want to come back to, like I mentioned, this probably takes a lot of technology partnerships to do this. We mentioned powerful processors, edge computing. I should mention insight.tech and the podcast as a whole, we are sponsored by Intel. But we have had this theme going on, “better together,” that it really takes teamwork from different organizations, different partners, different experts to leverage all this technology and really create a powerful, impactful solution. I’m curious what the value of your partnership with Intel and its technology has been to enable some of the use cases you just talked about.

Manny Hicaro: Oh, yeah, definitely. Our partnership with Intel has been key to the success of our smart city solutions. Intel’s processors provide the high-performance computing power needed to handle these complex AI and data-processing tasks. And over time Intel has fine-tuned these processors to boost efficiency and performance, making sure that they meet the tough demands of city applications.

Here at Axiomtek, innovation is at the heart of our collaboration, and Intel’s dedication to advancing technologies aligns perfectly with our goals. This allows us to use cutting-edge technologies to develop robust and reliable solutions. Also, this partnership has helped us stay ahead in the AI and edge-computing advancements.

Another benefit is scalability. Intel’s wide range of products lets us customize our solutions for various use cases, ensuring our systems can scale efficiently to meet the growing demands of urban areas. Whether it’s expanding the network of IoT sensors or adding more advanced AI capabilities, Intel’s technology supports the seamless scaling of our solutions. Overall, Intel’s technology and support have empowered us to develop advanced smart city solutions and enhanced our ability to implement them effectively. This partnership keeps our systems at the cutting edge of technology, and it also provides reliable and scalable solutions for cities around the world.

Christina Cardoza: Great. And of course, sustainability is only one aspect of smart cities. So, in the beginning, in your backgrounds, you both mentioned different areas that you are focused on at the company. I’m curious, how else do you see edge AI being implemented across smart cities and benefiting this area?

Manny Hicaro: We talked about how edge AI can help cities become more sustainable by integrating it into foundations of a city—like smart energy grid, resource management, transportation, and smart buildings. As edge AI technology continues to advance and become more effective, it can also expand into areas beyond sustainability. The possibilities are only limited by our imagination.

In transportation, while EV charging and promoting public transportation support sustainability, edge AI can offer many other applications. For instance, real-time sensors can optimize public parking, and self-driving systems can continuously improve as they gather new data. In the future, autonomous vehicles might even coordinate with each other and with traffic infrastructure to improve efficiency and safety. We can also see how automated public transportation could optimize routes and manage passenger flow. Edge AI can enhance traffic management by enabling real-time traffic optimization, taking into account multiple factors like nearby streets and intersections. With more cameras in place, it allows for immediate accident detection and traffic rerouting.

Moreover, AI-powered surveillance can boost public safety by detecting unusual activity and predicting incidents like floods or fires. Infrastructure monitoring can detect anomalies for prompt maintenance, including road conditions and utility lines. This kind of monitoring can even extend to resources like water and air quality. Edge AI’s integration into smart cities shows a lot of promise, and it can enhance daily life through virtual assistants that automate routines and interact with smart devices. These assistants, with AI advancements, can adapt routines based on user patterns, making life more convenient and efficient in the long run.

Christina Cardoza: Well, I can’t wait to see what else Axiomtek does in this space, and I invite all of our listeners to follow along with them, contact the company, visit their website to see how you can be a part of some of these innovations happening, as well as follow along on insight.tech as we cover partners like Axiomtek and others and what they’re doing in these spaces.

Before we go, I just want to throw it back to you guys if there’s any final thoughts or key takeaways that you want to leave our listeners with today.

Jody Cheng: Yeah. We shared various technology perspectives here, and now maybe we can shift our focus back to humanity. In the past, many people have traditionally viewed our economic and environmental concerns as conflicting interests. However, we anticipate that the introduction of edge AI will significantly alter this dynamic for urban cities.

In the past, maintaining sustainable operations often demanded significant human resources for management and close oversight. We saw examples of this in recycling and resource-management operations, where those that incorporate edge AI now operate more efficiently and autonomously, reducing the need for human intervention.

With edge AI advancement, these activities will become more cost-effective, achievable, or more efficient. We’re really excited to contribute to the sustainable journey by offering improved methods to fight the greenhouse effect, reduce carbon emissions in the long term, and maybe leave a cleaner environment and brighter future for the next generation.

Christina Cardoza: Absolutely, and totally agree with you there. The roles and the responsibilities and our impact in this is going to significantly change, especially when you think about the use case and the recycling bin that you provided. It is no longer on the human or the manual effort to make some of this happen. It’s just going to become second nature, and we’re going to start being able to make these efforts even if we don’t know we’re making them.

It’s great to see companies like Axiomtek making all of these different innovations and advancements in this space. I want to thank you both again for joining the podcast, as well as our listeners for tuning in. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Enabling Industrial Automation at the Edge

Industrial applications are growing more sophisticated, thanks to recent advancements in artificial intelligence. “Since the rise of ChatGPT, we’ve seen more and more use cases reliant on generative AI—from autonomous robots and augmented reality to smart cameras,” explains Steven Shi, Senior Product Sales Manager of Boards at AAEON, a leading provider of intelligent IoT solutions.

But increased reliance on AI comes with its own set of challenges. For starters, AI-powered solutions deployed for real-time operations must perform across multiple systems with tight latencies that the cloud is not equipped to handle. As a result, more and more data processing is shifting to the edge.

But existing edge hardware often can’t provide the performance and throughput necessary for industrial AI automation—requiring organizations to adopt new edge hardware.

Anatomy of Industrial Edge-First Hardware Design

To meet the demands of today’s industrial applications, industrial edge systems must deliver exceptional performance in a small form factor, operate in harsh environments with low power consumption, and support Time-Sensitive Networking (TSN).

TSN is an industry standard that enables deterministic communication over Ethernet networks, allowing precise real-time coordination among far-flung systems. This is especially important in industrial automation environments where accurate timing is crucial.

Thankfully, companies like AAEON that specialize in hardware for industrial automation have many years of experience delivering high-performance capabilities alongside efficiency and thermal robustness. AAEON’s COM-RAPC6 and NanoCOM-RAP computer-on-modules, for example, continue this trend.

“Because we’re talking about the edge, sometimes space is also a challenge,” Shi explains. “That’s why AAEON put so much focus on compact designs like the COM Express Mini.”

Designed according to the COM Express standard, both systems feature a compact form factor with considerable power efficiency. The NanoCOM-RAP also provides wide voltage input, allowing it to manage power fluctuations more effectively.

Each module also features the 13th Gen Intel® Core processor. Built to deliver energy-efficient, optimal performance for edge use cases, these processors feature a flexible hybrid architecture with support for hardware-enabled AI acceleration, multitasking, and concurrent workloads.

AAEON has also designed both of its modules for rugged environments and tested them through a unique process known as Wide Temperature Assurance Service (WiTAS).

“As with some AAEON boards and modules, COM-RAPC6 and NanoCOM-RAP are WiTAS qualified,” says Shi. “We have put them through a very strict quality control process to guarantee they can operate in a temperature range from -40°C to 85°C.”

The COM-RAPC6 and NanoCOM-RAP offer 2.5 Gigabit Ethernet, enabling support for TSN. Both COM-RAPC6 and NanoCOM-RAP include a discrete TPM for additional security. Additionally, they are equipped with high-speed PCIe interfaces with support for PCIe expansion through a carrier board. This allows for scaling AI performance by accommodating add-ons like AI accelerator cards via PCIe interface on the carrier board.

AAEON also offers Q-Service, a technical service program in which it leverages its engineering expertise to help clients bring products to market much faster. This includes assisting with both design and debugging, as well as providing software support and BIOS customization. Lastly, the company provides a user-friendly interface for UI development and device monitoring through AAEON Hi-Safe.

Building a Smarter Industrial Edge

AAEON is already creating a new product line that will take advantage of the 14th Gen Intel® Core Ultra processors and provide many more benefits to embedded and industrial manufacturers. The 14th Gen processors deliver even better power efficiency than their predecessors and include both an advanced GPU and an embedded Neural Processing Unit (NPU) for AI acceleration, as well as support for high-speed Wi-Fi 6E.

“Systems built around these processors will be able to better handle the environmental challenges and resource requirements at the edge. I strongly believe that this will open new opportunities and possibilities for what edge hardware can achieve,” says Shi.

According to David Huang, Product Manager at AAEON, the most significant feature of the new processors is the embedded NPU. “Moving forward, I think AI-enabled hardware will eventually be as ubiquitous as cell phones or calculators. The embedded Wi-Fi 6E capability will also be very beneficial for our designs over the next three to five years,” he explains.

Looking Toward the Future of Industrial Automation

In the future, AAEON expects that demand for edge AI will only continue to grow. Hardware that features on-board AI acceleration will become increasingly important amid mounting data processing requirements. AAEON, for its part, is more than ready.

“We foresaw that IoT was coming, and that there would be an age after that defined by artificial intelligence,” explains Huang. “In light of that, starting from 2016, our focus has been on creating high-performance embedded products with a small form factor. By doing this, we’ve enabled our customers to perform the necessary edge processing to support applications such as computer vision and autonomous mobile robots—and we plan to continue down this road.”

For industrial organizations looking to solve the challenges of AI-driven edge automation, COM Express modules such as the COM-RAPC6 and NanoCOM-RAP provide the necessary performance, power efficiency, and network throughput. Deploying such hardware with assistance from vendors like AAEON can help businesses ensure they’re ready to make the most of what AI has to offer, both now and in the future.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Checkout with AI for Faster Service and Fewer Losses

Has your retail store considered switching to self-service kiosks? Are you worried about potential challenges such as “unexpected item in bagging area,” accurately identifying produce, or ensuring a smooth checkout process without constant staff intervention?

These are common obstacles on the path to digital transformation that both retailers and consumers face. But intelligent retail technology has the potential to significantly enhance the customer experience and streamline operations. Improving the employee experience and retaining skilled staff are additional benefits that can greatly impact your bottom line.

Matt Redwood, Vice President of Retail Technology at Diebold Nixdorf, a retail technology company, guides us through the landscape of retail technology. He discusses AI solutions for common retail inefficiencies, the importance of purposeful innovation, and the value of leveraging technology partners throughout the transformation journey (Video 1).

Video 1. Matt Redwood, VP of Retail Technology at Diebold Nixdorf, discusses how AI is transforming in-store retail operations and experiences. (Source: insight.tech)

What are some top challenges retailers face today?

Most retailers are struggling with the same challenges. And making sure that the in-store experience is as good as possible for their customers is a key one. Post-Covid, retailers are really investing very heavily in that again. But they’re also being squeezed both on the top line and on the bottom line—the cost of goods is up, the cost of freight, the cost of managing and running stores—and they have to find ways of driving efficiencies in the store while also delivering that great consumer experience. It’s a real balance between getting the economics of retail right and satisfying the needs of the consumer.

And competition is as high as it’s probably ever been in retail, which is good in certain aspects. It helps with pricing and keeping inflation under control, but on the flip side, if consumers have a bad experience in a store, it’s easy for them to flip to another brand.

How is AI being used to address some of those challenges?

Generative AI really took off in retail in 2023, and certainly some companies really rushed to an AI endgame, with this euphoric view that AI could replace all the existing technology within stores. I think sometimes we forget that although the technology may exist—forget whether it’s commercially viable or practical to deploy it—you have to have consumer adoption. If you don’t have consumer adoption, the technology is worthless. That’s what I call the hype curve.

What a lot of retailers are now doing instead is focusing in on their pain points with what we call point-solution AI technology, that is, specific AI deployed for a specific use case to solve a specific problem—technologies like facial recognition for age verification. For example, if you’re trying to buy a bottle of wine, you have to wait for a member of staff to approve your ID. And that wait is compounded by the fact that retailers are struggling to find staff. Using AI in that environment drives greater efficiency, it reduces that requirement on members of staff, and it boosts that consumer experience.

Another big one is anti-shrink technology and using AI to make it more difficult for those who are trying to steal. But it can also help when someone may have just been unfamiliar with a process or have genuinely made a mistake—making sure that that is being caught without making it a bad experience for that particular customer.

We’re also starting to see AI applied on top of existing technologies to make them more efficient, to make them easier to use, to close loopholes, and to boost the consumer experience. One example is in-store safety—using AI on top of CCTV networks to make sure fire exits aren’t blocked, say. Or heat mapping to understand the flow of consumers around stores—making that flow easier but also potentially commercializing that flow.

What is the best way for stores to implement AI?

The “build it and they will come” mentality does not work with retail technology. We track the consumer-adoption curve and we track the technology-development curve, and it’s important to find something broadly in the middle.

We always recommend starting with data. It’s very easy to be swamped by it—we call it paralysis by analysis. But if you can segment your data, it can provide a lot of insights. You can really analyze and understand: How is the store operating? Where is the friction within the staff journey? Within the consumer journey? You can then quantify the effect that that friction has. It builds the picture to say, “Okay, I’ve got a problem statement that I want to solve. It’s having this impact on consumers and staff, and this is the impact to my business.” And that’s relatively easy to calculate.

The more problematic piece is then finding the right innovations to deploy in the store to solve for that issue. But starting with that data highlights where the biggest areas of inefficiency are and then provides the compass to point you in the direction of the right technology. It’s also then very easy to actually measure how successful that technology has been once it’s been put into the store.

Tell us more about matching the right technology to a specific problem.

At Diebold Nixdorf, we’ve really focused on three core solutions where the biggest friction points are. One is age verification, which I mentioned before. Facial recognition provides a much better experience for the consumer. It’s faster, and faster transactions mean that consumers are moving through the front-end quicker. That means fewer queues, and queuing is consumers’ biggest checkout bugbear. So we remove two of the biggest friction points associated with checkout with one piece of technology.

There are also technologies centered around the product, such as efficient item recognition at checkout—particularly in grocery for fresh fruit and vegetables. That is the second solution. And it’s not just for non-barcoded items like produce. In some environments, particularly in smaller stores, why should you have to scan the barcode when you could identify the item by its image?

And then, finally, shrink. Of course, the natural argument is that self-service is a natural place for shrink because it’s unmanned in a lot of environments. But with those who are maliciously trying to steal, even if we close all of the loopholes at self-service, they will find somewhere else in the store to steal from. We’ve really focused our AI efforts there on behavioral tracking. Once you can start to identify behavior, it doesn’t matter where within the store you deploy the technology. Of course, we focus on the front-end first: self-service checkouts and POS lanes. But then we run that same solution onto the CCTV network, and then we can identify shrink anywhere in the store.

Where does the human element come into play?

The human element is really, really important to self-service, and it’s quite often overlooked. Self-service is more about staff redistribution. Attracting and retaining staff is a big problem for retailers, so they have to use their staff wisely. And where self-service is playing a major role is in unlocking members of staff to interact with consumers where those consumers need the most help—finding an item, asking a question about it, navigating the store—places where it really makes sense to deliver that consumer experience. During Covid, retailers that had self-service had much greater flexibility of operations within their stores; post-Covid, self-service actually allows them to boost the level of consumer experience where it really counts.

Let’s go back to the challenge of preventing shrinkage. So, it’s relatively easy to identify if someone has stolen. What you then do in that scenario is more difficult. If someone is stealing maliciously, you don’t want to put your staff in danger or in an environment that they don’t feel comfortable with. You also don’t want to alienate or embarrass someone who has genuinely made a mistake. So we are very much putting the human element into the situation here; the situation will be dealt with differently depending on the use case of the theft.

If there’s an instance of shrink, an alert is sent to a member of staff. All the information is put in that person’s hands so that they can deal with the situation in the way that they see as appropriate. And staff training comes into play here. We have a number of great partners that work on staff training to give employees the toolkit they need. Then, when they approach that member of the public—and they’re approaching them knowing exactly what’s happened—they’re trained to deal with that situation in the most agreeable way possible. So the technology is only one-third of the actual solution; the human element is a massive part of it that shouldn’t be overlooked.

How is Diebold Nixdorf solving customers’ retail challenges?

As a solution provider that retailers work with to build out their technology—not just across checkout but all the way across the store—we quickly realized that it was unrealistic to think that we could have 20 or 30 different solutions—all in the AI space, all providing different use cases, but none of them talking together. So we work with a third party that has a very mature AI platform, and that becomes the backbone for anything the retailer wants to do within their store from an AI perspective.

We are the trusted partner, the integration partner. We will provide applications that can sit on top of that platform—like age verification, shrink reduction, item recognition, process or people tracking. But if there is a particular partner out there that is market leading in, say, health and safety, we can plug them on top of the platform, too. It doesn’t make sense for us to reinvent the wheel.

And what that means is that the retailer can build this ecosystem of AI partners, all plugged into a single platform, and the solution is very, very scalable. It will ultimately move us towards what we call intelligent store. It isn’t necessarily removing the physical touchpoints or removing the existing technology; it’s about providing intelligence to retailers.

Every device in the store is effectively a data-capture device—a shelf edge camera or a self-service checkout or a scanner—these are all data inputs. And that’s a two-way street: You can push data down, you can pull data back. The AI platform allows you to connect all of these together to create that intelligent store.

It does mean that there’s a huge amount of data available, but I think the retailers that are really going to advance quickly are the ones that work out what to do with it. Because it can and it should inform every single decision or direction that a retailer takes—how products are priced, where they are positioned within the stores, how stores are staffed.

What is the value of technology partnerships in making AI retail solutions happen?

We work very, very closely with Intel—not just on the AI topic but for our core platform itself. And not just on the solutions that we deploy into stores today but also on our development roadmap. And we follow the developments at Intel closely, too—where Intel is going with its solutions and how we can better integrate those into our solutions.

We work particularly closely with Intel on some of the scalable platforms. Retailers have technology requirements today, but—particularly with these AI topics—the amount of computing power that will be required in three or five or seven years will be very, very different from the requirements now. So providing retailers the ability to scale their technology to meet their future requirements is an absolute game changer.

Any final thoughts for those looking to incorporate AI in retail?

I would say, start with the data. Identify the business requirements or problems that you are looking to solve and then find the right provider that’s going to enable you to deliver against those requirements today, but that is also going to give you that longevity of scalability. It is a marriage, and you have to make sure that you’ve made the right choice.

Related Content

To learn more about AI in retail, listen to AI in Retail: Stop Shrinkage and Streamline Checkout and read New Retail POS Solutions Transform the Checkout Journey. For the latest innovations from Diebold Nixdorf, follow them on Twitter/X at @DieboldNixdorf and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Securing the Edge with Hyperconverged Infrastructure and AI

The expansion of distributed infrastructure is fundamentally transforming the cybersecurity landscape. Data generation and processing increasingly shift toward the edge. As a result, traditional centralized security measures are inadequate for the escalating complexity and scale of emerging threats.

To address these evolving demands, hyperconverged infrastructure is extending beyond traditional data center confines. This extension necessitates the adoption of hardware that delivers data-center-class performance while withstanding the environmental challenges of edge locations.

“The dynamic and varied nature of edge environments requires a new approach to security, one that is more adaptive and intelligent,” explains Stéphane Duburre, Product Line Manager at Kontron, a leader in embedded computing technology. He points to the latest Intel® Xeon® processors and Intel® Arc GPUs as examples. “These advanced processors enable real-time edge AI security analytics, which are crucial for data protection and operational continuity in harsh edge environments.” 

Further complicating the network edge landscape, communication within industrial environments is transitioning to Time-Sensitive Networking (TSN), which supports deterministic messaging on standard Ethernet networks. This advancement facilitates seamless integration of OT and IT networks. But it also expands the attack surface for security threats, requiring a more sophisticated and robust security approach.

Adapting to New Edge AI Security Needs

To meet these needs, Kontron developed the ME1310, a high-performance multi-edge platform. Where harsh environments would cause other equipment to fail, the ME1310 excels thanks to a 22-core Intel Xeon processor rated for temperatures of -40°C to 65°C. “It sustains performance even under fluctuating or extreme conditions,” Duburre notes.

When more performance is needed, the ME1310 can accommodate two PCIe Gen 4 accelerators, including Intel Arc GPUs for AI acceleration. This adaptability allows for significant enhancements in processing power and speed—critical for applications requiring intensive computation and real-time data processing.

In applications that need high-bandwidth packet processing, the platform’s integrated hardware delivers up to 200 Gigabit Ethernet of HAL2 and HAL3 switching. With support for Precision Time Protocol (PTP) for TSN networks, the ME1310 facilitates data transfers across deterministic networks—maintaining security across increasingly integrated OT and IT environments.

By addressing these challenges, the ME1310 provides a compact, versatile solution that brings data center-level capabilities to the network edge, enabling organizations to navigate the complexities of modern network environments with enhanced operational security and efficiency. 

The Role of AI at the Network Edge

Hyperconverged platforms like the ME1310 lay the foundation for the transformative role of edge AI security. With its ability to learn from and adapt to network activities in real time, AI enables a new dynamic of immediate, autonomous responses to emerging threats. By continually analyzing data, AI significantly improves both the understanding and mitigation of evolving threat behaviors, thereby strengthening overall security measures, according to Duburre.

For AI to be most effective, it must be deployed directly at the network edge. This reduces latency significantly and decreases reliance on centralized data centers, which is vital for timely decision-making in environments where security is critical.

But deploying AI at the network edge introduces unique cybersecurity challenges that differ from traditional data center environments. These include heightened concerns over data privacy, increased vulnerabilities in security devices and network infrastructure, and the complexity of managing security protocols across dispersed and varied edge locations.

But “the integration of Intel Arc GPUs with Intel Xeon D processors enables robust edge AI security capabilities,” explains Duburre. This allows for advanced data analytics and real-time encryption and decryption at the edge.
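
As a simple illustration of what “encryption and decryption at the edge” can mean in practice, the sketch below seals a telemetry reading with authenticated encryption before it leaves a device. It uses the Python cryptography library; the key handling, payload format, and device ID are assumptions for illustration, and it says nothing about how the ME1310 itself implements the capability.

```python
# Illustrative only: authenticated encryption of edge telemetry before transmission.
# Key management, payload format, and device IDs here are hypothetical.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, provisioned via a TPM or key server
aesgcm = AESGCM(key)

def seal(payload: bytes, device_id: bytes) -> tuple[bytes, bytes]:
    """Encrypt a payload, binding it to a device identifier as associated data."""
    nonce = os.urandom(12)                  # must be unique per message
    return nonce, aesgcm.encrypt(nonce, payload, device_id)

nonce, ciphertext = seal(b'{"temp_c": 71.5, "line": 3}', b"plc-gateway-07")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"plc-gateway-07")
```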

In manufacturing environments, for example, the ME1310 can use AI to detect and respond to operational anomalies. Duburre elaborates, “Such capabilities allow for the immediate analysis of unexpected stoppages or irregular machine behavior to determine their cause—be it a potential cyberattack or a mechanical failure.”
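
A heavily simplified sketch of that idea, not Kontron’s algorithm: flag a machine telemetry reading when it deviates sharply from its recent history. The signal name, window size, and threshold below are illustrative assumptions.

```python
# Toy rolling z-score anomaly detector for machine telemetry.
# Signal name, window size, and threshold are illustrative assumptions.
from collections import deque
import statistics

class SpindleCurrentMonitor:
    def __init__(self, window: int = 120, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, amps: float) -> bool:
        """Return True if the new reading looks anomalous against recent history."""
        anomalous = False
        if len(self.history) >= 30:  # wait for enough context before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(amps - mean) / stdev > self.threshold
        self.history.append(amps)
        return anomalous

monitor = SpindleCurrentMonitor()
for reading in [4.1, 4.0, 4.2] * 20 + [12.7]:   # sudden spike at the end
    if monitor.observe(reading):
        print(f"Anomaly: spindle current {reading} A")
```

In a deployment, a flag like this would only be the trigger; correlating it with network activity and other signals is what distinguishes a cyberattack from a mechanical failure.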

The Future of Edge AI Security

Looking ahead, the role of hyperconverged platforms like the ME1310 in edge computing is poised to expand significantly. As more organizations leverage IoT and other advanced technologies, demand for localized, powerful computing solutions will continue to rise. Hyperconverged platforms are uniquely positioned to meet these demands, offering compact, versatile solutions that bring data center-level capabilities to the network edge.

For industry professionals navigating the complexities of modern network environments, platforms like the ME1310 can significantly enhance operational security and efficiency. By adopting these sophisticated solutions, businesses can ensure they remain at the cutting edge of technology, prepared to face the challenges of tomorrow’s digital landscapes with confidence and resilience.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

AI Technology Brings Gold-Medal Event Experiences

The focus of the 2024 Olympic and Paralympic Games in Paris is, of course, on the athletes and their excellence and dedication. It’s a rare opportunity for those of us who maybe sprint only to catch a bus or an elevator to track the athletes as they push the boundaries of the human body and spirit. A small number of fortunate spectators will be in France this summer to witness the spectacle in person; the rest of us will follow along at home on our electronic devices.

So, whose focus is on making that spectator event experience a smooth and fulfilling one—both abroad and at home? One of them is Sarah Vickers, Head of the Olympic and Paralympic Program at Intel, who will walk us through the process of leveling up event experiences through technology—the Olympics and Paralympics in particular. She’ll cover Intel’s involvement behind the scenes before, during, and after the Games; the crucial role of data in making decisions on the ground there; and how the experiences of Paris 2024 can inform not only the 2028 Games in Los Angeles but other kinds of entertainment events as well (Video 1).

Video 1. Sarah Vickers, Head of the Olympic and Paralympic Program at Intel, explores the latest technology powering the Games. (Source: insight.tech)

Can you give us an overview of Intel’s involvement in the Games?

It is the largest sporting event—and the most complex sporting event—on Earth, and it has billions of watchers around the world. So it’s a really exciting opportunity for us to demonstrate the leadership of Intel technology in a scalable way.

We think about integrating Intel technology to help with the success of the Games in a variety of ways. There are the really complicated operations involved in delivering the Games—moving athletes and fans and volunteers around, getting people from point A to point B. That’s complex in itself, but doing that over 17 days across so many sports adds another level of complexity.

There’s also the fan experience beyond the operational ease of getting around, which is more about the time they spend outside of the Games. The sports themselves provide great entertainment, but then there’s all that in-between time. What can we do to help that experience be even better for the fans?

Then there’s the broadcast experience for the billions of people watching at home, which has become more involved when you think about all the different ways people consume media now. So, we work with Olympic Broadcasting Services to deliver outstanding experiences based on Intel technology and applications.

How is that Intel technology being used behind the scenes for Paris 2024?

We start working with the International Olympic Committee—the IOC—and with the International Paralympic Committee and with the specific organizing committee—in this case, Paris 2024—years in advance. We need to really understand what we are trying to solve. Also, how can we take what we’ve done in the past and make it better? So, we’ve taken solutions that emerged from Tokyo in 2020 and improved them. And then, what are the new challenges that have evolved since the last Games?

The Games are also a really excellent grounds to demonstrate the whole Intel idea of “AI Everywhere,” where Intel AI platforms have the opportunity to change a lot of things. One good example is digital twinning, as in having a digital twin of all the event venues to understand in a 3D way what they are going to be like during the Games.

If you think about broadcasters, they really need to understand where camera placement is going to be and how that’s impacted by different things. If you think about the transition from the Olympic Games to the Paralympic Games, there are a lot of changes that need to happen for accessibility for the athletes and things like that. Digital twins make it possible to do those things in advance, rather than doing it as it happens and realizing that certain solutions don’t actually work. There’s also some reduction in travel, because you can work with a digital twin on your PC from anywhere.

“The Games are also a really excellent grounds to demonstrate the whole Intel idea of #AIEverywhere, where Intel #AI platforms have the opportunity to change a lot of things” – Sarah Vickers, @Intel via @insightdottech

Another use case that we’re helping out with from an operational perspective is just understanding the data. There are a lot of people behind the scenes at the Games—the media that’s on the ground, all the workforce—so we’re helping the IOC and Paris 2024 understand the people-movement factor to optimize facilities for them. That could be about having the right occupancy levels in a venue, about making sure people have the right entries and exits, and really using that data to make real-time decisions. That will also help inform the next Games, because those teams will have a base set of data to help them model and plan for the complicated situations involved there.

One final example is on the athletes’ side. This is the athletes’ moment; for some of them it’s the highest moment in their careers. So, what you want to do is make it as uncomplicated for them as possible, so they can focus on their performance and not think about the things that enable them to get to that performance—food, travel, and accommodations.

So, for these Games we’re implementing a chatbot based on Intel AI technology. It’s going to enable athletes to ask questions and get conversational answers about day-to-day things—like food, travel, and accommodations. And that chatbot will continue to get smarter as we get more answers and understand what’s working. I think it’s really going to be a game changer for athletes in Paris.

Walk us through the process of your involvement with the Olympics and Paralympics.

The first thing we say is: “What needs to be delivered? What are we trying to solve? What are we worried about?” There’s a set of expectations for every Games, but then there’s also that set of expectations for what we want to do that’s different from the last time.

And then we do an assessment and ask, “How can Intel technology help?” We work very closely with a number of partners to try to figure out that question. And then we develop a roadmap of solutions. Some of those solutions are delivered in advance: digital twinning, for example. The benefit of digital twinning is not during the Games; the benefit is really months before the Games. Then there are other solutions that obviously are for during the Games. Hopefully, during the Games themselves, everything goes smoothly, and we can just enjoy it and watch our technology shine. But, of course, we have staff on-site to make sure that everything goes off without a hitch.

What about after the Games, what happens next?

There’s so much data involved with the Games, right? There’s all this content—broadcast data, all the highlights. Then there’s all the data that we’re helping the IOC collect in order to understand people movement and things like that. And that data is definitely being used to create models to help plan the next set of Games, as I mentioned before, as well as other kinds of entertainment.

One of the really interesting post-Games use cases we’re working on is with Olympic Broadcasting Services around using AI platforms for highlights of the Games. We’ll be able to create highlights that just weren’t possible before, because until now they were all generated manually, by a limited number of people.

But if you think about how we consume broadcast these days, we are much more demanding in our expectations; we want things that are a little more personalized now. And there are 206 different countries participating in the Games—multiple languages, multiple sports. Some of the bigger countries have traditionally dominated the highlights space, and certain sports are really important in those countries and others aren’t important at all.

So, what the AI highlights will be able to do is generate highlights that are really customized to the people in the places that are viewing them. The models will also learn over time and get smarter, and then the fans are going to get even better and more personalized highlights.

Can the Intel technology that’s used for the Olympics be applied to other sectors?

Almost every application that Intel has for these Games, there’s a use for it at other events but also beyond sport. The way we think about it is: “How does this demonstrate what we can do?” And then: “How does it scale?”

One example is a really fun application of AI platforms called AI Talent Identification. It uses AI to do biomechanical analysis to help fans understand which Olympic sport they’re most aligned with. The fan does a bunch of fun exercises, Intel mashes up that data, and then they get the result. But if you think about what that biomechanical analysis can do, this application can be used in a variety of ways to improve people’s lifestyles—physiotherapy, occupational health. And think about digital twinning: you’re seeing a lot of that in manufacturing, in cities. It depends on the goals, but these types of technologies can definitely benefit many outcomes.

What is the value of Intel’s partnerships and ecosystem during the event?

The Games are going to be a massive event, and in this post-pandemic era I think we’re all excited to see them come back into their glory. It’s very exciting, but it’s obviously very complex. Paris is a giant, complicated city even without an Olympic Games or Paralympic Games, and so bringing it all together is going to be hard.

Also, AI—and technology in general—has gotten smarter and become more mainstream, and that has affected what the expectations are around it. But we can use all the data that’s generated by it to build complex and interesting models—the compute is possible now—and there are going to be a lot of different AI applications that Intel will facilitate throughout the Games.

But Intel doesn’t do things alone; strong partnerships are crucial. We really try to understand what the best solution is, and then we work with the appropriate ecosystem to help deliver it—other top Olympic partners as well as partners at the local level. Working with those trusted partners, Intel can help develop the solutions to deliver an amazing Games. We’re really excited to be a partner of the International Olympic Committee and the International Paralympic Committee to help make these Games the best yet.

Related Content

To learn more about technology powering event experiences, listen to Game-Changing Tech Takes Event Experience to the Next Level. For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Virtualization Opens Doors for Physical Security ISVs

The physical security market is booming, with customers eager to adopt AI, computer vision, and other emerging technologies. This gives systems integrators and independent software vendors (ISVs) an unprecedented opportunity to enter the market and distinguish themselves with software offerings.

Hyperconvergence makes this opportunity even more inviting. Rather than relying on multiple, separate hardware components, hyperconverged architecture consolidates virtual computing, network virtualization, and software-defined storage into a single integrated system. These systems are more robust, easier to manage and deploy, more cost-effective, and less energy-intensive.

Consequently, the market is moving away from the single-purpose hardware of specialized surveillance systems and toward standard Intel® processor-based servers and appliances that can run multiple virtualized workloads. This shift brings hardware costs down and boosts the value of software.

The challenge for system integrators and software providers is to take advantage of these technology and market dynamics.

“Most big video surveillance solution providers bundle their software and hardware,” explains Tom Larson, President at Velasea LLC, a system builder specializing in hardware and computer vision. “This limits the opportunities to add value with additional software. Investing in hyperconverged hardware tends to be similarly unappealing.”

“Many companies involved with AI and computer vision don’t want hardware on the books,” Larson says. “That’s why we created a virtual OEM program that allows software experts to stay out of the hardware game.”

Opening Up the Physical Security Market

Originally founded as a spinoff of an IT distribution company, Velasea has evolved into a full-service technology aggregator that specializes in integrating multiple systems and architectures into a single appliance.

“Our goal is to help software companies enter the physical security market,” says Jimmy Whalen, CEO of Velasea. “Our appliances enable them to focus on software rather than hardware while ensuring those appliances are easy for their customers to deploy and upgrade.”

As part of this philosophy, Velasea works closely with its technology partners to streamline delivery of hyperconverged systems.

“There are challenges with virtualization and new architectures in the physical security space that Velasea is uniquely qualified to address,” Larson explains. “One is hardware consolidation, which happened a decade ago in IT but is still in the early stages in physical security. This can present challenges for security integrators who don’t have our background in IT infrastructure.”

Velasea builds appliances to de-risk projects. End users get something that works, and ISVs get an appliance with well-understood performance. More important, that appliance combines everything into a single hyperconverged system—so businesses can gain all the benefits of hyperconvergence without needing to think about underlying complexities.

Gaining easy access to hyperconverged systems is a boon for companies looking to expand into the surveillance space. Virtualization gives them the flexibility to test, develop, and roll out new features and solutions rapidly, responding to market demands with agility. What’s more, virtualization unlocks new levels of scalability and efficiency, enabling software companies to integrate cutting-edge technologies into their solutions more effectively.

Gaining easy access to hyperconverged systems is a boon for companies looking to expand into the #surveillance space. @velaseasystems via @insightdottech

New Path to Hardware Virtualization

Velasea collaborates with partners such as Quantum—a company that specializes in video and unstructured data—to bring the Quantum Unified Surveillance Platform (USP) to market. The USP solution consolidates compute, storage, and networking resources into a single virtualized solution capable of hosting not just video management systems but also a range of other security applications.

Supported by a subscription-based licensing model, Quantum USP can run on any hardware that incorporates Intel® Virtualization Technology (Intel® VT), which allows multiple workloads to operate simultaneously on a single shared hardware resource. This hardware-agnostic approach not only provides security integrators with unmatched flexibility in terms of infrastructure and architecture but also greatly reduces complexity and total cost of ownership.
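As a practical aside, whether a given server can host such virtualized workloads comes down to the CPU exposing Intel VT-x, which Linux reports as the vmx flag in /proc/cpuinfo. The short script below is an illustrative pre-deployment check, not part of Quantum USP.

```python
# Illustrative pre-deployment check for Intel VT-x on a Linux host.
# Not part of Quantum USP; the "vmx" flag in /proc/cpuinfo marks Intel VT-x support.
def vt_x_supported(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "vmx" in line.split()
    return False

if __name__ == "__main__":
    print("Intel VT-x available" if vt_x_supported() else "Intel VT-x not detected")
```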

Leveraging the Power of Partnerships at the Edge—and Beyond

Velasea is exploring new use cases around edge computing. For example, Velasea recently helped an OEM develop a Power over Ethernet (PoE) switch based on the 12th generation Intel® Core processor that incorporates both AI and a Video Management System (VMS). By consolidating these functions onto a single hyperconverged platform, Velasea helped the company gain a competitive advantage with a more capable and efficient solution.

Alongside smarter appliances, collaborations like the one between Velasea and Quantum can support applications well beyond video surveillance—and even outside the bounds of physical security. In addition to broadcasting, some potential markets identified by Velasea include retail, logistics, and public safety. That, according to Larson, is only the beginning.

“There’s a new generation of software emerging that is changing the game, and it’s going to change rapidly,” says Larson. “People are writing better code and utilizing systems better, and the result is that we’re seeing the entire landscape of physical security evolve. We partner with Intel, integrators, and software companies to be part of that evolution, developing optimized solutions to help businesses solve ‘last mile’ problems faster.”

“Our mission is to be a trusted partner for ISVs, providing them with the solutions and expertise necessary to support their customers,” he concludes. “It’s our partnerships that make this possible.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Retail: Stop Shrinkage and Streamline Checkout

The retail landscape is riddled with challenges—staffing shortages, supply chain disruptions, and inflation. But there’s a solution with a powerful ROI: AI in retail.

Imagine self-checkout with flawless image recognition, streamlined transactions, and reduced labor costs. AI can also free employees up so they can provide more meaningful customer interactions and boost customer satisfaction across the store.

This podcast dives deep into how AI transforms retail operations and enhances the in-store experience. Discover how AI in retail improves efficiency, cuts costs, and strengthens customer loyalty.

Listen Here


Apple Podcasts | Spotify | Amazon Music

Our Guest: Diebold Nixdorf

Our guest this episode is Matt Redwood, Vice President of Retail Technology at Diebold Nixdorf, a financial and retail technology company. Matt is a strategic business and transformational retail technology leader. At Diebold, he is responsible for ensuring top retailers can access high-quality hardware, software, and services.

Podcast Topics

Matt answers our questions about:

  • 1:59 – Different challenges retailers face today
  • 4:27 – Real AI benefits for in-store experiences
  • 9:39 – Where retailers can start implementing AI
  • 14:46 – The human element in AI transformations
  • 23:16 – Real-world customer use cases
  • 28:46 – Technology partnerships making AI in retail possible
  • 31:11 – Final thoughts and key takeaways

Related Content

To learn more about AI in retail, read New Retail POS Solutions Transform the Checkout Journey. For the latest innovations from Diebold Nixdorf, follow them on Twitter/X at @DieboldNixdorf and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” formerly known as “IoT Chat,” but with the same high-quality conversations around IoT, technology trends, and the latest innovations you’ve come to know. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today I’m joined by Matt Redwood, Vice President of Retail Technology at Diebold Nixdorf. Hey, Matt, thanks for joining us.

Matt Redwood: Hi, Christina. It is great to be here speaking with you today.

Christina Cardoza: So, for those of our listeners who are not familiar with Diebold, what can you tell us about the company and what you do there?

Matt Redwood: Diebold Nixdorf is a technology company of two halves. We provide banking systems to the world’s largest banks, and we provide retail technology to the world’s largest retailers. I’m responsible for retail technology. We provide hardware, software, and services to most of the 25 top retailers globally, as well as quite a few tier-two, tier-three retailers.

And we generally cover front-end technology, which we’ll go into more detail on, software and enterprise software. And then we provide most of the services—break/fix and help desk services—to retailers to make sure all their technology is up and running for the maximum time possible.

Christina Cardoza: Great. And obviously we’ll be focusing on the retail aspect of Diebold Nixdorf today. We’ll have to get someone else on a later podcast to talk about the financial aspects of the company.

But the last time we spoke with you, Matt, it was for an article on insight.tech, and we spoke about POSs transforming retail checkouts to improve customer experiences in stores. But customer experiences—I think that’s just one pain point that retailers are facing today, one challenge. So, that’s where I wanted to start the conversation off today. What are the different challenges retailers face today, in addition to customer service, in stores?

Matt Redwood: So, it’s a bit of a tough time for retailers. And I think, regardless of what sub-vertical of retailer you are in, I think most retailers are struggling with the same challenges. So, on one side, as you said, customer experience is key—the desire or drive to make sure that the in-store experience is as high as possible with this ever-changing horizon or landscape amongst consumers, that their expectations continue to rise. So, the horizon of expectation continues, and retailers are really chasing after that. And we are really starting, post-COVID, to see retailers really investing again very heavily in that in-store experience, which is great to see.

On the flip side, on the top line and bottom line, they’re being squeezed. I think you can read in the press global and economic trends that are driving the cost of goods up, the cost of freight up, the cost of managing and running their stores up. So, their top line is being squeezed, their bottom line is being squeezed, and they must find ways of driving efficiencies in the store while also delivering that great consumer experience. It’s a real balance between getting the economics of retail right, as well as satisfying the needs of your consumers.

And competition is as high as it’s probably ever been in retail, which is good in certain aspects. It helps with pricing and keeping inflation under control. But on the flip side it means that consumers are very flippant in terms of where they get their experience from and where they shop. If they get a bad experience in a store, it’s easy for them to flip to another brand and get a better experience of potentially better products, better prices. So, it’s a very dynamically changing environment, very difficult one for retailers today.

Christina Cardoza: I’ve seen a lot of retailers start adding new technology, more intelligent technology and sensors, to be able to do some of these things: collect data at the edge in real time so they can make decisions as they’re happening. A lot of this is being powered by artificial intelligence, and I think we’re in a stage or a point today in the industry where AI is everywhere, and everybody’s trying to use it and get the benefits from it.

So, from your perspective, how is AI being able to address some of those challenges that you talked about, and what’s the reality of it? What are the real benefits that are coming? Because I feel like sometimes there’s hype, but where can we start using and getting actionable insights?

Matt Redwood: I think 2023, for most people, will be known as the year of AI. It’s where generative AI really took off in retail, and we started to see more and more AI applications in the retail market. And certainly, some companies really jumped to what I would consider the end goal of AI—which is completely changing the technology landscape, completely changing the customer journeys, the staff journeys, how you operate and run your stores—with this kind of euphoric view that AI could remove all technology that existed within stores.

That’s what I call the hype curve. We’re coming through the trough and we’re going back up again, in that a lot of people realize that that technology, although fantastically advanced, was probably quite a way off being realistically deployable en masse. The cost of the technology was high, there were limitations in terms of the size of the store and the amount of products and the amount of consumers. So, trying to take that technology and apply it to retailers today wasn’t applicable.

So, what we are seeing, and what a lot of retailers have done, is kind of take stock of the situation, re-address what’s really important, focus in on the pain points, and then really go, again, with what we call point-solution AI technology: so, specific AI deployed for a specific use case to solve a specific problem, but is very much for that particular use case. And we’re starting to see more and more of these solutions being trialed across retail stores, not only in grocery.

And the possibilities are really—they are bountiful, and they’re kind of endless. And some of the examples that we’re seeing are everything from health and safety in store—using AI on top of CCTV networks to make sure fire exits aren’t blocked or there’s not foreign objects or liquid spill on the floor where someone might slip over. We’re using it for heat mapping to understand what is the flow of consumers around stores—how do I make that flow easier, but also how do I potentially commercialize that flow?

We’re seeing AI on top of existing technology—so, something very close to my heart: self-service. We’re starting to see more and more AI being applied on top of existing technologies to make them more efficient, to make them easier to use, to close loopholes, to boost the consumer experience. So, technologies like facial recognition for age verification.

I think we’ve all been in the situation where we’re trying to buy paracetamol or a bottle of wine, and you must wait for a member of staff to come over and approve your ID. That’s been compounded, the effect of that situation, by the fact that retailers are struggling to find staff. So now I’m having to wait a little bit longer to have a member of staff be available to come and approve my ID. Using AI in that environment drives greater efficiency at the frontend. It reduces that requirement on members of staff, and it boosts that consumer experience.

We’re also seeing technologies centered around the product. Item recognition, really, really taking off. Not just for non-barcoded items—where we’ve seen fruits and vegetables selection—but also all item recognition. And in some environments, particularly smaller stores, why should you have to scan the barcode when you can identify the item by its image? So that’s an exciting technology.

And then finally, something that we’ve been working on over the last 18 months, which is anti-shrink technology using AI. Obviously shrink is something that’s really gone through the roof in a lot of retail environments, driven by the cost-of-living crisis. And we are now working with a lot of retailers; we’ve got 54 different retailers we’re working with on anti-shrink technology in one form or another to try and close those loopholes and make it more difficult for those that are maliciously trying to steal, making it difficult for them to be able to steal. But also those that may have just been unfamiliar with a process or genuinely have made a mistake. Also making sure that we are catching that, without making that a bad experience for that particular consumer.

Christina Cardoza: It’s interesting; in the beginning of your response you mentioned how retailers, they were adding this technology to really transform everything, and they were sort of jumping to the end. And especially when you’re implementing artificial intelligence, which has so many connotations with it, so many misconceptions. It’s interesting, because I feel like these things need to be gradually introduced to consumers for them to be able to accept it, to understand it, to use it.

I can’t tell you how many times I’ve been in self-checkout, where we’re using AI or computer vision, and I can’t even put an item on the scale after I’m done scanning it because it needs to be in a bag, or I can’t bag it yet because of the bag weight. It’s just so complicated.

I know every retailer has different challenges and different areas of entry, but would you say there is an easier place of adoption happening right now to adding some of this intelligent technology? And then not only easier to adoption for consumers and for the store, but—like you were talking about the facial recognition—I know consumers have privacy concerns around that. So how can stores easily implement this that makes the most sense for consumers and for themselves and their business?

Matt Redwood: Sure. So, complex question. I’m going to break it down into parts. So, when we talked about retailers and some technologists rushing to that endgame, it really was about trying to boil the ocean with AI to try and completely change the landscape of retail. And I think sometimes what we forget is although the technology may exist, forget whether it’s commercially viable or practical to deploy it, you have to also have consumer adoption. If you don’t have consumer adoption, no one will use the technology in it. It’s worthless.

So we very much, we track the consumer-adoption curve, and we track the technology-development curve, and it’s important to find something broadly in the middle of those two in terms of what’s the right technology; what’s the right innovation and technology; why am I deploying it? Making sure consumers adopt it, but, crucially, making sure that it solves a need and it solves a business or a consumer desire. The “build it and they will come” mentality does not work with innovation, and it doesn’t work broadly with retail technology.

Consumers are savvy, and retailers are much, much more savvy in terms of deploying the technology. It must deliver. So we always recommend starting with data. A lot of people talk about data; there’s a lot of data that exists; it’s very easy to be swamped by data. We call it paralysis by analysis. There’s too much data out there. But if you can really segment your data to understand—if I’m looking at my transactional process or my customer journey—making sure that I’m looking only at the data that relates to that and highlighting the problems.

I’d say 98%, 99% of our customers that we work with now, we actually work well on a consultative basis to actually really deeply understand their stores, how they’re being run, and how their consumers shop in their stores. And the data provides a lot of insights to that. So really understanding and analyzing: How is the store operating today? Where is the friction associated with the staff journey or the consumer journey? Understanding and quantifying the effect that that friction really then builds the picture to say, “Okay, I’ve got a problem statement I want to try and solve. It’s having this impact on consumers and staff, and this is the impact to my business.” And that’s relatively easy to calculate.

The more problematic piece is then really finding the right innovation to solve that. And very much we try and put the consumer and the staff journey at the center of everything that we do. If it doesn’t provide value for the consumer, if it doesn’t provide value for the members of staff, and it doesn’t provide value for the retailer—that triangle of value is at the center of everything that we do. And if it’s not ticking all three of those boxes, we don’t put it into the range and we don’t put it into the solutions or the stores.

Starting with that data is a bit like the treasure map. It highlights where your biggest areas of inefficiency are and then provides the compass to kind of point you in the right direction of what’s the right technology that you should be deploying to the store to actually try and solve that particular issue. And when you break it down like that, and we start thinking about this kind of AI-boil-the-ocean vision, we start thinking about individual point solution, it becomes much easier because it’s much more manageable to deploy from a technology perspective, it’s much easier to develop a solution that works for a particular use case or problem that you’re trying to solve.

But it’s also then arguably very easy to measure how successful it’s been once you put it into the store. The difficulty then comes is what you don’t want to do is collect a whole group of point solutions that don’t talk to each other, and it becomes very, very difficult to scale. Finding the right AI platform that allows you to scale all of these point solutions on a singular platform is really, really important.

Christina Cardoza: Yeah. I love one thing that you said, which was basically, if it’s not solving a problem or if it’s not benefiting the customer or the business, then don’t do that. I feel like that’s a major problem that we have with implementing technology and seeing shiny new things. Let’s just add it to add it, but why are we adding it? It’s not going to get you a return on investment, and it’s not going to help your business if it’s not really doing anything for you. So, I think that that was a great point.

I want to come back to also that facial recognition example again—how obviously I think we’ve all dealt with self-service checkouts or checkouts where you’re scanning something, it doesn’t recognize it, you need a human cashier to come and help you, and that just bottlenecks the entire process.

But there seems to be a lot more self-checkouts in the store. How does the role of the employees come into this? I know, talking about the consumer misconceptions that they have, there’s a lot of misconceptions that this is going to replace employee jobs, and especially when you see that it—there’s not a lot of cashiers on the floor anymore. So where does the human element come into play with some of these?

Matt Redwood: So, the human element is really, really important to self-service, and it’s an element that’s quite often overlooked. If you look at the evolution of self-service, self-service was originally designed as a POS, an attendant till replacement, to ultimately remove the cost of the staffing from stores. But self-service has been around for 20, 25 years now, and the drivers for deploying self-service are very different today compared to 20, 25 years ago.

I’d say 100% of the retailers that we deal with are either putting in self-service—and they might be on their second or third iteration of self-service because they’ve been in that business for a while—or they’re putting self-service in for the first time. A lot of retailers outside of grocery are just trying self-service for the first time. The approach is very, very different, and it’s very much less about removing staff from the equation, more about staff redistribution. The inability to attract and retain staff in retail is a real big problem for retailers, so they have to use their staff wisely. And where the consumers value the staff interaction the most is where they need it, and where they need it is where they generally need help, either navigating the store, finding an item, asking a question about a particular item, or just getting general assistance.

What self-service is really playing a major role in retail today is it unlocks that member of staff. So I would say to anyone that looks at self-service and says, “Oh that’s going to replace people’s jobs,” it’s not; it’s very much about labor redistribution now. It frees up a cashier that could be sat behind a till for a 12-hour shift to be up on their feet, engaging with consumers shoulder to shoulder in the aisle where it really makes sense to deliver that consumer experience.

Particularly through Covid we saw that retailers that had self-service had much greater flexibility of operations within their store. Post-Covid we’re now seeing that that flexibility allows them to boost the level of consumer experience where it really counts. Obviously, there’s always been friction associated with self-service and the adage of “unexpected item in the bagging area”—all of those common friction points perceived with self-service, they’re starting to really drain away.

A lot of focus has been put on fine-tuning and making sure that the base technology works to a much, much more acceptable level. And we’re now seeing self-service that’s very efficient, that generally most of the time you can sell-through a transaction with no intervention, no requirement for a member of staff to come over. We are now in the fine-tuning era of self-service, and why I say fine-tuning is we’re really looking for that last 5% or 10% of efficiency gains.

So, Diebold Nixdorf, we’ve really focused on three core solutions initially out the bag, and those three core technologies have been developed because we identified via the data where the biggest friction points were. So, age verification: 22% of interventions broadly are age related. That’s a big number. If we can use facial recognition to identify the age of the consumer and remove that validation process that’s happening—A, much better experience for the consumer; B, it means faster transactions. Faster transaction means less staff requirement at the till, but it also means that consumers are moving through the frontend quicker.

So that means less queues. Less queues—queuing is the biggest bugbear of consumers when they get to checkout. So, we’ve removed two of the biggest friction points, associated with checkout with one piece of technology. Item recognition, particularly in grocery for fresh fruit and vegetables, was another area of frustration from a consumer perspective. But also inefficiency from a retailer perspective: spending 20, 30, 40, 50 seconds, trying to find the type of apples that I’m looking to buy is frustrating, but it’s also time consuming. So, using item recognition to identify those apples so the consumer doesn’t have to run that process again. Good consumer experience, great productivity gains.

And then finally shrink. We touched on it a little bit earlier, but obviously retail shrink has really gone through the roof, and I think a lot of retailers are battling to really understand: where is there shrink happening? So of course, the natural progression in that argument would be to say, well, self-service is a natural place for shrink because it’s unmanned in a lot of environments.

But what we’re actually finding is: There’s two different types of people that steal. There are people that maliciously try and steal and those that have just made a mistake and it’s genuinely unmalicious. And how you treat those two individuals has to be dealt with very, very differently, because you don’t want to alienate or embarrass the consumer that’s genuinely made a mistake.

For those that are maliciously trying to steal: unfortunately, if we close all of the loopholes and make it impossible to steal at self-service, they will find somewhere else in the store to go and steal. So, we’re in this kind of Whack-A-Mole-type environment, where we’re trying to close all the loopholes as quickly as possible. We’ve really focused our efforts on AI with behavioral tracking. And the reason why we use behavioral tracking is once you can start to identify behavior, it doesn’t matter where you deploy the technology within the store, you can identify malicious behavior and that shrink environment.

We very much focus on the frontend first: we’re deploying shrink onto self-service checkouts and onto POS lanes. But the idea is that the next natural evolution is that then run that same solution onto the CCTV network, and then we can identify shrink anywhere in the store. The human element of this is really important because it’s relatively easy to identify if someone has stolen. What you then do in that scenario is a difficult situation.

What you don’t want to do is alienate a consumer that might have non-maliciously stolen. If they’re maliciously stealing you also need to deal with that in a particular type of way, but you also don’t want to put your staff, your cashiers in your store: A, in danger; or B, in an environment that they don’t feel comfortable with. So, we are very much putting the human element back into this, that, depending on the use case of the theft, we will then deal with that situation differently.

But what we will always do is put the information in the members of staff’s hands so that they can deal with that situation in the way that they see as appropriate. So with all of our shrink solutions—whether it’s on self-service checkout or POS—once the shrink instance has been identified, an alert is sent to a member of staff’s wearable technology—whether it’s smartwatch or tablet or phone or even their POS lane—they’re notified that there’s a shrink instance that’s happened, they know where it’s happened, and they can even review the video clip.

So now they’re empowered that they know what’s happened in that situation, they know what to look for. And then staff training really comes into play here, and we have a number of great partners that we work with on staff training who actually work through these scenarios to give the staff members the toolkit so that when they approach that member of the public and they’re approaching them knowing exactly what’s happened, they’re trained to be able to deal with that situation in the most agreeable way possible—to disperse any aggression or any risk that might be associated, but also to make sure it’s a good experience for that end consumer. So, the technology is only one-third of the actual solution; the human element is a massive part of it that shouldn’t be overlooked.

Christina Cardoza: And I think the change in roles and responsibilities for cashiers to being able to have more meaningful interactions with customers—that’s not only benefiting the customer experience, but that’s also benefiting the employee experience as well, maybe keeping employee retention. I was a cashier in college, and I can tell you that is a tedious and redundant process. I would have dreams of just scanning food and shouting out numbers. And it’s not only retail shrink and loss—I think it’s not only with malicious actors or by accident—but sometimes as a cashier I would hit the wrong number just because I was on autopilot going repeat. It was an error-prone process. So, I can see that helping it as well.

You mentioned that to really be able to be successful you need an AI solution that connects all of these together so that this is not happening in silos and the data is actually actionable. Obviously, we’re talking to Diebold because you guys are a leader in this space. So, I’m curious to hear how you are helping customers—if you have any real-world examples or case studies that you can share with us.

Matt Redwood: Yeah. And I’ll be completely honest: we fell into the most obvious trap, looking back at our journey on AI. We’ve been working on this now for two and a half years, nearly three years. And we originally, we went out to market to try and find the best solutions to solve these three use cases, but what we quickly found were there was lots of different competing technologies. There were a lot of potential third parties that we could have worked with, but the underlying technology was the same.

And we quickly realized that actually as a solution provider who retailers work with to actually build out their technology—not just across their checkout but all the way across the store—it was unrealistic to think that we could have 20 or 30 different solutions, all in the AI space, all providing different use cases, but none of them talking together.

So, we actually kind of paused our program and redesigned our go-to-market strategy, which was very much focused on providing an AI platform, and we work with a third party in this space who have a very, very mature AI platform. They were entering the retail market and didn’t necessarily have the applications to run on top of it. So, we’ve worked with them to actually develop out these three applications as a starting point in the AI space.

But the nice thing about the AI platform is it effectively becomes the AI backbone for anything the retailer wants to do within their store from an AI perspective. This means that we can really satisfy our openness. It’s an ethos that we drive in our product strategy, which is openness of software. And what we mean by that is we provide the building blocks for retailers. We are the trusted partner, we’re the integration partner, but if there’s a particular third party out there who has got the market-leading solution in a particular area, it doesn’t make sense for us to go and reinvent the wheel.

So, when we talk about openness, the ethos that we take to our retail customers—but also that permeates through our R&D and product-management ethos—is to very much work with the best of breed within the market. And our strategy is to basically provide this AI platform for retailers. We will provide applications that can sit on top of it—like age verification, shrink reduction, item recognition, process or people tracking—but if there is a particular partner out there that is market leading in health and safety, we can plug them on top of the platform.

And what that means is the retailer can build this kind of ecosystem of AI partners, all providing best-of-breed solutions, but, critically, they’re all plugged into a single platform. So, they utilize the same business logic; they utilize common databases, like item database or loyalty schemes and things like that. That makes the solutions very, very scalable. It makes them much easier to manage, but it also means that they’re all talking to each other.

And the beauty of AI is it is self-learning to a certain extent. So, the more applications that we plug into this, the more physical touch points that we have in the store, the more information is flowing through the platform and then the quicker it can develop and the quicker it can learn. So, it’s very much a self-perpetuating solution that we’re very much at the beginning of this journey.

As I say, we’ve got about 54 different customers using AI in one form or another. But we very much see this as a much, much longer journey, where we’re starting to build an ecosystem of solutions that will ultimately move us towards what we call “intelligence store.” And intelligence store for us isn’t necessarily removing the physical touchpoint or removing the technology; it’s about providing intelligence to retailers.

And what I mean by that is every device that sits in the store is effectively a data-capture device. And that’s a two-way street: you can push data down to them, you can pull data back. So, whether it’s a shelf edge camera, or whether it’s a staff device or a self-service checkout or a scanner or a screen—these are all data inputs. There might be AI point solutions associated with them, but the AI platform allows you to connect all of these together and create an intelligent store, where intelligence really permeates every single area of the store.

It does mean there’s a huge amount of data available, but I think the retailers that are really going to advance quickly are the ones that work out what to do with this data. Because it can and it should inform every single decision or direction that you take as a retailer—whether it’s how I price my products, where my products are positioned within the stores, how I afford loyalty systems to the consumers, how I staff my stores, how I operationalize them—but also what technology exists within the stores.

So, data—it’s a cliché—but data will form the basis of every single decision that we make—from either a technology perspective, solution-provider perspective, but also from a retail-operations and a store-design perspective as well. So, it’s a really, really exciting journey that we’re on.

Christina Cardoza: Absolutely. And I think it’s really important to find a solution provider that is willing to work with others in the industry and leverage their expertise. I think that helps prevent vendor lock-in; it allows you to take advantage of the latest technologies and enables you to innovate faster, working with some of the best partners in the market. So, speaking of best in the breed, insight.tech and the “insight.tech Talk,” we’re obviously sponsored by Intel, so I’m curious if there’s anything you can tell us about that partnership and the technology that you use to make some of your AI retail solutions happen.

Matt Redwood: Absolutely. So, we work very, very closely with Intel—not just on the AI topic but from our core platform itself. Intel very much underpins a large part of our portfolio, so we have a very, very close working relationship with them—not just on the solutions that we deploy into stores today but also our roadmap on our development. We work very closely with Intel on their developments: where they’re going with their solutions and how we can better integrate them into our solutions to give retailers better solutions but also much, much greater flexibility for the future.

And I think a good example of that is probably the speed of development of technology. If you think about traditional point of sale or self-service checkout, if you go back five or 10 years, a retailer would make a choice for that particular type of technology and it would sit in that store for five, seven, sometimes 10 years, as long as the technology was running. The speed of technology development has increased immeasurably. The expectations of consumers have also increased immeasurably. And so balancing those two is really, really key.

Where we work very, very closely with Intel is on some of their scalable platforms. So, knowing that retailers have a requirement today—but particularly with these AI topics—the amount of computing power that will be required in three or five or seven years will be very, very different to the requirements today. So, providing retailers the ability to scale this technology so that whatever they deploy today is not throw-away in two years’ time. That they can evolve it and scale that technology to meet their technology requirements at that particular time is an absolute game changer. And that’s something we’re working with Intel very, very closely on.

Christina Cardoza: Yeah, absolutely agree. Things are changing every day: not even five, six, seven years from now, but five weeks from now things can be completely different. So being able to scale and to adapt is especially important in today’s landscape.

Well, it’s been great hearing about all of these solutions, especially how Diebold is helping retailers from end to end with the item recognition, facial recognition, and retail shrink. We are running out of time, but before we go, Matt, I’m curious if there are any final thoughts or final takeaways that you want to leave our listeners with today.

Matt Redwood:  I think there’s a lot of misconception, particularly around AI. What I would say is: start with the data. Identify the business requirements or the problem that you are looking to solve, and then find the right provider that’s going to enable you to deliver against those requirements today but also gives you that longevity of scalability. Because AI is a journey; it’s very much a solution that learns over a period of time. So, choosing your solution provider is extremely important, because it is a marriage, and it is a long marriage, and you have to make sure that you’ve made the right choice. So, use the data to help inform those decisions, and, yeah, it’ll be very, very exciting to see where AI and retail technology goes, over the next two, three, five years.

Christina Cardoza: Yeah, absolutely. And I would say also: choose a partner that you can trust and is transparent about how they are using the data. Like with the age verification for instance, you want to make sure that that data isn’t being saved or that anything going into that—that system is going to protect your privacy and your information.

Matt Redwood: Absolutely. Data privacy is absolutely key and is a very, very careful consideration when you are designing or choosing the solution that you want to deploy to stores.

Christina Cardoza: Excellent. Well, thank you again for joining us. I invite all our listeners to visit the Diebold Nixdorf website: see how else they can help you in the retail space. As well as insight.tech, where we’ll continue to cover partners like Diebold and the latest trends in this space. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Assisted Checkout Boosts Customer Satisfaction

Self-checkout seemed like such a great idea: Let grocery and convenience store customers skip the lines, scan and pay for their own merchandise, and be out the door—freeing employees for other duties. But reality doesn’t always measure up to the vision. Lines for self-checkout often exceed those for staffed lanes. Customers take longer to scan items than experienced cashiers do, and they may become confused or make mistakes, requiring them to wait for assistance. And for retailers, shrinkage is a major pain point.

Some stores have experimented with autonomous (cashierless) “just walk out” payment systems, but these stores may offer only a limited selection of goods and require significant technology investments.

Computer vision-assisted checkout—backed up by store personnel—may provide the happy medium both retailers and their customers seek. Fast and accurate, it eliminates the need for item-by-item scanning and allows stores to preserve the “service with a smile” tradition that keeps people coming back.

Smoother Retail Checkout Solution

Retailers are as frustrated as their customers by service delays, but chronic labor shortages and rising wages often prevent them from hiring additional staff, says Aykut Dengi, CEO and Co-founder of RadiusAI, a computer vision company focused on AI technology solutions for retailers.

To get a better handle on wait times, retailers started measuring how long it took for customers to get to a checkout stand and complete their transactions. The numbers weren’t good.

“They asked us if we could provide technology to solve the problem. So, we created ShopAssist,” Dengi says. ShopAssist replaces checkout-counter scanners with computer vision cameras, which work much faster and require minimal labor from customers or cashiers.

With ShopAssist, customers unload a basket of goods onto the counter. In less than a second, the cameras recognize each item within the group. An itemized bill showing prices, product images, and total cost is displayed on both the customer and cashier screens. The customer is then free to complete their transaction on their own. If they want to use a coupon or purchase an item requiring an ID, ShopAssist immediately informs the cashier for assistance. The interaction between the cashier and customer is face to face with ShopAssist, as in traditional cashier checkouts. This helps create a favorable experience for customers and employees alike.

In addition to speeding transactions, the computer vision system helps prevent shrinkage, a growing problem for retailers, especially at self-checkout. For example, a person may take a barcode sticker from an inexpensive item and place it on a higher-value product. This technique won’t work with ShopAssist, which can read barcodes but pays more attention to a product’s image—just as a human would do. Computer vision also prevents problems from customers who neglect to scan items or scan them improperly.
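RadiusAI hasn’t disclosed ShopAssist’s internals, but the cross-check described above can be pictured as a simple rule: when the product the camera recognizes doesn’t match the product the barcode claims, hold the line item for review instead of charging the lower price. The product database, labels, and prices below are made up for illustration.

```python
# Toy illustration of a barcode-vs-vision cross-check.
# Hypothetical data and rule; not ShopAssist's actual logic.
PRODUCT_DB = {
    "0001": {"name": "chewing gum", "price": 1.29},
    "0002": {"name": "premium olive oil", "price": 18.99},
}

def price_item(scanned_barcode: str, vision_label: str) -> dict:
    """Trust the visual identification; flag any mismatch with the barcode."""
    claimed = PRODUCT_DB.get(scanned_barcode)
    if claimed is None or claimed["name"] != vision_label:
        return {"status": "review", "reason": "barcode/image mismatch",
                "vision_label": vision_label}
    return {"status": "ok", "item": claimed}

# A gum barcode stuck on an olive oil bottle is flagged instead of undercharged.
print(price_item("0001", "premium olive oil"))
```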

Improving Retail Inventory Management

Shrinkage and scanning errors not only cause retailers to lose revenue but also lead to inaccurate inventory tracking. Merchandise without barcodes, such as food service items, is particularly problematic. For example, some stores might offer a variety of grilled food items, such as hot dogs, taquitos, and burritos. These items likely have varied costs, and if they are not accurately charged to the customer and accounted for, the store loses profit and inventory records become incorrect. Drink dispensers also cause issues when customers use soda cups for iced coffee, for example.

As convenience stores look to increase the popularity of their offerings, freshness and availability are critical. “Prepared food is a growing profit source at convenience stores, but if you cook the wrong items, many end up in the trash,” Dengi says. “ShopAssist visually identifies items correctly, lowering food waste and contributing to the bottom line.”

ShopAssist’s flexible product tracking allows merchants to carry a wider variety of goods than technologies that limit the types of items that can be sold. Autonomous checkout systems also restrict the way products can be displayed and are complicated and costly to install. “Placing cameras on every shelf is a formidable expense,” Dengi says.

The ShopAssist software platform relies on the performance of Intel processors for visual tasks and Intel GPUs for faster inferencing—helping to identify a wide variety of merchandise quickly, including items the system hasn’t seen before. Trying new products is important to retailers. “A typical store brings in a hundred new items a week, which often include local or specialized vendors and hometown favorites,” Dengi says. “ShopAssist easily captures new images of products not yet introduced to the point-of-sale system and federates them across the enterprise, saving time and expense.”

When a product is first introduced, the cameras read its barcode in addition to capturing its image and the technology learns to associate the two. RadiusAI uses the Intel® OpenVINO toolkit to continually optimize ShopAssist processes, including product recognition.
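RadiusAI doesn’t describe its models, but running a product-recognition network through the OpenVINO runtime at the edge generally follows the pattern sketched below. The model file, input shape, and label handling are placeholders, not RadiusAI assets.

```python
# Minimal OpenVINO inference sketch for product recognition at the edge.
# Model file, input shape, and labels are placeholders, not RadiusAI's assets.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("product_classifier.xml")  # hypothetical IR model
compiled = core.compile_model(model, "CPU")        # or "GPU" for Intel GPUs
output_layer = compiled.output(0)

def classify(frame_bgr: np.ndarray) -> int:
    """Return the index of the most likely product class for a camera frame."""
    img = cv2.resize(frame_bgr, (224, 224))[:, :, ::-1].astype(np.float32) / 255.0
    blob = np.expand_dims(img.transpose(2, 0, 1), 0)  # HWC -> NCHW
    scores = compiled([blob])[output_layer]
    return int(np.argmax(scores))  # index into the store's product label list
```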

RadiusAI also works with retailers and systems integrators to tailor ShopAssist hardware or software to individual needs. For example, in addition to enabling computer vision, the Intel processors can be used to run other devices in the store.

“Retailers are adopting more edge solutions, and they’re familiar with using Intel hardware,” Dengi says. “For example, they can start with using the ShopAssist system for checkouts and later decide to manage their ovens on the same computer.”

By adding the RadiusAI solution ShopAssist Pulse, retailers can expand the power of assisted checkout, inventory management, and food operations using their existing store cameras.

“If someone picks up two slices of pizza and puts them in the same box, or eats one while shopping, the system can recognize the customer and correctly charge for two slices. It can also notify staff, allowing a non-confrontational loss prevention strategy. Staff may also want to put another pizza in the oven for the lunch rush,” Dengi says.

“When it’s implemented the right way, #ComputerVision allows employees to help when necessary, without creating significant overhead” — Aykut Dengi, @RadiusAI via @insightdottech

Preserving the Social Experience

While customers appreciate speedy transactions, they also value customer service and human interaction—elements that are often missing at autonomous self-checkout. “People don’t go to the store just to buy things, they chat with the employees. It’s a social experience,” Dengi says. Many self-checkout systems are located away from store staff, which leads to greater shrinkage, slower transactions, and customer satisfaction issues.

In the future, retail technology itself may become more personalized. For example, RadiusAI is working with CPG companies to create on-the-spot generative AI promotions based on a customer’s purchases. “The best technology is invisible to customers and employees,” Dengi says. “When it’s implemented the right way, computer vision allows employees to help when necessary, without creating significant overhead.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Restaurants Helps Lower Costs and Grow Sales

From dine-in restaurants to quick-serve takeout and workplace cafeterias, customers expect not just great-tasting food but fast and convenient experiences, too. At the same time, the food service industry is looking for more efficient operations, accurate inventory management, and a boost in profitability.

To stay ahead of the game, large food retailers rely on transformational technologies like edge AI and computer vision. Platforms powered by these technologies enable retailers to transition from time-consuming conventional checkout to the efficient self-service their customers want. Plus, they provide valuable data that helps retailers understand customer preferences and minimize wasted resources, stock shortages, and suboptimal performance.

Shanghai Kaijing Information Technology Co. Ltd., a leading IT solutions provider, developed its AI-Powered Automated Checkout Services to overcome food service challenges, meet new customer demands, and create new opportunities.

To stay ahead of the game, large food #retailers rely on transformational technologies like #EdgeAI and #ComputerVision. Shanghai Kaijing Information Technology Co. Ltd. via @insightdottech

Food Service Retail Transformation at Work

Take, for example, LaoXiang Chicken Catering Co., a quick-service restaurant (QSR) chain with a network of more than 2,000 locations across China. The chain’s goal was to overcome a series of challenges, such as tracking performance and maximizing resources across all store locations, by:

Managing operations at scale: A holistic understanding of operations across all stores without diminishing service quality was the top priority. With a plan for significant expansion to 10,000 stores, the company needed to make sure its entire operation continued to run smoothly even with this monumental growth.

Increasing profitability: QSRs typically run on low profit margins due to high food and rent costs. But their largest expenditure is on staff, which can account for 30% of monthly costs. And with a growing number of stores, food waste was eating into profit margins.

Improving the customer experience: One of the challenges the restaurant faced was maintaining consistent performance across the chain, and rapid expansion compounds the problem. Regardless of where customers dine, LaoXiang Chicken needed to guarantee a uniform and positive experience at every store. To maintain a positive brand reputation, the company needed to overcome issues like slow service, long lines, inconsistent food quality, and unhygienic areas.

Enhancing operational transparency: The company wanted more visibility into store results to identify the best and lowest performers. This information would allow management to pinpoint which factors contributed to the results and implement corrective actions. The roadblock was that the company relied on outdated, inefficient manual methods, making the task nearly impossible to perform at scale.

It became clear that LaoXiang Chicken needed the Shanghai Kaijing Canteen solution, an end-to-end digital retail platform built on AI-Powered Automated Checkout Services. The platform offers AI and computer vision-enabled function modules for applications such as product recognition and weight measurement, pricing, facial recognition, payments, data analysis, comprehensive system management, and more.

Because food service locations have different physical layouts and products, the POS stations come in three form factors—desktop all-in-one, counter-style checkout, and vertical checkout—giving food retailers like LaoXiang Chicken the flexibility needed to accommodate every store (Figure 1).

Picture of three Canteen checkout stations
Figure 1. Automated checkout systems come in three models: desktop all-in-one, counter-style checkout, and vertical checkout POS. (Source: Shanghai Kaijing)

AI in Restaurants Delivers Measurable Results

Working with Shanghai Kaijing, LaoXiang Chicken achieved significant results, which included the ability to:

  • Reduce the need for manual checkout tasks and cut customer checkout time via real-time product SKU identification and transaction bill generation.
  • Predict peak dining times by analyzing customer flow patterns, enabling managers to understand traffic trends, proactively prepare for customer surges or downtimes, and make informed operational and resource adjustments (a simple illustration follows this list).
  • Minimize food waste with effective inventory management, ensuring stores optimize stock levels based on accurate consumption pattern forecasts.
  • Improve standard operating procedures related to food quality, kitchen hygiene, and adherence to safety regulations by leveraging AI- and computer vision-driven analytics.
  • Empower management with a high-level overview of performance metrics across store locations, identifying weak areas of operations so necessary adjustments can be prioritized and promptly addressed.
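
As a rough illustration of the peak-time prediction mentioned above, the idea can be reduced to counting transactions per hour of day. The timestamps and logic below are made up for illustration; the real platform presumably works from richer customer-flow data.

```python
# Minimal sketch: estimate peak dining hours from transaction timestamps.
# Sample data and logic are illustrative only, not the Canteen platform's method.
from collections import Counter
from datetime import datetime

def peak_hours(timestamps: list[datetime], top_n: int = 3) -> list[int]:
    """Return the hours of day (0-23) with the most transactions."""
    counts = Counter(ts.hour for ts in timestamps)
    return [hour for hour, _ in counts.most_common(top_n)]

# One week of fabricated transactions clustered around lunch and dinner.
sample = [datetime(2024, 5, day, hour, 15)
          for day in range(1, 8)
          for hour in (11, 12, 12, 13, 18, 18)]
print(peak_hours(sample))   # [12, 18, 11]
```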

“Overall, our platform enabled LaoXiang Chicken to implement strategic decision-making, which contributed to revenue growth and improved customer experience,” says Zhengting He, CTO of Shanghai Kaijing. Each store achieved a reduction of roughly $450 per month in labor costs and up to an 80% improvement in checkout efficiency, eliminating customer loss due to long lines. In addition, the company saw a 10% increase in daily sales, leading to an annual profit increase of roughly $38,700 per store.

Tech and Tools Power AI in Restaurants

To enhance Canteen’s platform performance, the company turned to Intel technologies. Intel® Core processors and edge AI technology provide the performance needed for near real-time SKU identification with a 99% accuracy rate. “Our testing shows that this level of performance and accuracy facilitates an average checkout time of three seconds,” says Zhengting He. And with its advanced computer vision capabilities, the Intel® OpenVINO toolkit optimizes inferencing performance.

The Intel® oneAPI Video Processing Library also plays an important role in Canteen’s video analytics capabilities. For example, the advanced hardware and software capabilities on Intel® GPUs allow AI-driven quality and compliance checks to run off-hours.

Shanghai Kaijing goes beyond delivering advanced products like Canteen by providing other services, including customized consulting, product lifecycle support, CRM, and data analytics to help optimize supply chains and improve operational efficiency.

The company provides services that cater to diverse client needs, supporting their sustainable growth strategies and adherence to industry standards. “Our leading customers are expanding rapidly, and we believe such trends will continue,” says John Yang, Shanghai Kaijing CMO. “We are excited to help companies like LaoXiang Chicken continue working toward their goal of providing quality and affordable food to people all over the world.”

 

Edited by Christina Cardoza, Associate Editorial Director for insight.tech.