Enabling Industrial Automation at the Edge

Industrial applications are growing more sophisticated, thanks to recent advancements in artificial intelligence. “Since the rise of ChatGPT, we’ve seen more and more use cases reliant on generative AI—from autonomous robots and augmented reality to smart cameras,” explains Steven Shi, Senior Product Sales Manager of Boards at AAEON, a leading provider of intelligent IoT solutions.

But increased reliance on AI comes with its own set of challenges. For starters, AI-powered solutions deployed for real-time operations must perform across multiple systems with tight latencies that the cloud is not equipped to handle. As a result, more and more data processing is shifting to the edge.

But existing edge hardware often can't provide the performance and throughput necessary for industrial AI automation, requiring organizations to adopt new edge hardware.

Anatomy of Industrial Edge-First Hardware Design

To meet the demands of today’s industrial applications, industrial edge systems must deliver exceptional performance in a small form factor, operate in harsh environments with low power consumption, and support Time-Sensitive Networking (TSN).

TSN is an industry standard for deterministic communication over Ethernet networks, allowing precise real-time coordination among far-flung systems. This is especially important in industrial automation environments, where accurate timing is crucial.
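At its core, TSN (notably the IEEE 802.1Qbv time-aware shaper) divides time into repeating cycles in which transmission "gates" for each traffic class open and close on a fixed schedule. The toy Python sketch below illustrates the idea of such a gate schedule; the durations and class numbers are invented for illustration, and real TSN gating is enforced in switch and NIC hardware, not in application code.

```python
# Toy model of an IEEE 802.1Qbv-style gate schedule (illustrative only;
# real TSN gating happens in switch/NIC hardware, not application code).
# Each entry is (duration_us, set_of_open_traffic_classes).
SCHEDULE = [
    (300, {7}),        # 300 µs reserved for a high-priority control class
    (700, {0, 1, 2}),  # 700 µs for best-effort traffic classes
]
CYCLE_US = sum(d for d, _ in SCHEDULE)  # repeating 1 ms cycle

def open_classes(t_us: float) -> set:
    """Return the traffic classes whose gates are open at time t_us."""
    t = t_us % CYCLE_US
    for duration, classes in SCHEDULE:
        if t < duration:
            return classes
        t -= duration
    return set()
```

Because the schedule repeats exactly every cycle, a high-priority frame always finds its gate open at the same offset, which is what makes latency deterministic rather than best-effort.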


Thankfully, companies like AAEON that specialize in hardware for industrial automation have many years of experience delivering high-performance capabilities alongside efficiency and thermal robustness. AAEON’s COM-RAPC6 and NanoCOM-RAP computer-on-modules, for example, exemplify this approach.

“Because we’re talking about the edge, sometimes space is also a challenge,” Shi explains. “That’s why AAEON put so much focus on compact designs like the COM Express Mini.”

Designed according to the COM Express standard, both systems feature a compact form factor with considerable power efficiency. The NanoCOM-RAP also provides wide-range voltage input, allowing it to manage power fluctuations more effectively.

Each module also features a 13th Gen Intel® Core processor. Built to deliver energy-efficient, optimal performance for edge use cases, these processors feature a flexible hybrid architecture with support for hardware-enabled AI acceleration, multitasking, and concurrent workloads.

AAEON has also designed both of its modules for rugged environments and tested them through a unique process known as Wide Temperature Assurance Service (WiTAS).

“As with some AAEON boards and modules, COM-RAPC6 and NanoCOM-RAP are WiTAS qualified,” says Shi. “We have put them through a very strict quality control process to guarantee they can operate in a temperature range from -40°C to 85°C.”

The COM-RAPC6 and NanoCOM-RAP offer 2.5 Gigabit Ethernet, enabling support for TSN. Both modules include a discrete TPM for additional security. Additionally, they are equipped with high-speed PCIe interfaces that support expansion through a carrier board, allowing AI performance to scale with add-ons such as AI accelerator cards.

AAEON also offers Q-Service, a technical service program in which it leverages its engineering expertise to help clients bring products to market much faster. This includes assisting with both design and debugging, as well as providing software support and BIOS customization. Last, AAEON Hi-Safe provides a user-friendly interface for UI development and device monitoring.

Building a Smarter Industrial Edge

AAEON is already creating a new product line that will take advantage of Intel® Core Ultra processors and provide many more benefits to embedded and industrial manufacturers. These processors deliver even better power efficiency than their predecessors and include both an advanced GPU and an embedded Neural Processing Unit (NPU) for AI acceleration, as well as support for high-speed Wi-Fi 6E.

“Systems built around these processors will be able to better handle the environmental challenges and resource requirements at the edge. I strongly believe that this will open new opportunities and possibilities for what edge hardware can achieve,” says Shi.

According to David Huang, Product Manager at AAEON, the most significant feature of the new processors is the embedded NPU. “Moving forward, I think AI-enabled hardware will eventually be as ubiquitous as cell phones or calculators. The embedded Wi-Fi 6E capability will also be very beneficial for our designs over the next three to five years,” he explains.

Looking Toward the Future of Industrial Automation

In the future, AAEON expects that demand for edge AI will only continue to grow. Hardware that features on-board AI acceleration will become increasingly important amid mounting data processing requirements. AAEON, for its part, is more than ready.

“We foresaw that IoT was coming, and that there would be an age after that defined by artificial intelligence,” explains Huang. “In light of that, starting from 2016, our focus has been on creating high-performance embedded products with a small form factor. By doing this, we’ve enabled our customers to perform the necessary edge processing to support applications such as computer vision and autonomous mobile robots—and we plan to continue down this road.”

For industrial organizations looking to solve the challenges of AI-driven edge automation, COM Express modules such as the COM-RAPC6 and NanoCOM-RAP provide the necessary performance, power efficiency, and network throughput. Deploying such hardware with assistance from vendors like AAEON can help businesses ensure they’re ready to make the most of what AI has to offer, both now and in the future.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Checkout with AI for Faster Service and Fewer Losses

Has your retail store considered switching to self-service kiosks? Are you worried about potential challenges such as “unexpected item in bagging area,” accurately identifying produce, or ensuring a smooth checkout process without constant staff intervention?

These are common obstacles on the path to digital transformation that both retailers and consumers face. But intelligent retail technology has the potential to significantly enhance the customer experience and streamline operations. Improving the employee experience and retaining skilled staff are additional benefits that can greatly impact your bottom line.

Matt Redwood, Vice President of Retail Technology at Diebold Nixdorf, a retail technology company, guides us through the landscape of retail technology. He discusses AI solutions for common retail inefficiencies, the importance of purposeful innovation, and the value of leveraging technology partners throughout the transformation journey (Video 1).

Video 1. Matt Redwood, VP of Retail Technology at Diebold Nixdorf, discusses how AI is transforming in-store retail operations and experiences. (Source: insight.tech)

What are some top challenges retailers face today?

Most retailers are struggling with the same challenges. And making sure that the in-store experience is as good as possible for their customers is a key one. Post-Covid, retailers are really investing very heavily in that again. But they’re also being squeezed both on the top line and on the bottom line—the cost of goods is up, the cost of freight, the cost of managing and running stores—and they have to find ways of driving efficiencies in the store while also delivering that great consumer experience. It’s a real balance between getting the economics of retail right and satisfying the needs of the consumer.

And competition is as high as it’s probably ever been in retail, which is good in certain aspects. It helps with pricing and keeping inflation under control, but on the flip side, if consumers have a bad experience in a store, it’s easy for them to flip to another brand.

How is AI being used to address some of those challenges?

Generative AI really took off in retail in 2023, and certainly some companies really rushed to an AI endgame, with this euphoric view that AI could replace all the existing technology within stores. I think sometimes we forget that although the technology may exist—forget whether it’s commercially viable or practical to deploy it—you have to have consumer adoption. If you don’t have consumer adoption, the technology is worthless. That’s what I call the hype curve.

What a lot of retailers are now doing instead is focusing in on their pain points with what we call point-solution AI technology, that is, specific AI deployed for a specific use case to solve a specific problem—technologies like facial recognition for age verification. For example, if you’re trying to buy a bottle of wine, you have to wait for a member of staff to approve your ID. And that wait is compounded by the fact that retailers are struggling to find staff. Using AI in that environment drives greater efficiency, it reduces that requirement on members of staff, and it boosts that consumer experience.

Another big one is anti-shrink technology and using AI to make it more difficult for those who are trying to steal. But it can also help when someone may have just been unfamiliar with a process or have genuinely made a mistake—making sure that that is being caught without making it a bad experience for that particular customer.

We’re also starting to see AI applied on top of existing technologies to make them more efficient, to make them easier to use, to close loopholes, and to boost the consumer experience. One example is in-store safety—using AI on top of CCTV networks to make sure fire exits aren’t blocked, say. Or heat mapping to understand the flow of consumers around stores—making that flow easier but also potentially commercializing that flow.

What is the best way for stores to implement AI?

The “build it and they will come” mentality does not work with retail technology. We track the consumer-adoption curve and we track the technology-development curve, and it’s important to find something broadly in the middle.

We always recommend starting with data. It’s very easy to be swamped by it—we call it paralysis by analysis. But if you can segment your data, it can provide a lot of insights. You can really analyze and understand: How is the store operating? Where is the friction within the staff journey? Within the consumer journey? You can then quantify the effect that that friction has. It builds the picture to say, “Okay, I’ve got a problem statement that I want to solve. It’s having this impact on consumers and staff, and this is the impact to my business.” And that’s relatively easy to calculate.


The more problematic piece is then finding the right innovations to deploy in the store to solve for that issue. But starting with that data highlights where the biggest areas of inefficiency are and then provides the compass to point you in the direction of the right technology. It’s also then very easy to actually measure how successful that technology has been once it’s been put into the store.
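To make the "start with the data" idea concrete, the hypothetical Python sketch below aggregates staff-intervention wait times from checkout event logs to quantify friction per lane. The field names and figures are invented for illustration and are not from any Diebold Nixdorf system.

```python
# Hypothetical sketch: quantifying checkout friction from event logs.
# Event fields ("lane", "type", "wait_s") are invented for illustration.
from collections import defaultdict

events = [
    {"lane": "SCO-1", "type": "intervention", "wait_s": 45},
    {"lane": "SCO-1", "type": "intervention", "wait_s": 30},
    {"lane": "SCO-2", "type": "intervention", "wait_s": 90},
]

def friction_by_lane(events):
    """Total staff-intervention wait time per lane, in seconds."""
    totals = defaultdict(int)
    for e in events:
        if e["type"] == "intervention":
            totals[e["lane"]] += e["wait_s"]
    return dict(totals)
```

Even a simple aggregation like this turns raw logs into a ranked problem statement: the lane with the most intervention time is the first candidate for a point-solution deployment, and re-running the same measurement afterward shows whether the technology worked.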

Tell us more about matching the right technology to a specific problem.

At Diebold Nixdorf, we’ve really focused on three core solutions where the biggest friction points are. One is age verification, which I mentioned before. Facial recognition provides a much better experience for the consumer. It’s faster, and faster transactions mean that consumers are moving through the front-end quicker. That means fewer queues, and queuing is consumers’ biggest checkout bugbear. So we remove two of the biggest friction points associated with checkout with one piece of technology.

There are also technologies centered around the product, such as efficient item recognition at checkout—particularly in grocery for fresh fruit and vegetables. That is the second solution. And it’s not just for non-barcoded items like produce. In some environments, particularly in smaller stores, why should you have to scan the barcode when you could identify the item by its image?

And then, finally, shrink. Of course, the natural argument is that self-service is a natural place for shrink because it’s unmanned in a lot of environments. But with those who are maliciously trying to steal, even if we close all of the loopholes at self-service, they will find somewhere else in the store to steal from. We’ve really focused our AI efforts there on behavioral tracking. Once you can start to identify behavior, it doesn’t matter where within the store you deploy the technology. Of course, we focus on the front-end first: self-service checkouts and POS lanes. But then we run that same solution onto the CCTV network, and then we can identify shrink anywhere in the store.

Where does the human element come into play?

The human element is really, really important to self-service, and it’s quite often overlooked. Self-service is more about staff redistribution. Attracting and retaining staff is a big problem for retailers, so they have to use their staff wisely. And where self-service is playing a major role is in unlocking members of staff to interact with consumers where those consumers need the most help—finding an item, asking a question about it, navigating the store—places where it really makes sense to deliver that consumer experience. During Covid, retailers that had self-service had much greater flexibility of operations within their stores; post-Covid, self-service actually allows them to boost the level of consumer experience where it really counts.

Let’s go back to the challenge of preventing shrinkage. So, it’s relatively easy to identify if someone has stolen. What you then do in that scenario is more difficult. If someone is stealing maliciously, you don’t want to put your staff in danger or in an environment that they don’t feel comfortable with. You also don’t want to alienate or embarrass someone who has genuinely made a mistake. So we are very much putting the human element into the situation here; the situation will be dealt with differently depending on the use case of the theft.

If there’s an instance of shrink, an alert is sent to a member of staff. All the information is put in that person’s hands so that they can deal with the situation in the way that they see as appropriate. And staff training comes into play here. We have a number of great partners that work on staff training to give employees the toolkit they need. Then, when they approach that member of the public—and they’re approaching them knowing exactly what’s happened—they’re trained to deal with that situation in the most agreeable way possible. So the technology is only one-third of the actual solution; the human element is a massive part of it that shouldn’t be overlooked.

How is Diebold Nixdorf solving customers’ retail challenges?

As a solution provider that retailers work with to build out their technology—not just across checkout but all the way across the store—we quickly realized that it was unrealistic to think that we could have 20 or 30 different solutions—all in the AI space, all providing different use cases, but none of them talking together. So we work with a third party that has a very mature AI platform, and that becomes the backbone for anything the retailer wants to do within their store from an AI perspective.

We are the trusted partner, the integration partner. We will provide applications that can sit on top of that platform—like age verification, shrink reduction, item recognition, process or people tracking. But if there is a particular partner out there that is market leading in, say, health and safety, we can plug them on top of the platform, too. It doesn’t make sense for us to reinvent the wheel.

And what that means is that the retailer can build this ecosystem of AI partners, all plugged into a single platform, and the solution is very, very scalable. It will ultimately move us towards what we call intelligent store. It isn’t necessarily removing the physical touchpoints or removing the existing technology; it’s about providing intelligence to retailers.

Every device in the store is effectively a data-capture device—a shelf edge camera or a self-service checkout or a scanner—these are all data inputs. And that’s a two-way street: You can push data down, you can pull data back. The AI platform allows you to connect all of these together to create that intelligent store.

It does mean that there’s a huge amount of data available, but I think the retailers that are really going to advance quickly are the ones that work out what to do with it. Because it can and it should inform every single decision or direction that a retailer takes—how products are priced, where they are positioned within the stores, how stores are staffed.

What is the value of technology partnerships in making AI retail solutions happen?

We work very, very closely with Intel—not just on the AI topic but for our core platform itself. And not just on the solutions that we deploy into stores today but also on our development roadmap. And we follow the developments at Intel closely, too—where Intel is going with its solutions and how we can better integrate those into our solutions.

We work particularly closely with Intel on some of the scalable platforms. Retailers have technology requirements today, but—particularly with these AI topics—the amount of computing power that will be required in three or five or seven years will be very, very different from the requirements now. So providing retailers the ability to scale their technology to meet their future requirements is an absolute game changer.

Any final thoughts for those looking to incorporate AI in retail?

I would say, start with the data. Identify the business requirements or problems that you are looking to solve and then find the right provider that’s going to enable you to deliver against those requirements today, but that is also going to give you that longevity of scalability. It is a marriage, and you have to make sure that you’ve made the right choice.

Related Content

To learn more about AI in retail, listen to AI in Retail: Stop Shrinkage and Streamline Checkout and read New Retail POS Solutions Transform the Checkout Journey. For the latest innovations from Diebold Nixdorf, follow them on Twitter/X at @DieboldNixdorf and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Securing the Edge with Hyperconverged Infrastructure and AI

The expansion of distributed infrastructure is fundamentally transforming the cybersecurity landscape. As data generation and processing increasingly shift toward the edge, traditional centralized security measures are proving inadequate against the escalating complexity and scale of emerging threats.

To address these evolving demands, hyperconverged infrastructure is extending beyond traditional data center confines. This extension necessitates the adoption of hardware that delivers data-center-class performance while enduring the environmental challenges of edge locations.

“The dynamic and varied nature of edge environments requires a new approach to security, one that is more adaptive and intelligent,” explains Stéphane Duburre, Product Line Manager at Kontron, a leader in embedded computing technology. He points to the latest Intel® Xeon® processors and Intel® Arc GPUs as examples. “These advanced processors enable real-time edge AI security analytics, which are crucial for data protection and operational continuity in harsh edge environments.” 

Further complicating the network edge landscape, communication within industrial environments is transitioning to Time-Sensitive Networking (TSN), which supports deterministic messaging on standard Ethernet networks. This advancement facilitates seamless integration of OT and IT networks. But it also expands the attack surface for security threats, requiring a more sophisticated and robust security approach.

Adapting to New Edge AI Security Needs

To address these evolving demands, Kontron developed the ME1310, a high-performance multi-edge platform. Where harsh environments would cause other equipment to fail, the ME1310 excels, thanks to a 22-core Intel Xeon processor rated for temperatures of -40°C to 65°C. “It sustains performance even under fluctuating or extreme conditions,” Duburre notes.

When more performance is needed, the ME1310 can accommodate two PCIe Gen 4 accelerators, including Intel Arc GPUs for AI acceleration. This adaptability allows for significant enhancements in processing power and speed—critical for applications requiring intensive computation and real-time data processing.


In applications that need high-bandwidth packet processing, the platform’s integrated hardware delivers up to 200 Gigabits of Layer 2 and Layer 3 Ethernet switching. With support for Precision Time Protocol (PTP) for TSN networks, the ME1310 facilitates data transfers across deterministic networks, maintaining security across increasingly integrated OT and IT environments.

By addressing these challenges, the ME1310 provides a compact, versatile solution that brings data center-level capabilities to the network edge, enabling organizations to navigate the complexities of modern network environments with enhanced operational security and efficiency. 

The Role of AI at the Network Edge

Hyperconverged platforms like the ME1310 lay the foundation for the transformative role of edge AI security. With its ability to learn from and adapt to network activities in real time, AI enables a new dynamic of immediate, autonomous responses to emerging threats. By continually analyzing data, AI significantly improves both the understanding and mitigation of evolving threat behaviors, thereby strengthening overall security measures, according to Duburre.

For AI to be most effective, it must be deployed directly at the network edge. This reduces latency significantly and decreases reliance on centralized data centers, which is vital for timely decision-making in environments where security is critical.

But deploying AI at the network edge introduces unique cybersecurity challenges that differ from traditional data center environments. These include heightened concerns over data privacy, increased vulnerabilities in security devices and network infrastructure, and the complexity of managing security protocols across dispersed and varied edge locations.

But “the integration of Intel Arc GPUs with Intel Xeon D processors enables robust edge AI security capabilities,” explains Duburre. This allows for advanced data analytics and real-time encryption and decryption at the edge.

In manufacturing environments, for example, the ME1310 can use AI to detect and respond to operational anomalies. Duburre elaborates, “Such capabilities allow for the immediate analysis of unexpected stoppages or irregular machine behavior to determine their cause—be it a potential cyberattack or a mechanical failure.”
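As a simplified illustration of this kind of anomaly detection, the Python sketch below flags sensor readings that deviate sharply from recent history using a rolling z-score. The window size and threshold are illustrative assumptions; a production system like the one described would use learned models rather than this toy statistic.

```python
# Minimal sketch of edge anomaly detection on a machine-sensor stream,
# using a rolling z-score. Window size and threshold are illustrative;
# real edge AI security analytics would use trained models.
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that deviate more than `threshold`
    standard deviations from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies
```

Running such a check locally on the edge device, rather than shipping every reading to a data center, is what enables the immediate analysis of stoppages or irregular machine behavior that Duburre describes.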

The Future of Edge AI Security

Looking ahead, the role of hyperconverged platforms like the ME1310 in edge computing is poised to expand significantly. As more organizations leverage IoT and other advanced technologies, demand for localized, powerful computing solutions will continue to rise. Hyperconverged platforms are uniquely positioned to meet these demands, offering compact, versatile solutions that bring data center-level capabilities to the network edge.

For industry professionals navigating the complexities of modern network environments, platforms like the ME1310 can significantly enhance operational security and efficiency. By adopting these sophisticated solutions, businesses can ensure they remain at the cutting edge of technology, prepared to face the challenges of tomorrow’s digital landscapes with confidence and resilience.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

AI Technology Brings Gold-Medal Event Experiences

The focus of the 2024 Olympic and Paralympic Games in Paris is, of course, on the athletes and their excellence and dedication. It’s a rare opportunity for those of us who maybe sprint only to catch a bus or an elevator to track the athletes as they push the boundaries of the human body and spirit. A small number of fortunate spectators will be in France this summer to witness the spectacle in person; the rest of us will follow along at home on our electronic devices.

So, whose focus is on making that spectator event experience a smooth and fulfilling one—both abroad and at home? One of them is Sarah Vickers, Head of the Olympic and Paralympic Program at Intel, who will walk us through the process of leveling up event experiences through technology—the Olympics and Paralympics in particular. She’ll cover Intel’s involvement behind the scenes before, during, and after the Games; the crucial role of data in making decisions on the ground there; and how the experiences of Paris 2024 can inform not only the 2028 Games in Los Angeles but other kinds of entertainment events as well (Video 1).

Video 1. Sarah Vickers, Head of the Olympic and Paralympic Program at Intel, explores the latest technology powering the Games. (Source: insight.tech)

Can you give us an overview of Intel’s involvement in the Games?

It is the largest sporting event—and the most complex sporting event—on Earth, and it has billions of viewers around the world. So it’s a really exciting opportunity for us to demonstrate the leadership of Intel technology in a scalable way.

We think about integrating Intel technology to help with the success of the Games in a variety of ways. There are the really complicated operations involved in delivering the Games—moving athletes and fans and volunteers around, getting people from point A to point B. That’s complex in itself, but doing that over 17 days across so many sports is even another level of complexity.

There’s also the fan experience beyond the operational ease of getting around, which is more about the time they spend outside of the Games. The sports themselves provide great entertainment, but then there’s all that in-between time. What can we do to help that experience be even better for the fans?

Then there’s the broadcast experience for the billions of people watching at home, which has become more involved when you think about all the different ways people consume media now. So, we work with Olympic Broadcasting Services to deliver outstanding experiences based on Intel technology and applications.

How is that Intel technology being used behind the scenes for Paris 2024?

We start working with the International Olympic Committee—the IOC—and with the International Paralympic Committee and with the specific organizing committee—in this case, Paris 2024—years in advance. We need to really understand what we are trying to solve. Also, how can we take what we’ve done in the past and make it better? So, we’ve taken solutions that emerged from Tokyo in 2020 and improved them. And then, what are the new challenges that have evolved since the last Games?

The Games are also really excellent grounds to demonstrate the whole Intel idea of “AI Everywhere,” where Intel AI platforms have the opportunity to change a lot of things. One good example is digital twinning, as in having a digital twin of all the event venues to understand in a 3D way what they are going to be like during the Games.

If you think about broadcasters, they really need to understand where camera placement is going to be and how that’s impacted by different things. If you think about the transition from the Olympic Games to the Paralympic Games, there are a lot of changes that need to happen for accessibility for the athletes and things like that. Digital twins make it possible to do those things in advance, rather than doing it as it happens and realizing that certain solutions don’t actually work. There’s also some reduction in travel, because you can work with a digital twin on your PC from anywhere.


Another use case that we’re helping out with from an operational perspective is just understanding the data. There are a lot of people behind the scenes at the Games—the media that’s on the ground, all the workforce—so we’re helping the IOC and Paris 2024 understand the people-movement factor to optimize facilities for them. That could be about having the right occupancy levels in a venue, about making sure people have the right entries and exits, and really using that data to make real-time decisions. That will also help inform the next Games, because those teams will have a base set of data to help them model and plan for the complicated situations involved there.

One final example is on the athletes’ side. This is the athletes’ moment; for some of them it’s the highest moment in their careers. So, what you want to do is make it as uncomplicated for them as possible, so they can focus on their performance and not think about the things that enable them to get to that performance—food, travel, and accommodations.

So, for these Games we’re implementing a chatbot based on Intel AI technology. It’s going to enable athletes to ask questions and get conversational answers about day-to-day things—like food, travel, and accommodations. And that chatbot will continue to get smarter as we get more answers and understand what’s working. I think it’s really going to be a game changer for athletes in Paris.

Walk us through the process of your involvement with the Olympics and Paralympics.

The first thing we say is: “What needs to be delivered? What are we trying to solve? What are we worried about?” There’s a set of expectations for every Games, but then there’s also that set of expectations for what we want to do that’s different from the last time.

And then we do an assessment and ask, “How can Intel technology help?” We work very closely with a number of partners to try to figure out that question. And then we develop a roadmap of solutions. Some of those solutions are delivered in advance: digital twinning, for example. The benefit of digital twinning is not during the Games; the benefit is really months before the Games. Then there are other solutions that obviously are for during the Games. Hopefully, during the Games themselves, everything goes smoothly, and we can just enjoy it and watch our technology shine. But, of course, we have staff on-site to make sure that everything goes off without a hitch.

What about after the Games, what happens next?

There’s so much data involved with the Games, right? There’s all this content—broadcast data, all the highlights. Then there’s all the data that we’re helping the IOC collect in order to understand people movement and things like that. And that data is definitely being used to create models to help plan the next set of Games, as I mentioned before, as well as other kinds of entertainment.

One of the really interesting post-Games use cases we’re working on is with Olympic Broadcasting Services around using AI platforms for highlights of the Games. We’ll be able to create highlights that just weren’t possible before, because they were all generated by people and a limited number of people.

But if you think about how we consume broadcast content these days, we are much more demanding in our expectations; we want things that are a little more personalized now. And there are 206 different countries participating in the Games—multiple languages, multiple sports. Some of the bigger countries have traditionally dominated the highlights space, and certain sports that are really important in some countries aren’t important at all in others.

So, what the AI highlights will be able to do is generate highlights that are really customized to the people in the places that are viewing them. The models will also learn over time and get smarter, and then the fans are going to get even better and more personalized highlights.

Can the Intel technology that’s used for the Olympics be applied to other sectors?

For almost every application Intel has developed for these Games, there’s a use at other events and also beyond sport. The way we think about it is: “How does this demonstrate what we can do?” And then: “How does it scale?”

One example is a really fun application of AI platforms called AI Talent Identification. It uses AI to do biomechanical analysis to help fans understand which Olympic sport they’re most aligned with. The fan does a bunch of fun exercises, Intel mashes up that data, and then they get the result. But if you think about what that biomechanical analysis can do, this application can be used in a variety of ways to improve people’s lifestyles—physiotherapy, occupational health. And think about digital twinning: you’re seeing a lot of that in manufacturing, in cities. It depends on the goals, but these types of technologies can definitely benefit many outcomes.

What is the value of Intel’s partnerships and ecosystem during the event?

The Games are going to be a massive event, and in this post-pandemic era I think we’re all excited to see them come back in all their glory. It’s very exciting, but it’s obviously very complex. Paris is a giant, complicated city even without an Olympic Games or Paralympic Games, and so bringing it all together is going to be hard.

Also, AI—and technology in general—has gotten smarter and become more mainstream, and that has affected what the expectations are around it. But we can use all the data that’s generated by it to build complex and interesting models—the compute is possible now—and there are going to be a lot of different AI applications that Intel will facilitate throughout the Games.

But Intel doesn’t do things alone; strong partnerships are crucial. We really try to understand what the best solution is, and then we work with the appropriate ecosystem to help deliver it—other top Olympic partners as well as partners at the local level. Working with those trusted partners, Intel can help develop the solutions to deliver an amazing Games. We’re really excited to be a partner of the International Olympic Committee and the International Paralympic Committee to help make these Games the best yet.

Related Content

To learn more about technology powering event experiences, listen to Game-Changing Tech Takes Event Experience to the Next Level. For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Virtualization Opens Doors for Physical Security ISVs

The physical security market is booming, with customers eager to adopt AI, computer vision, and other emerging technologies. This gives systems integrators and independent software vendors (ISVs) an unprecedented opportunity to enter the market and distinguish themselves with software offerings.

Hyperconvergence makes this opportunity even more inviting. Rather than relying on multiple separate hardware components, hyperconverged architecture consolidates virtualized compute, virtualized networking, and software-defined storage into a single integrated system. These systems are more robust, easier to manage and deploy, more cost-effective, and less energy-intensive.

Consequently, the market is moving away from the single-purpose hardware of specialized surveillance systems and toward standard Intel® processor-based servers and appliances that can run multiple virtualized workloads. This shift brings hardware costs down and boosts the value of software.

The challenge for system integrators and software providers is to take advantage of these technology and market dynamics.

“Most big video surveillance solution providers bundle their software and hardware,” explains Tom Larson, President at Velasea LLC, a system builder specializing in hardware and computer vision. “This limits the opportunities to add value with additional software. Investing in hyperconverged hardware tends to be similarly unappealing.”

“Many companies involved with AI and computer vision don’t want hardware on the books,” Larson says. “That’s why we created a virtual OEM program that allows software experts to stay out of the hardware game.”

Opening Up the Physical Security Market

Originally founded as a spinoff of an IT distribution company, Velasea has evolved into a full-service technology aggregator that specializes in integrating multiple systems and architectures into a single appliance.

“Our goal is to help software companies enter the physical security market,” says Jimmy Whalen, CEO of Velasea. “Our appliances enable them to focus on software rather than hardware while ensuring those appliances are easy for their customers to deploy and upgrade.”

As part of this philosophy, Velasea works closely with its technology partners to streamline delivery of hyperconverged systems.

“There are challenges with virtualization and new architectures in the physical security space that Velasea is uniquely qualified to address,” Larson explains. “One is hardware consolidation, which happened a decade ago in IT but is still in the early stages in physical security. This can present challenges for security integrators who don’t have our background in IT infrastructure.”

Velasea builds appliances to de-risk projects. End users get something that works, and ISVs get an appliance with well-understood performance. More important, that appliance combines everything into a single hyperconverged system—so businesses can gain all the benefits of hyperconvergence without needing to think about underlying complexities.

Gaining easy access to hyperconverged systems is a boon for companies looking to expand into the surveillance space. Virtualization gives them the flexibility to test, develop, and roll out new features and solutions rapidly, responding to market demands with agility. What’s more, virtualization unlocks new levels of scalability and efficiency, enabling software companies to integrate cutting-edge technologies into their solutions more effectively.

Gaining easy access to hyperconverged systems is a boon for companies looking to expand into the #surveillance space. @velaseasystems via @insightdottech

New Path to Hardware Virtualization

Velasea collaborates with partners such as Quantum—a company that specializes in video and unstructured data—to bring the Quantum Unified Surveillance Platform (USP) to market. The USP solution consolidates compute, storage, and networking resources into a single virtualized solution capable of hosting not just video management systems but also a range of other security applications.

Supported by a subscription-based licensing model, Quantum USP can run on any hardware that incorporates Intel® Virtualization Technology (Intel® VT), which allows multiple workloads to operate simultaneously on a single shared hardware resource. This hardware-agnostic approach not only provides security integrators with unmatched flexibility in terms of infrastructure and architecture but also greatly reduces complexity and total cost of ownership.
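For readers evaluating hardware, Intel VT support is advertised through CPU feature flags (the `vmx` flag on Intel processors; AMD's equivalent, AMD-V, reports `svm`). As a purely illustrative sketch, not part of Quantum USP or Velasea's tooling, a few lines of Python can check a Linux host's `/proc/cpuinfo` for these flags:

```python
def cpu_virtualization_support(cpuinfo_text: str):
    """Return "Intel VT-x" if the vmx flag is present, "AMD-V" for svm,
    or None if no hardware virtualization extension is advertised."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

if __name__ == "__main__":
    # On Linux, inspect the live CPU info; the file won't exist elsewhere.
    try:
        with open("/proc/cpuinfo") as f:
            print(cpu_virtualization_support(f.read()) or "No VT flags exposed")
    except FileNotFoundError:
        print("Not a Linux host")
```

Note that firmware settings or a hypervisor can mask these flags even on capable silicon, so a missing flag means the feature is disabled or hidden, not necessarily absent.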

Leveraging the Power of Partnerships at the Edge—and Beyond

Velasea is exploring new use cases around edge computing. For example, Velasea recently helped an OEM develop a Power over Ethernet (PoE) switch based on the 12th generation Intel® Core processor that incorporates both AI and a Video Management System (VMS). By consolidating these functions onto a single hyperconverged platform, Velasea helped the company gain a competitive advantage with a more capable and efficient solution.

Alongside smarter appliances, collaborations like the one between Velasea and Quantum can support applications well beyond video surveillance—and even outside the bounds of physical security. Some potential markets identified by Velasea include broadcasting, retail, logistics, and public safety. That, according to Larson, is only the beginning.

“There’s a new generation of software emerging that is changing the game, and it’s going to change rapidly,” says Larson. “People are writing better code and utilizing systems better, and the result is that we’re seeing the entire landscape of physical security evolve. We partner with Intel, integrators, and software companies to be part of that evolution, developing optimized solutions to help businesses solve ‘last mile’ problems faster.”

“Our mission is to be a trusted partner for ISVs, providing them with the solutions and expertise necessary to support their customers,” he concludes. “It’s our partnerships that make this possible.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Retail: Stop Shrinkage and Streamline Checkout

The retail landscape is riddled with challenges—staffing shortages, supply chain disruptions, and inflation. But there’s a solution with a powerful ROI: AI in retail.

Imagine self-checkout with flawless image recognition, streamlined transactions, and reduced labor costs. AI can also free employees up so they can provide more meaningful customer interactions and boost customer satisfaction across the store.

This podcast dives deep into how AI transforms retail operations and enhances the in-store experience. Discover how AI in retail improves efficiency, cuts costs, and strengthens customer loyalty.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guest: Diebold Nixdorf

Our guest this episode is Matt Redwood, Vice President of Retail Technology at Diebold Nixdorf, a financial and retail technology company. Matt is a strategic business and transformational retail technology leader. At Diebold, he is responsible for ensuring top retailers can access high-quality hardware, software, and services.

Podcast Topics

Matt answers our questions about:

  • 1:59 – Different challenges retailers face today
  • 4:27 – Real AI benefits for in-store experiences
  • 9:39 – Where retailers can start implementing AI
  • 14:46 – The human element in AI transformations
  • 23:16 – Real-world customer use cases
  • 28:46 – Technology partnerships making AI in retail possible
  • 31:11 – Final thoughts and key takeaways

Related Content

To learn more about AI in retail, read New Retail POS Solutions Transform the Checkout Journey. For the latest innovations from Diebold Nixdorf, follow them on Twitter/X at @DieboldNixdorf and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” formerly known as “IoT Chat,” but with the same high-quality conversations around IoT, technology trends, and the latest innovations you’ve come to know. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today I’m joined by Matt Redwood, Vice President of Retail Technology at Diebold Nixdorf. Hey, Matt, thanks for joining us.

Matt Redwood: Hi, Christina. It is great to be here speaking with you today.

Christina Cardoza: So, for those of our listeners who are not familiar with Diebold, what can you tell us about the company and what you do there?

Matt Redwood: Diebold Nixdorf is a technology company of two halves. We provide banking systems to the world’s largest banks, and we provide retail technology to the world’s largest retailers. I’m responsible for retail technology. We provide hardware, software, and services to most of the top 25 retailers globally, as well as quite a few tier-two and tier-three retailers.

And we generally cover front-end technology, which we’ll go into in more detail, as well as software and enterprise software. And then we provide most of the services—break/fix and help desk services—to retailers to make sure all their technology is up and running for the maximum time possible.

Christina Cardoza: Great. And obviously we’ll be focusing on the retail aspect of Diebold Nixdorf today. We’ll have to get someone else on a later podcast to talk about the financial aspects of the company.

But the last time we spoke with you, Matt, it was for an article on insight.tech, and we spoke about POSs transforming retail checkouts to improve customer experiences in stores. But customer experiences—I think that’s just one pain point that retailers are facing today, one challenge. So, that’s where I wanted to start the conversation off today. What are the different challenges retailers face today, in addition to customer service, in stores?

Matt Redwood: So, it’s a bit of a tough time for retailers. And I think, regardless of what sub-vertical of retail you are in, most retailers are struggling with the same challenges. On one side, as you said, customer experience is key—the drive to make sure that the in-store experience is as good as possible against an ever-changing landscape of consumer expectations, which continue to rise. That horizon of expectation keeps moving, and retailers are really chasing after it. And post-COVID we are really starting to see retailers investing very heavily again in that in-store experience, which is great to see.

On the flip side, on the top line and bottom line, they’re being squeezed. You can read in the press about the global economic trends driving up the cost of goods, the cost of freight, and the cost of managing and running stores. So, their top line is being squeezed, their bottom line is being squeezed, and they must find ways of driving efficiencies in the store while also delivering that great consumer experience. It’s a real balance between getting the economics of retail right and satisfying the needs of your consumers.

And competition is as high as it’s probably ever been in retail, which is good in certain aspects. It helps with pricing and keeping inflation under control. But on the flip side it means that consumers are very fickle in terms of where they get their experience and where they shop. If they get a bad experience in a store, it’s easy for them to flip to another brand and get a better experience, potentially better products, better prices. So, it’s a very dynamically changing environment, and a very difficult one for retailers today.

Christina Cardoza: I’ve seen a lot of retailers start adding new technology, more intelligent technology and sensors, to be able to do some of these things: collect data at the edge in real time so they can make decisions as they’re happening. A lot of this is being powered by artificial intelligence, and I think we’re in a stage or a point today in the industry where AI is everywhere, and everybody’s trying to use it and get the benefits from it.

So, from your perspective, how is AI being able to address some of those challenges that you talked about, and what’s the reality of it? What are the real benefits that are coming? Because I feel like sometimes there’s hype, but where can we start using and getting actionable insights?

Matt Redwood: I think 2023, for most people, will be known as the year of AI. It’s where generative AI really took off in retail, and we started to see more and more AI applications in the retail market. And certainly, some companies really jumped to what I would consider the end goal of AI—which is completely changing the technology landscape, completely changing the customer journeys, the staff journeys, how you operate and run your stores—with this kind of euphoric view that AI could remove all technology that existed within stores.

That’s what I call the hype curve. We’re coming through the trough and going back up again: a lot of people have realized that that technology, although fantastically advanced, was probably quite a way off being realistically deployable en masse. The cost of the technology was high, and there were limitations in terms of store size, the number of products, and the number of consumers. So trying to take that technology and apply it to retailers today just wasn’t practical.

So, what we are seeing, and what a lot of retailers have done, is take stock of the situation, re-address what’s really important, focus in on the pain points, and then go with what we call point-solution AI technology: specific AI deployed for a specific use case to solve a specific problem. And we’re starting to see more and more of these solutions being trialed across retail stores, not only in grocery.

And the possibilities really are bountiful—they’re kind of endless. Some of the examples we’re seeing range from health and safety in store—using AI on top of CCTV networks to make sure fire exits aren’t blocked or that there are no foreign objects or liquid spills on the floor where someone might slip—to heat mapping to understand the flow of consumers around stores: How do I make that flow easier, but also how do I potentially commercialize it?

We’re seeing AI on top of existing technology—so, something very close to my heart: self-service. We’re starting to see more and more AI being applied on top of existing technologies to make them more efficient, to make them easier to use, to close loopholes, to boost the consumer experience. So, technologies like facial recognition for age verification.

I think we’ve all been in the situation where we’re trying to buy paracetamol or a bottle of wine, and we must wait for a member of staff to come over and approve our ID. The effect of that situation has been compounded by the fact that retailers are struggling to find staff, so now I’m having to wait a little bit longer for a member of staff to be available to come and approve my ID. Using AI in that environment drives greater efficiency at the frontend. It reduces the requirement on members of staff, and it boosts the consumer experience.

We’re also seeing technologies centered around the product. Item recognition is really taking off—not just for non-barcoded items, where we’ve seen fruit and vegetable selection, but for all item recognition. And in some environments, particularly smaller stores, why should you have to scan the barcode when you can identify the item by its image? So that’s an exciting technology.

And then finally, something that we’ve been working on over the last 18 months: anti-shrink technology using AI. Obviously shrink is something that’s really gone through the roof in a lot of retail environments, driven by the cost-of-living crisis. We’re now working with 54 different retailers on anti-shrink technology in one form or another, to close those loopholes and make it more difficult for those who are maliciously trying to steal. But there are also those who may just be unfamiliar with a process or have genuinely made a mistake, and we make sure we catch that too, without making it a bad experience for that particular consumer.

Christina Cardoza: It’s interesting; in the beginning of your response you mentioned how retailers, they were adding this technology to really transform everything, and they were sort of jumping to the end. And especially when you’re implementing artificial intelligence, which has so many connotations with it, so many misconceptions. It’s interesting, because I feel like these things need to be gradually introduced to consumers for them to be able to accept it, to understand it, to use it.

I can’t tell you how many times I’ve been in self-checkout, where we’re using AI or computer vision, and I can’t even put an item on the scale after I’m done scanning it because it needs to be in a bag, or I can’t bag it yet because of the bag weight. It’s just so complicated.

I know every retailer has different challenges and different points of entry, but would you say there is an easier place to start adopting some of this intelligent technology right now—easier both for consumers and for the store? And, like you were saying about facial recognition, I know consumers have privacy concerns around that. So how can stores implement this in a way that makes the most sense for consumers and for themselves and their business?

Matt Redwood: Sure. So, complex question; I’m going to break it down into parts. When we talked about retailers and some technologists rushing to that endgame, it really was about trying to boil the ocean with AI—to completely change the landscape of retail. And I think sometimes we forget that although the technology may exist—setting aside whether it’s commercially viable or practical to deploy—you also have to have consumer adoption. If you don’t have consumer adoption, no one will use the technology, and it’s worthless.

So we track the consumer-adoption curve and the technology-development curve, and it’s important to find something broadly in the middle of those two: What’s the right technology and innovation, and why am I deploying it? Making sure consumers adopt it, but, crucially, making sure it solves a need—a business need or a consumer desire. The “build it and they will come” mentality does not work with innovation, and it doesn’t work broadly with retail technology.

Consumers are savvy, and retailers are much, much more savvy in terms of deploying the technology. It must deliver. So we always recommend starting with data. A lot of people talk about data; there’s a lot of data out there, and it’s very easy to be swamped by it—we call it paralysis by analysis. But if you can really segment your data—if I’m looking at my transactional process or my customer journey, making sure I’m looking only at the data that relates to that—you can highlight the problems.

With 98%, 99% of the customers we work with now, we actually work on a consultative basis to deeply understand their stores, how they’re being run, and how their consumers shop in them. The data provides a lot of insight into that. So it’s really about understanding and analyzing: How is the store operating today? Where is the friction in the staff journey or the consumer journey? Understanding and quantifying the effect of that friction then builds the picture: “Okay, I’ve got a problem statement I want to solve. It’s having this impact on consumers and staff, and this is the impact on my business.” And that’s relatively easy to calculate.

The more problematic piece is then finding the right innovation to solve it. We very much try to put the consumer and the staff journey at the center of everything we do. Value for the consumer, value for the members of staff, and value for the retailer—that triangle of value is at the center of everything we do. And if a technology isn’t ticking all three of those boxes, we don’t put it into the range, and we don’t put it into the solutions or the stores.

Starting with that data is a bit like a treasure map. It highlights where your biggest areas of inefficiency are and then provides the compass to point you in the right direction: What’s the right technology to deploy to the store to solve that particular issue? And when you break it down like that—when instead of that AI-boil-the-ocean vision we start thinking about individual point solutions—it becomes much easier, because it’s much more manageable to deploy from a technology perspective, and it’s much easier to develop a solution that works for the particular use case or problem you’re trying to solve.

It’s also then arguably very easy to measure how successful it’s been once you put it into the store. The difficulty is that you don’t want to end up with a whole collection of point solutions that don’t talk to each other, which becomes very, very difficult to scale. Finding the right AI platform that allows you to scale all of these point solutions on a single platform is really, really important.

Christina Cardoza: Yeah. I love one thing that you said, which was basically, if it’s not solving a problem or if it’s not benefiting the customer or the business, then don’t do that. I feel like that’s a major problem that we have with implementing technology and seeing shiny new things. Let’s just add it to add it, but why are we adding it? It’s not going to get you a return on investment, and it’s not going to help your business if it’s not really doing anything for you. So, I think that that was a great point.

I want to come back to that facial recognition example again. I think we’ve all dealt with self-service checkouts where you’re scanning something, it doesn’t recognize it, and you need a human cashier to come and help you—and that just bottlenecks the entire process.

But there seem to be a lot more self-checkouts in stores. How does the role of the employees come into this? Speaking of consumer misconceptions, there’s a big one that this is going to replace employee jobs—especially when you see that there aren’t a lot of cashiers on the floor anymore. So where does the human element come into play with some of these?

Matt Redwood: So, the human element is really, really important to self-service, and it’s an element that’s quite often overlooked. If you look at the evolution of self-service, it was originally designed as a POS—an attended-till replacement—to ultimately remove staffing costs from stores. But self-service has been around for 20, 25 years now, and the drivers for deploying it are very different today compared to 20, 25 years ago.

I’d say 100% of the retailers we deal with are either putting in self-service—and they might be on their second or third iteration because they’ve been at it for a while—or they’re putting self-service in for the first time. A lot of retailers outside of grocery are just trying self-service for the first time. The approach is very, very different, and it’s much less about removing staff from the equation and more about staff redistribution. The inability to attract and retain staff is a really big problem for retailers, so they have to use their staff wisely. And where consumers value staff interaction most is where they need it—generally, help navigating the store, finding an item, asking a question about a particular item, or just general assistance.

Where self-service is really playing a major role in retail today is that it unlocks that member of staff. So I would say to anyone who looks at self-service and says, “Oh, that’s going to replace people’s jobs”—it’s not; it’s very much about labor redistribution now. It frees up a cashier who could be sat behind a till for a 12-hour shift to be up on their feet, engaging with consumers shoulder to shoulder in the aisle, where it really makes sense to deliver that consumer experience.

Particularly through Covid we saw that retailers that had self-service had much greater flexibility of operations within their stores. Post-Covid, we’re seeing that it allows them to boost the level of consumer experience where it really counts. Obviously, there’s always been friction associated with self-service—the adage of “unexpected item in the bagging area” and all of those common friction points—but they’re starting to really drain away.

A lot of focus has been put on fine-tuning and making sure that the base technology works to a much more acceptable level. We’re now seeing self-service that’s very efficient, where most of the time you can get through a transaction with no intervention—no requirement for a member of staff to come over. We are now in the fine-tuning era of self-service, and by fine-tuning I mean we’re really looking for that last 5% or 10% of efficiency gains.

So, at Diebold Nixdorf we’ve really focused on three core solutions out of the gate, and those three core technologies were developed because we identified via the data where the biggest friction points were. First, age verification: broadly, 22% of interventions are age related. That’s a big number. If we can use facial recognition to identify the age of the consumer and remove that validation process—A, it’s a much better experience for the consumer; B, it means faster transactions. Faster transactions mean less staff requirement at the till, but they also mean that consumers are moving through the frontend quicker.

So that means fewer queues—and queuing is the biggest bugbear for consumers when they get to checkout. So we’ve removed two of the biggest friction points associated with checkout with one piece of technology. Second, item recognition—particularly in grocery for fresh fruit and vegetables—was another area of frustration from a consumer perspective, but also of inefficiency from a retailer perspective: spending 20, 30, 40, 50 seconds trying to find the type of apples I’m looking to buy is frustrating, but it’s also time consuming. So we use item recognition to identify those apples so the consumer doesn’t have to run through that process. Good consumer experience, great productivity gains.

And then finally, shrink. We touched on it a little earlier, but obviously retail shrink has really gone through the roof, and I think a lot of retailers are battling to understand where their shrink is happening. The natural progression in that argument would be to say that self-service is a natural place for shrink, because it’s unmanned in a lot of environments.

But what we’re actually finding is that there are two different types of people who steal: people who maliciously try to steal, and those who have just made a mistake, where it’s genuinely unmalicious. And how you treat those two individuals has to be handled very, very differently, because you don’t want to alienate or embarrass the consumer who has genuinely made a mistake.

For those who are maliciously trying to steal: unfortunately, if we close all of the loopholes and make it impossible to steal at self-service, they will find somewhere else in the store to steal. So we’re in a kind of Whack-A-Mole environment, trying to close all the loopholes as quickly as possible. We’ve really focused our efforts on AI with behavioral tracking. The reason we use behavioral tracking is that once you can identify behavior, it doesn’t matter where you deploy the technology within the store—you can identify the malicious behavior and the shrink wherever it happens.

We very much focus on the frontend first: we’re deploying shrink detection onto self-service checkouts and onto POS lanes. But the natural next evolution is to then run that same solution on the CCTV network, and then we can identify shrink anywhere in the store. The human element of this is really important, because it’s relatively easy to identify that someone has stolen; what you then do in that scenario is the difficult part.

What you don't want to do is alienate a consumer that might have non-maliciously stolen. If they're maliciously stealing, you also need to deal with that in a particular way, but you also don't want to put your staff, your cashiers in your store: A, in danger; or B, in an environment that they don't feel comfortable with. So, we are very much putting the human element back into this: depending on the type of theft, we will deal with that situation differently.

But what we will always do is put the information in the members of staff's hands so that they can deal with that situation in the way that they see as appropriate. So with all of our shrink solutions—whether it's on self-service checkout or POS—once the shrink instance has been identified, an alert is sent to a member of staff's wearable technology—whether it's a smartwatch, tablet, phone, or even their POS lane—they're notified that there's a shrink instance that's happened, they know where it's happened, and they can even review the video clip.

So now they're empowered: they know what's happened in that situation, and they know what to look for. And then staff training really comes into play here, and we have a number of great partners that we work with on staff training who actually work through these scenarios to give the staff members the toolkit so that when they approach that member of the public, knowing exactly what's happened, they're trained to be able to deal with that situation in the most agreeable way possible—to defuse any aggression or any risk that might be associated, but also to make sure it's a good experience for that end consumer. So, the technology is only one-third of the actual solution; the human element is a massive part of it that shouldn't be overlooked.

Christina Cardoza: And I think the change in roles and responsibilities for cashiers to being able to have more meaningful interactions with customers—that's not only benefiting the customer experience, but that's also benefiting the employee experience, maybe improving employee retention. I was a cashier in college, and I can tell you that is a tedious and redundant process. I would have dreams of just scanning food and shouting out numbers. And it's not only retail shrink and loss—I think it's not only with malicious actors or by accident—but sometimes as a cashier I would hit the wrong number just because I was on autopilot, going on repeat. It was an error-prone process. So, I can see that helping it as well.

You mentioned that to really be able to be successful you need an AI solution that connects all of these together so that this is not happening in silos and the data is actually actionable. Obviously, we’re talking to Diebold because you guys are a leader in this space. So, I’m curious to hear how you are helping customers—if you have any real-world examples or case studies that you can share with us.

Matt Redwood: Yeah. And I'll be completely honest: we fell into the most obvious trap, looking back at our journey on AI. We've been working on this now for two and a half years, nearly three years. Originally we went out to market to try to find the best solutions to solve these three use cases, but what we quickly found was that there were lots of different competing technologies. There were a lot of potential third parties that we could have worked with, but the underlying technology was the same.

And we quickly realized that actually as a solution provider who retailers work with to actually build out their technology—not just across their checkout but all the way across the store—it was unrealistic to think that we could have 20 or 30 different solutions, all in the AI space, all providing different use cases, but none of them talking together.

So, we actually kind of paused our program and redesigned our go-to-market strategy, which was very much focused on providing an AI platform, and we work with a third party in this space who have a very, very mature AI platform. They were entering the retail market and didn't necessarily have the applications to run on top of it. So, we've worked with them to develop out these three applications as a starting point in the AI space.

But the nice thing about the AI platform is it effectively becomes the AI backbone for anything the retailer wants to do within their store from an AI perspective. This means that we can really deliver on our openness ethos, one that we drive throughout our product strategy: openness of software. And what we mean by that is we provide the building blocks for retailers. We are the trusted partner, we're the integration partner, but if there's a particular third party out there who has the market-leading solution in a particular area, it doesn't make sense for us to go and reinvent the wheel.

So, when we talk about openness, the ethos that we take to our retail customers—but also that permeates through our R&D and product-management ethos—is to very much work with the best of breed within the market. And our strategy is to basically provide this AI platform for retailers. We will provide applications that can sit on top of it—like age verification, shrink reduction, item recognition, process or people tracking—but if there is a particular partner out there that is market leading in health and safety, we can plug them on top of the platform.

And what that means is the retailer can build this kind of ecosystem of AI partners, all providing best-of-breed solutions, but, critically, they're all plugged into a single platform. So, they utilize the same business logic; they utilize common databases, like an item database or loyalty schemes and things like that. That makes the solutions very, very scalable. It makes them much easier to manage, but it also means that they're all talking to each other.

And the beauty of AI is it is self-learning to a certain extent. So, the more applications that we plug into this, the more physical touch points that we have in the store, the more information is flowing through the platform, and then the quicker it can develop and the quicker it can learn. So, it's very much a self-perpetuating solution, and we're very much at the beginning of this journey.

As I say, we've got about 54 different customers using AI in one form or another. But we very much see this as a much, much longer journey, where we're starting to build an ecosystem of solutions that will ultimately move us towards what we call the "intelligent store." And the intelligent store for us isn't necessarily removing the physical touchpoint or removing the technology; it's about providing intelligence to retailers.

And what I mean by that is every device that sits in the store is effectively a data-capture device. And that’s a two-way street: you can push data down to them, you can pull data back. So, whether it’s a shelf edge camera, or whether it’s a staff device or a self-service checkout or a scanner or a screen—these are all data inputs. There might be AI point solutions associated with them, but the AI platform allows you to connect all of these together and create an intelligent store, where intelligence really permeates every single area of the store.

It does mean there's a huge amount of data available, but I think the retailers that are really going to advance quickly are the ones that work out what to do with this data. Because it can and it should inform every single decision or direction that you take as a retailer—whether it's how I price my products, where my products are positioned within the stores, how I offer loyalty systems to consumers, how I staff my stores, how I operationalize them—but also what technology exists within the stores.

So, data—it's a cliché—but data will form the basis of every single decision that we make, whether from a technology and solution-provider perspective or from a retail-operations and store-design perspective. So, it's a really, really exciting journey that we're on.

Christina Cardoza: Absolutely. And I think it's really important to find a solution provider that is willing to work with others in the industry and leverage their expertise. I think that helps prevent vendor lock-in; it allows you to take advantage of the latest technologies and enables you to innovate faster, working with some of the best partners in the market. So, speaking of best of breed: insight.tech and the "insight.tech Talk" are sponsored by Intel, so I'm curious if there's anything you can tell us about that partnership and the technology that you use to make some of your AI retail solutions happen.

Matt Redwood: Absolutely. So, we work very, very closely with Intel—not just on the AI topic but from our core platform itself. Intel very much underpins a large part of our portfolio, so we have a very, very close working relationship with them—not just on the solutions that we deploy into stores today but also our roadmap on our development. We work very closely with Intel on their developments: where they’re going with their solutions and how we can better integrate them into our solutions to give retailers better solutions but also much, much greater flexibility for the future.

And I think a good example of that is probably the speed of development of technology. If you think about traditional point of sale or self-service checkout, if you go back five or 10 years, a retailer would make a choice for that particular type of technology, and it would sit in that store for five, seven, sometimes 10 years, as long as the technology kept running. The speed of development of technology has increased immeasurably. The expectations of consumers have also increased immeasurably. And so balancing those two is really, really key.

Where we work very, very closely with Intel is on some of their scalable platforms. So, we know that retailers have a requirement today, but, particularly with these AI topics, the amount of computing power that will be required in three or five or seven years will be very, very different from the requirements today. So, giving retailers the ability to scale this technology, so that whatever they deploy today is not throwaway in two years' time and they can evolve it to meet their requirements at that particular time, is an absolute game changer. And that's something we're working on with Intel very, very closely.

Christina Cardoza: Yeah, absolutely agree. Things are changing every day: not even five, six, seven years from now, but five weeks from now things can be completely different. So being able to scale and to adapt is especially important in today’s landscape.

Well, it’s been great hearing about all of these solutions, especially how Diebold is helping retailers from end to end with the item recognition, facial recognition, and retail shrink. We are running out of time, but before we go, Matt, I’m curious if there are any final thoughts or final takeaways that you want to leave our listeners with today.

Matt Redwood:  I think there’s a lot of misconception, particularly around AI. What I would say is: start with the data. Identify the business requirements or the problem that you are looking to solve, and then find the right provider that’s going to enable you to deliver against those requirements today but also gives you that longevity of scalability. Because AI is a journey; it’s very much a solution that learns over a period of time. So, choosing your solution provider is extremely important, because it is a marriage, and it is a long marriage, and you have to make sure that you’ve made the right choice. So, use the data to help inform those decisions, and, yeah, it’ll be very, very exciting to see where AI and retail technology goes, over the next two, three, five years.

Christina Cardoza: Yeah, absolutely. And I would say also: choose a partner that you can trust and that is transparent about how they are using the data. Like with the age verification, for instance, you want to make sure that that data isn't being saved and that the system is going to protect your privacy and your information.

Matt Redwood: Absolutely. Data privacy is absolutely key and is a very, very careful consideration when you are designing or choosing the solution that you want to deploy to stores.

Christina Cardoza: Excellent. Well, thank you again for joining us. I invite all our listeners to visit the Diebold Nixdorf website: see how else they can help you in the retail space. As well as insight.tech, where we’ll continue to cover partners like Diebold and the latest trends in this space. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Assisted Checkout Boosts Customer Satisfaction

Self-checkout seemed like such a great idea: Let grocery and convenience store customers skip the lines, scan and pay for their own merchandise, and be out the door—freeing employees for other duties. But reality doesn't always measure up to the vision. Lines for self-checkout often exceed those for staffed lanes. Customers take longer to scan items than experienced cashiers do, and may become confused or make mistakes, requiring them to wait for assistance. And for retailers, shrinkage is a major pain point.

Some stores have experimented with autonomous (cashierless) “just walk out” payment systems, but these stores may offer only a limited selection of goods and require significant technology investments.

Computer vision-assisted checkout—backed up by store personnel—may provide the happy medium both retailers and their customers seek. Fast and accurate, it eliminates the need for item-by-item scanning and allows stores to preserve the “service with a smile” tradition that keeps people coming back.

Smoother Retail Checkout Solution

Retailers are as frustrated as their customers by service delays, but chronic labor shortages and rising wages often prevent them from hiring additional staff, says Aykut Dengi, CEO and Co-founder of RadiusAI, a computer vision company focused on AI technology solutions for retailers.

To get a better handle on wait times, retailers started measuring how long it took for customers to get to a checkout stand and complete their transactions. The numbers weren’t good.

“They asked us if we could provide technology to solve the problem. So, we created ShopAssist,” Dengi says. ShopAssist replaces checkout-counter scanners with computer vision cameras, which work much faster and require minimal labor from customers or cashiers.

With ShopAssist, customers unload a basket of goods onto the counter. In less than a second, the cameras recognize each item within the group. An itemized bill showing prices, product images, and total cost is displayed on both the customer and cashier screens. The customer is then free to complete their transaction on their own. If they want to use a coupon or purchase an item requiring an ID, ShopAssist immediately informs the cashier for assistance. The interaction between the cashier and customer is face to face with ShopAssist, as in traditional cashier checkouts. This helps create a favorable experience for customers and employees alike.
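The flow described above—recognize every item on the counter in one pass, then present an itemized bill to both screens—can be sketched as a simple aggregation step. This is an illustrative sketch only; the product names, prices, and function name are hypothetical, and the SKU list stands in for the output of the computer vision cameras.

```python
from collections import Counter

# Hypothetical price lookup; in a real deployment this would come from
# the store's point-of-sale product database.
PRICES = {"apple": 0.50, "soda": 1.25, "sandwich": 4.00}

def itemized_bill(recognized_skus):
    """Group the SKUs recognized by the cameras into an itemized bill.

    recognized_skus: list of SKU names, one per detected item.
    Returns (line_items, total), where line_items maps SKU -> (qty, subtotal).
    """
    counts = Counter(recognized_skus)
    line_items = {
        sku: (qty, round(qty * PRICES[sku], 2)) for sku, qty in counts.items()
    }
    total = round(sum(subtotal for _, subtotal in line_items.values()), 2)
    return line_items, total

# One camera pass over the counter might yield:
items, total = itemized_bill(["apple", "apple", "soda"])
# items -> {"apple": (2, 1.0), "soda": (1, 1.25)}; total -> 2.25
```

The point of the aggregation is that the customer never scans item by item: the cameras produce the full list at once, and the bill follows immediately.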

In addition to speeding transactions, the computer vision system helps prevent shrinkage, a growing problem for retailers, especially at self-checkout. For example, a person may take a barcode sticker from an inexpensive item and place it on a higher-value product. This technique won’t work with ShopAssist, which can read barcodes but pays more attention to a product’s image—just as a human would do. Computer vision also prevents problems from customers who neglect to scan items or scan them improperly.

Improving Retail Inventory Management

Shrinkage and scanning errors not only cause retailers to lose revenue but also lead to inaccurate inventory tracking. Merchandise without barcodes, such as food service items, is particularly problematic. For example, some stores might offer a variety of grilled food items, such as hot dogs, taquitos, and burritos. These items likely have varied costs, and if they are not accurately charged to the customer and accounted for, the store will lose profit and inventory will be incorrect. Drink dispensers also cause issues when customers use soda cups for iced coffee, for example.

As convenience stores look to increase the popularity of their offerings, freshness and availability are critical. "Prepared food is a growing profit source at convenience stores, but if you cook the wrong items, many end up in the trash," Dengi says. "ShopAssist visually identifies items correctly, lowering food waste and contributing to the bottom line."

ShopAssist's flexibility in product tracking allows merchants to include a wider variety of goods than technologies that limit the types of items that can be sold. Autonomous checkout also restricts the way products can be displayed, and it is complicated and costly to install. "Placing cameras on every shelf is a formidable expense," Dengi says.

The ShopAssist software platform relies on the performance of Intel processors for visual tasks and Intel GPUs for faster inferencing—helping to identify a wide variety of merchandise quickly, including items the system hasn’t seen before. Trying new products is important to retailers. “A typical store brings in a hundred new items a week, which often include local or specialized vendors and hometown favorites,” Dengi says. “ShopAssist easily captures new images of products not yet introduced to the point-of-sale system and federates them across the enterprise, saving time and expense.”

When a product is first introduced, the cameras read its barcode in addition to capturing its image and the technology learns to associate the two. RadiusAI uses the Intel® OpenVINO toolkit to continually optimize ShopAssist processes, including product recognition.
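The enrollment step described here—reading a new product's barcode while capturing its image, then learning to associate the two—can be sketched as a lookup that falls back to the barcode until the item is known visually. This is a toy illustration, not RadiusAI's implementation: the class and method names are hypothetical, and a production system would match learned image embeddings rather than exact fingerprint strings.

```python
class ProductEnroller:
    """Associates product images with SKUs learned from barcode reads."""

    def __init__(self):
        self.image_to_sku = {}  # image fingerprint -> SKU

    def checkout_scan(self, image_fingerprint, barcode=None):
        """Identify an item visually; enroll it if only the barcode is known."""
        if image_fingerprint in self.image_to_sku:
            # Item was enrolled earlier: recognized by image alone.
            return self.image_to_sku[image_fingerprint]
        if barcode is not None:
            # First sighting: learn the image/SKU association so future
            # checkouts can skip the barcode read entirely.
            self.image_to_sku[image_fingerprint] = barcode
            return barcode
        return None  # unknown item: flag for cashier assistance

enroller = ProductEnroller()
enroller.checkout_scan("img_123", barcode="SKU-APPLE")  # enrolled via barcode
enroller.checkout_scan("img_123")  # now recognized by image alone
```

Once an association exists, it can be federated across stores, which is what lets a new local product become recognizable enterprise-wide after a single enrollment.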

RadiusAI also works with retailers and systems integrators to tailor ShopAssist hardware or software to individual needs. For example, in addition to enabling computer vision, the Intel processors can be used to run other devices in the store.

“Retailers are adopting more edge solutions, and they’re familiar with using Intel hardware,” Dengi says. “For example, they can start with using the ShopAssist system for checkouts and later decide to manage their ovens on the same computer.”

Adding the RadiusAI solution called ShopAssist Pulse, retailers can expand the power of assisted checkout, inventory management, and food operations by using the existing store cameras.

“If someone picks up two slices of pizza and puts them in the same box, or eats one while shopping, the system may recognize the customer and correctly charge for two slices. It can also notify staff, allowing a non-confrontational loss prevention strategy. They may also want to put another pizza in the oven for the lunch rush,” Dengi says.

“When it’s implemented the right way, #ComputerVision allows employees to help when necessary, without creating significant overhead” — Aykut Dengi, @RadiusAI via @insightdottech

Preserving the Social Experience

While customers appreciate speedy transactions, they also value customer service and human interaction—elements that are often missing at autonomous self-checkout. "People don't go to the store just to buy things; they chat with the employees. It's a social experience," Dengi says. Many self-checkout systems are located away from store staff. This leads to increased losses, slower transaction times, and customer satisfaction issues.

In the future, retail technology itself may become more personalized. For example, RadiusAI is working with CPG companies to create on-the-spot generative AI promotions based on a customer’s purchases. “The best technology is invisible to customers and employees,” Dengi says. “When it’s implemented the right way, computer vision allows employees to help when necessary, without creating significant overhead.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Restaurants Helps Lower Costs and Grow Sales

From dine-in restaurants to quick-serve takeout and workplace cafeterias, customers expect not just great-tasting food but fast and convenient experiences, too. At the same time, the food service industry is looking for more efficient operations, accurate inventory management, and a boost in profitability.

To stay ahead of the game, large food retailers rely on transformational technologies like edge AI and computer vision. Platforms powered by these technologies enable retailers to transition from time-consuming conventional checkout to the efficient self-service their customers want. Plus, they provide valuable data that helps retailers understand customer preferences and minimize wasted resources, stock shortages, and suboptimal performance.

Shanghai Kaijing Information Technology Co. Ltd., a leading IT solutions provider, developed its AI-Powered Automated Checkout Services to overcome food service challenges, meet new customer demands, and create new opportunities.

To stay ahead of the game, large food #retailers rely on transformational technologies like #EdgeAI and #ComputerVision. Shanghai Kaijing Information Technology Co. Ltd. via @insightdottech

Food Service Retail Transformation at Work

Take, for example, LaoXiang Chicken Catering Co., a quick-service restaurant (QSR) chain with a network of more than 2,000 locations across China. The restaurant’s goal was to overcome a series of challenges such as tracking performance and maximizing resources across all store locations by:

Managing operations at scale: A holistic understanding of operations across all stores without diminishing service quality was the top priority. With a plan for significant expansion—to a count of 10,000 stores—the company needed to make sure the entire operation continued to run smoothly even with this monumental growth.

Increasing profitability: QSRs typically run on low profit margins due to high food and rent costs. But their largest expenditure is on staff, which can account for 30% of monthly costs. And with a growing number of stores, food waste was eating into profit margins.

Improving the customer experience: One of the challenges the restaurant faced was maintaining consistent performance across the chain—and rapid expansion compounds the problem. Regardless of where customers dine, LaoXiang Chicken needed to guarantee a uniform and positive experience across every store. To maintain a positive brand reputation, the company needed to overcome issues like slow service, long lines, inconsistent food quality, and unhygienic areas.

Enhancing operational transparency: The company wanted more visibility into store results to identify the best and lowest performers. This information would allow management to pinpoint which factors contributed to the results and implement corrective actions. The roadblock was that they were using outdated and inefficient manual methods to do so, making the task nearly impossible to perform at scale.

It became clear that LaoXiang Chicken needed the Shanghai Kaijing Canteen solution, an end-to-end digital retail platform built on AI-Powered Automated Checkout Services. The platform offers AI and computer vision-enabled function modules for applications such as product recognition and weight measurement, pricing, facial recognition, payments, data analysis, comprehensive system management, and more.

Because food service locations have different physical layouts and products, the POS stations come in three form factors—desktop all-in-one, counter-style checkout, and vertical checkout—giving food retailers like LaoXiang Chicken the flexibility needed to accommodate every store (Figure 1).

Picture of three Canteen checkout stations
Figure 1. Automated checkout systems come in three models: desktop all-in-one, counter-style checkout, and vertical checkout POS. (Source: Shanghai Kaijing)

AI in Restaurants Delivers Measurable Results

Working with Shanghai Kaijing, LaoXiang Chicken achieved significant results, which included the ability to:

  • Reduce the need for manual checkout tasks and shorten customer checkout time via real-time product SKU identification and transaction bill generation.
  • Predict peak dining times by analyzing customer flow patterns, enabling managers to understand traffic trends, proactively prepare for customer surges or downtimes, and make informed operational and resource adjustments.
  • Minimize food waste with effective inventory management, ensuring stores optimize stock levels based on accurate consumption pattern forecasts.
  • Improve standard operating procedures related to food quality, kitchen hygiene, and adherence to safety regulations by leveraging AI and computer vision-driven analytics.
  • Empower management with a high-level overview of performance metrics across store locations, identifying weak areas of operations so necessary adjustments can be prioritized and promptly addressed.
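The peak-time prediction capability in the list above can be illustrated with a toy version: average historical transaction counts by hour and flag the hours that sit well above the overall mean. The observations, threshold, and function name below are invented for illustration; a production system would use richer features than raw hourly counts.

```python
from collections import defaultdict

def peak_hours(transactions, threshold=1.5):
    """Flag hours whose average traffic exceeds `threshold` x the overall mean.

    transactions: list of (day, hour, count) observations.
    Returns the set of hours considered peak dining times.
    """
    totals, days = defaultdict(int), defaultdict(int)
    for _, hour, count in transactions:
        totals[hour] += count
        days[hour] += 1
    # Average traffic per hour across all observed days.
    averages = {h: totals[h] / days[h] for h in totals}
    overall = sum(averages.values()) / len(averages)
    return {h for h, avg in averages.items() if avg > threshold * overall}

# Two days of toy observations: lunch (12:00) clearly dominates.
obs = [("mon", 10, 20), ("mon", 12, 90), ("mon", 15, 25),
       ("tue", 10, 22), ("tue", 12, 110), ("tue", 15, 30)]
# peak_hours(obs) -> {12}
```

Flagged hours like these are what let managers staff up and start cooking ahead of a surge instead of reacting to it.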

"Overall, our platform enabled LaoXiang Chicken to implement strategic decision-making, which contributed to revenue growth and improved customer experience," says Zhengting He, CTO of Shanghai Kaijing. Each store achieved an approximately $450 reduction per month in labor costs and up to an 80% improvement in checkout efficiency, eliminating customer loss due to long lines. In addition, the company saw a 10% sales increase per day, leading to an annual profit increase of approximately $38,700 in each store.

Tech and Tools Power AI in Restaurants

To enhance Canteen’s platform performance, the company turned to Intel technologies. Intel® Core processors and edge AI technology provide the performance needed for near real-time SKU identification with a 99% accuracy rate. “Our testing shows that this level of performance and accuracy facilitates an average checkout time of three seconds,” says Zhengting He. And with its advanced computer vision capabilities, the Intel® OpenVINO toolkit optimizes inferencing performance.

The Intel® oneAPI Video Processing Library also plays an important role in Canteen’s video analytics capabilities. For example, the advanced hardware and software capabilities on Intel® GPUs allow AI-driven quality and compliance checks to run off-hours.

Shanghai Kaijing goes beyond delivering advanced products like Canteen by providing other services, including customized consulting, product lifecycle support, CRM, and data analytics to help optimize supply chains and improve operational efficiency.

The company provides services that cater to diverse client needs, ensuring their sustainable growth strategies and adherence to industry standards. “Our leading customers are expanding rapidly, and we believe such trends will continue,” says John Yang, Shanghai Kaijing CMO. “We are excited to help companies like LaoXiang Chicken to continue working toward its goal of providing quality and affordable food to people all over the world.”

 

Edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Game-Changing Tech Takes Event Experience to the Next Level

Pulling off an event like the Olympic and Paralympic Games involves intricate behind-the-scenes work. From connecting people through private 5G platforms to creating virtual experiences and using AI and digital twins for planning and execution, expertise and reliable partnerships are crucial.

This podcast explores how advanced technology is leveraged to create interactive and immersive event experiences, the essential partnerships involved, and a forward-looking perspective on future innovations in event management.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guest: Intel

Our guest this episode is Sarah Vickers, Head of the Olympic and Paralympic Program at Intel. Sarah has been working on Intel’s Olympic and Paralympic program for about seven years. She’s responsible for all aspects of the games, including operations, guest experience, and ecosystem support.

Podcast Topics

Sarah answers our questions about:

  • 1:13 – Intel’s involvement in the Olympic and Paralympic Games
  • 2:58 – Event preparation before, during, and after the event
  • 7:01 – The process of launching a large-scale event experience
  • 8:32 – What happens with all the data after the event
  • 10:31 – New technology innovations for event experiences
  • 11:58 – The value of Intel’s ecosystem and partnerships
  • 12:48 – Applying Intel technology beyond the Olympics

Related Content

For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to insight.tech Talk, formerly known as “IoT Chat”, but with the same high-quality conversations around Internet of Things, technology trends, and the latest innovations you’ve come to know and love. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And today we’re going to be talking about how technology can uplevel event experiences with Sarah Vickers from Intel.

But as always before we get started, let’s get to know our guest. Hi, Sarah. Thanks for joining us.

Sarah Vickers: Hi, it’s great to be here.

Christina Cardoza: What can you tell us about yourself and what you do at Intel?

Sarah Vickers: So, I’ve been with Intel about nine years, but I’ve been working on our Olympic and Paralympic program for about seven. Currently I’m responsible for all aspects of our Olympic program, which includes games operations, our guest experience, and anything to support the ecosystem.

Christina Cardoza: Great. And of course, the Olympic and Paralympic Games are happening in Paris soon. So, very exciting. I wanted to start the conversation around there. We’re going to be talking about event experiences, but since the Olympics is such a timely event, I wanted to see if you could give us an overview of Intel’s involvement at the event. What motivated you guys to become a technology partner? You said you’ve been doing it for the last couple of years, so how has it evolved?

Sarah Vickers: Sure. One of the things that Intel loves about the Games is that it is really the largest and most complex sporting event on Earth, with billions of viewers around the world. So it's a really exciting opportunity for us to demonstrate Intel's technology leadership in a really scalable way.

We're not doing this to have proofs of concept; we're actually integrating our technology to help with the success of the Games. And we think about that in a variety of ways, because there are so many different aspects to calling the Games successful. You've got the really complex operations to deliver the Games—moving athletes and fans and volunteers around, getting people from A to B. That's complex in itself, but doing that across 17 days and across so many sports? It's super complex.

You’ve got the broadcast experience—so, billions of people watching at home. That’s just evolved and become more complex when you think about all the different devices and how people consume media. So we do a lot of applications working with Olympic Broadcasting Services to deliver outstanding experiences based on Intel technology.

You've got the fan experience, whether that be, again, operationally, ease of getting around, versus how you actually entertain during the Games. The sports themselves provide a great sense of entertainment, but there's all that in-between time. What can we do to help make that experience even better?

Christina Cardoza: And you talked about how this is such a large event over the course of a couple of days. I can imagine how complex it is during those days, but how is Intel technology being used behind the scenes—not only during the event itself, but how are you preparing before the event, making sure everything is up and running and it’s a smooth experience? And then what happens after the event? Because I’m sure it’s not just for the actual live sessions.

Sarah Vickers: We start working on the Games years before, with the International Olympic Committee (IOC), with the International Paralympic Committee, the organizing committee—in this case, Paris 2024—to really try to understand what are we trying to solve? How can we take what we’ve done in the past and make it better? Or, what are the new challenges that have evolved since the last Games?

So we really work it as a partnership and really think about what are we trying to do. We have taken solutions that we’ve done in Tokyo and made them better. A good example of that is what we’re calling digital twinning. Digital twinning is the opportunity to have a digital twin of all the venues and really understand what the venues are going to be like in a 3D way.

How this helps is, if you think about broadcasters, they really need to understand where camera placement’s going to be and how that’s impacted by different things. If you think about the transition from the Olympic Games to the Paralympic Games, you’ve got a lot of changes that you need to do for accessibility and things like that for the athletes. This makes it possible to do those things in advance, rather than doing it as it happens and figuring out, oh, this solution actually doesn’t work.

So there’s a lot of benefit to that, as well as just the opportunity to reduce travel. You can do it from anywhere, you can do it from your PC. So it makes it really easy. Another use case that we’re helping out with, from an operational perspective, is really just understanding the data. So there’s a lot of people behind the scenes, right? If you think about all the media that’s on the ground, all the workforce, we’re helping the International Olympic Committee and Paris 2024 understand that people movement to optimize facilities for them.

So that could be either making sure that we’ve got the right occupancy levels, making sure that people have the right exits and entries—really using that data to make real-time decisions based on that data. But what that also does is it helps inform the next Games because they’ve got a base set of data that they can use to help model and plan for those complicated situations.

A final example that I’ll give, just from an operational perspective, is on the athletes’ side. This is the athletes’ moment. For some of them it’s the highest moment in their career. And really what you want to do is make it as uncomplicated as possible. You want them to be able to focus on their performance and not think about the things that they have to think about to get to that performance. So whether that be food, whether that be transportation, whether that be accommodations—there’s so many different things while they’re there that they need to think about.

We’ve worked with the IOC, and we’re implementing a chatbot for them for these Games. So, really a chatbot based on our AI technology platforms. What that’s going to do is it’s going to enable athletes to ask questions, get conversational answers about day-to-day things. And that will continue to get smarter as we get more answers and understand what’s working. So that’s going to be used throughout the Games, which I think is going to be a game changer for athletes.

Christina Cardoza: Yeah, absolutely. And talking about things getting smarter, I think it’s probably so exciting to see how technology has evolved over the last couple of years, and now you’re able to leverage all of these new tools to help make the Games better.

You mentioned digital twins. I’m sure you are using some Intel® SceneScape behind the scenes. And then there’s all this AI and all the processors that Intel has to really make everything, like you mentioned, real time: make sense of the data, make sure that you can make informed decisions in real time. And all of this, I’m sure, is happening at the edge so it is low latency and we’re getting all this information as quick as possible.

You said you guys start preparing for the Games years in advance. I’m sure there’s a lot of planning and preparation that goes into this. Just looking at not only how are we going to integrate all this technology and make it make sense and eliminate some of the silos, but what it looks like during the event, what it looks like after the event. So, how do you start off years in advance? Walk me through the process of getting from: the Games are coming up, this is how we prepare, and then this is how we launch it.

Sarah Vickers: Really what we do is we sit down and say, “What are the things that need to be delivered?” Right? There’s a set of expectations for every Games, and then there’s that set of expectations of what do we want to do that’s different? And it’s really a process where we sit down and we ask those questions: both, what are you trying to solve, what are you worried about; and what are the things that need to happen?

And then we do an assessment and say, “How can Intel’s technology help?” And we work very closely with a number of partners to try to figure that out. Then we develop a roadmap of solutions. And then, for each solution on that roadmap, it’s typical technology integration: we have a plan and a PM that works closely with those stakeholders to deliver it.

Some of those solutions are delivered in advance. So, digital twinning for example—the benefit of that is not during the Games. The benefit is really in the months before the Games, so that solution has been in use over a long period of time. And then you’ve got other solutions that are obviously for during the Games. So it really depends on the technology integration of what that process looks like. And then hopefully during the Games everything goes smoothly, and we can just enjoy it and watch our technology shine. But we have staff on site to make sure that everything runs smoothly and goes off without a hitch.

Christina Cardoza: Is there anything that happens after the Games? Any more work being done on Intel’s side with recorded sessions or recorded games, or anything that we point to post-event?

Sarah Vickers: I think if you think about what happens for the Games, there’s so much data, right? So there’s so much data, and data means so many different things. So you’ve got content, right? When you’ve got broadcast data, you’ve got all the highlights and all those things that are being done. You’ve got all the data that we’re helping the IOC collect to understand people movement and things like that. So that data is definitely being used to help plan the next set of Games. When you think about broadcast, that broadcast information is being used to create models and understand for future Games as well, or future entertainment.

One of the really interesting use cases that we’re working on with Olympic Broadcasting Services is AI highlights. We are actually creating highlights using artificial intelligence platforms, and that’s going to help create highlights that just weren’t possible before because they were all generated by people, and there were only a certain number of people that could do that over time.

But if you think about what we talked about earlier, where how people consume broadcast is changing, people are much more demanding in their expectations of broadcast and want things that are a little more personalized. And you’ve got 206 different countries participating in the Games, multiple languages, multiple sports. And there are countries where certain sports are really important that aren’t important to some of the bigger countries that you would usually see dominate this space.

So what the AI highlights can do is generate highlights that are really customized based on certain things. This is really exciting, and we’re going to see this evolve over time, because what will happen is the models will learn over time and they’ll get smarter, and then you’re going to have even better and more awesome highlights for the fans.

Christina Cardoza: Yeah, I was going to ask if there were any lessons learned that you have experienced over the last couple of years that you’re bringing into this event, or if there’s any new technologies and innovations out there that you’re excited to use. It sounds like it’s AI and digital twinning this year. Is there anything you wanted to add to that?

Sarah Vickers: I mean, I think when you think about AI and Intel and the whole idea behind “AI Everywhere,” this is really excellent ground to demonstrate how Intel’s AI platforms will really change a lot of aspects of the Games. So we’re really excited about a lot of our activations that are demonstrating what we can do with AI, and I think what’s happened over time is just that technology and AI have gotten smarter; it’s becoming more mainstream. So you’re just going to see more of that, because that’s what the expectations are. And we can use that data—the compute is possible now—to build those models. So we’re going to have a lot of different AI applications throughout the Games.

Christina Cardoza: It’s interesting looking at an event at such a global scale, because at insight.tech we write a lot about Intel® Partner Alliance members, how they partner together with Intel to make various different things happen: digital signage in stores, the data analytics, the cameras, the people occupancy. It sounds like all of this is happening at the event. So, all of these technologies that we’ve been talking about, that our partners are working with Intel to make happen, it’s such a scale that it’s this end-to-end solution. Everything is happening at the Olympics: the networking, the real-time analytics, everything at the edge.

So I’m curious, what is the value of Intel’s ecosystem and the partnership to make something like the Olympics and the Paralympics happen?

Sarah Vickers: Intel doesn’t do things alone, right? Like you said, we rely on strong partnerships to help deliver that. We really work and try to understand what solution is best and then work with that ecosystem to help deliver that. And that can be a variety of types of partners. So, we have the lucky opportunity to work with some other top Olympic partners, and then we work through some of our other partners at the local level, and we work across our ecosystem to help make this happen. We definitely cannot do it alone.

Christina Cardoza: And of course we’ve been talking about all of this technology in the context of the Olympic and Paralympic Games, but there are other events and other use cases I think some of this could be applied to. So I’m curious, how can Intel technology be used beyond the Olympics? What are some other industries or sectors where you see some of the things you’ve been doing before, during, and after the event being applied?

Sarah Vickers: Sure. I think there’s—almost every application that we have—there’s an application for that both at other events, but also beyond sport. So I think the way we think about it is, how does this demonstrate what we can do, and then how does that scale?

I’ll give another example of a use case that we’re doing that’s a really fun application of AI platforms, which is really what we’re calling AI Talent Identification. We are using AI to do biomechanical analysis to help fans that are going to be at athletics and rugby understand which Olympic sport they’re most aligned to. So they’re going to do a bunch of fun exercises, we’re going to mash up that data, and then tell them, “Okay, you are most likely to do this.” And that’s just a fun application of AI.

But if you think about what that biomechanical analysis can do, that can be used in a variety of ways. If you think about physiotherapy, if you think about occupational health, there’s a lot of different ways that this can help improve people’s lifestyles that you can use this same application. You think about digital twinning—that application has gone beyond, and you’re seeing a lot of that in manufacturing, in cities, in all of these different aspects that this type of technology will have that opportunity to help benefit the outcome of whatever their goals may be.

Christina Cardoza: Yeah. That reminds me of the demo Pat Gelsinger did last year at Intel Innovation, where, I believe, he was trying to learn how he could improve his skills as a soccer player using AI and some of these biometrics. So it’s great to see how it’s advancing since last year, how it can actually be used in the real world, and how it is actually being implemented in some of these areas. So, exciting to see this technology.

I’m curious—I know we’ve covered a lot about the Olympic Games, are there any key takeaways that you think our listeners should know about doing an event at such scale using Intel technology? Any final thoughts you want to leave us with today?

Sarah Vickers: The Games are going to be a massive event, and in this post-pandemic era I think we’re all excited to see the Games back to their glory, with fans in the stands. It’s really exciting, but it’s obviously very complex. Paris is a giant, complicated city even without an Olympic Games or Paralympic Games, and so bringing them on is going to be really hard. But by working with Intel and trusting your partners, we can help develop the solutions to deliver an amazing Games. And we’re really excited to be a partner of the International Olympic Committee and the International Paralympic Committee to help make these Games the best yet.

Christina Cardoza: Absolutely. Well, I can’t wait to see the Games in action and some of this Intel technology we’ve been talking about. I invite all of our listeners, if you have any questions, or are looking to partner with Intel and leverage some of this technology in your own event experiences, to visit the Intel website and to keep up to date on insight.tech where we’ll be continuing to cover some of Intel’s partners and what Intel is doing in this space.

So I want to thank you again, Sarah, for joining the podcast today, as well as our listeners. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Industrial Machine Vision for Manufacturing and Smart Cities

Machine vision applications promise greater efficiency, safety, and profitability—particularly for the industrial and smart city sectors—due to their ability to enhance inspection and quality control.

In factories, for example, automated optical inspection (AOI) can reduce manufacturing errors and increase productivity. And vision-based systems in smart cities can provide safer streets and better urban traffic control. But despite the broad range of potential use cases, these solutions can be difficult to implement.

“Industrial and outdoor urban environments are harsh, making it difficult to deploy industrial machine vision solutions in those settings,” says Kevin Lee, Senior Business Development Manager at Portwell, an industrial computing specialist that manufactures compact IPCs for machine vision applications. “In addition, there are tremendous demands for reliability and some strict space constraints in many industrial and smart city use cases.”

The good news is that modern embedded industrial PCs (IPCs), like Portwell’s WEBS-89I0, offer a computing platform that makes it possible to deploy machine vision solutions in even the most challenging scenarios. Rugged, flexible, and adaptable, these powerful edge computing platforms enable a range of new applications and already deliver value in multiple markets.

Embedded IPCs Unlock Machine Vision Benefits Worldwide

Portwell’s deployments in the EU and APAC regions are good examples of this.

In Japan, a large construction firm was looking for an automated solution to inspect and monitor building projects remotely. The company wanted to achieve technical oversight of field operations without the cost and inconvenience of sending an engineer or technician to the build site for manual supervision. But the environmental conditions were challenging, with temperatures on-site ranging from 5°C to 45°C.

Modern embedded industrial PCs offer a computing platform that makes it possible to deploy #MachineVision solutions in even the most challenging scenarios. @Portwell_US via @insightdottech

Portwell helped the company set up a remote monitoring solution based on WEBS-89I0, its fanless box PC, which could withstand the rigorous operating environment while ensuring the reliability of the system. Cameras installed on-site would help to supervise operations to ensure that proper procedures were being followed and that the project was proceeding as planned, with the IPC doing the preprocessing and then transmitting relevant data to the company’s Microsoft Azure cloud for further analysis.
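The preprocess-at-the-edge, analyze-in-the-cloud pattern described above can be sketched in a few lines. The sketch below is illustrative only, assuming frames arrive as 2D grayscale pixel arrays; the function names and the change threshold are hypothetical, not part of Portwell’s or Azure’s actual software. The idea is simply that the IPC compares incoming frames against a baseline and forwards only frames with significant change, cutting the bandwidth sent upstream.

```python
# Illustrative sketch of edge preprocessing: compare successive frames
# and forward only those that changed significantly, reducing the data
# transmitted to the cloud. Frames are modeled as 2D lists of grayscale
# pixel values; all names here are hypothetical examples.

def frame_delta(prev, curr):
    """Mean absolute pixel difference between two frames."""
    total = sum(
        abs(p - c)
        for prev_row, curr_row in zip(prev, curr)
        for p, c in zip(prev_row, curr_row)
    )
    pixels = len(curr) * len(curr[0])
    return total / pixels

def select_for_upload(frames, threshold=10.0):
    """Return indices of frames whose change versus the previously
    transmitted frame exceeds the threshold."""
    selected = [0]  # always send the first frame as a baseline
    baseline = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        if frame_delta(baseline, frame) > threshold:
            selected.append(i)
            baseline = frame
    return selected

# Example: three identical frames followed by a changed one.
# Only the baseline and the changed frame would be uploaded.
static = [[50, 50], [50, 50]]
moved = [[50, 50], [200, 200]]
print(select_for_upload([static, static, static, moved]))  # → [0, 3]
```

A real deployment would decode camera streams and push the selected frames to cloud storage; the filtering logic, however, stays this simple.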

After implementation, the firm had achieved the level of oversight required, and no longer had to spend time and money sending skilled supervisors to the job site.

In a second Portwell deployment in the Netherlands, a system integrator was attempting to implement a smart city solution for a municipal government. The SI and local officials were concerned about safety and security on the city’s streets, and wanted to develop an automated surveillance system to detect dangerous situations and alert the authorities when necessary.

But due to the setting, the environmental constraints were extremely challenging. Reliability was also a concern, as the potential for equipment damage to an outdoor solution was high, and it would be inconvenient and costly for the SI to send an engineer to repair a computer in the field.

Portwell helped the system integrator develop a machine vision security system using its fanless embedded industrial PC as the edge computing platform. WEBS-89I0’s fanless design was chosen to reduce the probability of malfunction, since fans are among the components that fail most frequently when a computer is in constant operation. With this, a network of security cameras was set up around the city. Cameras were connected to the embedded IPC for edge analysis, with algorithms programmed to detect behavior that would raise an alert. The IPC’s built-in SIM card slot also made it possible to route data to a remote control center over local cellular networks.
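An edge alerting rule of the kind described above might look like the following sketch. It is a simplified stand-in, not the SI’s actual detection logic: the `LoiterDetector` class, zone coordinates, and dwell threshold are all hypothetical. It raises an alert only when a detected position persists inside a restricted zone for several consecutive frames, which avoids firing on momentary passers-by.

```python
# Illustrative edge alerting rule: flag a detection that persists in a
# restricted zone for several consecutive frames. The detection inputs
# stand in for per-frame camera analytics; in the deployment described,
# a raised alert would be routed to the control center over cellular.

from dataclasses import dataclass, field

@dataclass
class LoiterDetector:
    zone: tuple            # (x_min, y_min, x_max, y_max) restricted area
    dwell_frames: int = 3  # consecutive in-zone frames before alerting
    _streak: int = field(default=0, init=False)

    def _in_zone(self, point):
        x, y = point
        x0, y0, x1, y1 = self.zone
        return x0 <= x <= x1 and y0 <= y <= y1

    def update(self, detections):
        """Feed one frame's detected positions; return True on alert."""
        if any(self._in_zone(p) for p in detections):
            self._streak += 1
        else:
            self._streak = 0  # target left the zone; reset the count
        return self._streak >= self.dwell_frames

# Example: a position lingers in the zone for three frames, then leaves.
det = LoiterDetector(zone=(0, 0, 10, 10))
frames = [[(5, 5)], [(6, 5)], [(6, 6)], [(50, 50)]]
print([det.update(d) for d in frames])  # → [False, False, True, False]
```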

Once deployed, city officials had their desired computer vision-based security solution—one that would require minimal maintenance and upkeep in the future.

Industrial Machine Vision: Flexibility and Reliability Speed Time to Market

Obviously, major differences exist between roadside traffic control systems, industrial AOI, and smart city safety solutions. The key to an embedded IPC platform that facilitates rapid development of diverse applications is flexible design and reliable, high-performance edge computing.

Portwell’s WEBS-89I0 embedded industrial computer, for example, offers a number of design features that make it easier for engineers and SIs to build for custom use cases.

Multiple USB and Gigabit Ethernet ports enable engineers to connect the WEBS-89I0 IPC to standard hardware devices like cameras; RS-232 and RS-485 ports offer extra connectivity for industrial equipment; and dual output ports provide a way to link the computer to displays. In addition, the computer’s compact footprint—a palm-sized 138mm x 102mm x 48mm—means it can be embedded into almost any solution without significantly increasing the overall size.

On the reliability front, Portwell’s technology partnership with Intel has been of great help in developing its embedded industrial PC. “Intel processors provide the balance of performance, stability, and energy efficiency needed to develop embedded applications,” says Lee. “Our partnership with Intel also gives us early access to next-generation processors, which helps us deliver market-leading solutions to our customers.”

For enterprises and SIs attempting to develop industrial machine vision solutions, this blend of powerful, reliable computing and flexible, adaptable design makes it easier to get to market faster—even when building highly customized solutions for buyers.

Collaboration Enables Wide Range of Industrial Machine Vision Apps

It seems likely that organizations in nearly every sector will look to incorporate computer vision technology into their operations in the years ahead.

In part, this is because implementation is now easier than ever, as modern AI technologies solve machine vision engineering problems more efficiently than older approaches.

“In the past, something like a factory AOI system for defect detection would have been tremendously complex to build using traditional programming methodologies,” says Lee. “But given the state of AI computer vision today, such a system can be designed and implemented far more quickly.”

In the smart city and industrial sectors in particular, availability of rugged, powerful embedded IPCs should help overcome adoption hurdles.

“Everyone in the smart city and industrial space wants machine vision applications because the business case is so clear,” says Lee. “But until recently, the biggest challenge was finding a suitable edge computing platform to implement those solutions. We believe we’ve overcome that obstacle.”


This article was edited by Christina Cardoza, Editorial Director for insight.tech.