Exploring Cutting-Edge Manufacturing AI Advancements

The Industrial Revolution changed manufacturing. The rise of computers caused another upheaval of systems and processes. Now manufacturing is being revolutionized again by digital transformation and the advent of AI. This new frontier was on display in Germany this April at the Hannover Messe conference (HMI), one of the first post-Covid gatherings to not just talk about all the new possibilities available in the industrial environment but to showcase them.

At HMI this year were Ricky Watts, Industrial Solutions Director at Intel, and digital-trend expert Teo Pham. They discuss what they saw in Hannover—including ChatGPT—what it means for the manufacturing space—yes, ChatGPT in the manufacturing space—and how even the most exciting tech innovations could be pretty pointless if they’re not simple for users to implement (Video 1).

Video 1. New industrial opportunities, tools, and technologies coming to the factory floor. (Source: insight.tech) 

Based on what you both saw at the event, where do you see manufacturing heading?

Ricky Watts: In terms of technology, I think there are three areas that excite me and, to some extent, concern me a little. The first is AI—ChatGPT in particular. Some of the larger companies in this space are already using this technology to make manufacturing more efficient. To be honest with you, I was really surprised to see how much of it was out there, and how advanced it was.

Another thing is this 3D reality—omniverse, metaverse, things like that—how immersive technology is going to be used in the future. Can I design and build a factory to change outcomes in a 3D virtual reality? And then use ChatGPT and AI to create digital twins that can create physical realities in manufacturing as well?

The last thing I noticed was a lot of robotics at the show. Robotics is everywhere in manufacturing, for good reason, of course—for the logistics and the repetitive tasks we often see in manufacturing. But one thing that was particularly interesting to me was robots building other robots to drive outcomes. One robot is given a task, and then another robot builds or optimizes the first to drive the outcome of that task. The second robot uses AI to learn what it needs to do, then sends off a command to build or design a new tool for the first robot, thereby optimizing it.

Teo Pham: I was very surprised at the variety of topics and participants at this exhibition. Because you expect robots; you expect hardware manufacturers; you expect semiconductor manufacturers. But then you also had software companies, consultancies, and cloud-service providers. It just goes to show how varied this whole space is, and that it’s a lot more than just physical devices. This is how you create exciting new applications.

Ricky mentioned the metaverse. I saw companies like Siemens and Microsoft that were promoting things like the industrial metaverse, creating new technologies that make production more immersive, but also a lot cheaper in the sense of using digital twins to run these amazing simulations to really test things out in the digital space before needing to create them as a physical unit.

Which AI applications in particular got you really excited for the future of manufacturing?

Ricky Watts: That use of ChatGPT was one in particular. In the world of manufacturing, we have this thing called a manufacturing execution system—MES—or a programmable logic controller—PLC. It’s a device or an appliance that basically runs the manufacturing. PLCs are programmed in a standard language defined by IEC 61131. One demo I saw was ChatGPT being used to build that code. Typically, a manufacturing engineer would write that object code, and it might take that person weeks or months to do it. ChatGPT was doing it in, I’m going to say, minutes or seconds.

I’m going to stress that what was being done in that demo is early days. They were at pains to point out that there were some mistakes in the code, but it won’t be very long before the accuracy, and the ability to deploy that code directly to those machines, is really going to become relevant. Manufacturing is very much a structured environment, driven around a set of standards. But as we start to go into this new world, the possibility of being able to do that is really exciting.
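
As a rough illustration of the kind of workflow Watts describes (not the demo he saw), here is a minimal sketch that uses the OpenAI Python client to draft IEC 61131-3 Structured Text. The prompt, the model choice, and the interlock scenario are all invented for illustration:

```python
# Minimal sketch of LLM-assisted PLC code generation (hypothetical example;
# not the demo shown at HMI). Assumes the OpenAI Python client is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write IEC 61131-3 Structured Text for a conveyor interlock: "
    "start the motor only when the guard is closed and the e-stop is clear; "
    "stop immediately if either condition is lost."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; the choice here is an assumption
    messages=[{"role": "user", "content": prompt}],
)

# The returned Structured Text still needs review by a controls engineer
# before it goes anywhere near a PLC; as Watts notes, the demo code had mistakes.
print(response.choices[0].message.content)
```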

Teo, where do you think the new opportunities are for manufacturers?

Teo Pham: Coming back to artificial intelligence, it just allows you to make processes faster and cheaper. There’s been a lot of talk about ChatGPT, but there are also AI tools that can generate images or blueprints or videos, websites or applications. I think the cost of generating 80% of a solution will go down to practically zero. But then obviously you will still need some very experienced people to get you from 80% to 100%. There will be some very fancy applications, like AI 3D modeling, for example, but I think even for fairly boring stuff like documentation or translation this will be very helpful because those tasks can be done within minutes.

What kind of processing power do you need to take advantage of these opportunities?

Ricky Watts: AI really relies on data. And once you’ve abstracted it, there’s the learning part and then the inference part. CPUs, GPUs, and also FPGAs are always involved.

A lot of the early use cases of AI in manufacturing have been visual ones: I put a camera into a manufacturing environment to analyze something, and then I train a model around the images I get. Let’s say I’ve got a production line of a bottle with a label on it. I’ve got a camera over those bottles, and I want to know if the labels are on correctly. So images are created around that, and then we would train models, generally in a GPU environment because it requires a lot of intensive processing.

Now I have something that knows what’s good, knows what’s bad. But in a manufacturing environment, I can’t keep learning all the time; it’s too difficult. So then comes the inference stage: I’m using the model, and I want to apply it. That’s where CPUs come into play, because it gets very tactical at that point: the compute sits very, very close to where the manufacturing is happening.

The training is done where you’ve got a lot of compute power, typically in a cloud environment. The inference in most cases is done where the manufacturing is, at the edge. So you’ve got CPUs and GPUs, and both have an area of expertise. But what we’re starting to see from an Intel perspective is integrating them. You’ve seen it with some of our new technologies, particularly the latest Xeon® chip, the Sapphire Rapids chip.
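
A minimal sketch of that split, with a stand-in model and fake data rather than a real inspection pipeline: train where the heavy compute is (a GPU, if available), then move the trained model to a CPU for inference.

```python
# Toy illustration of the train-on-GPU / infer-on-CPU split (stand-in model
# and synthetic data; not a production inspection pipeline).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # training side

# Tiny stand-in classifier: flattened 64x64 grayscale image -> good/bad label
model = nn.Sequential(
    nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 2)
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake labeled batch standing in for bottle-label images
images = torch.randn(32, 1, 64, 64, device=device)
labels = torch.randint(0, 2, (32,), device=device)

for _ in range(10):  # "training in a GPU environment"
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Deployment side: move the trained model to CPU for inference at the edge
model = model.to("cpu").eval()
with torch.no_grad():
    new_image = torch.randn(1, 1, 64, 64)
    prediction = model(new_image).argmax(dim=1)
print("label OK" if prediction.item() == 0 else "label misapplied")
```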

“#AI really relies on #data. And once you’ve abstracted it, there’s the learning part and then the inference part. CPUs, GPUs, and also FPGAs are always involved.” – Ricky Watts, @intel via @insightdottech

But now we’re starting to see compute platforms in these environments go from edge to cloud. In these environments there are two sets of data: the video I mentioned, but much more pervasive in manufacturing is time series data. Manufacturing uses what we would call fixed-function appliances—a machine, a robot, a conveyor belt—that are generating data that is not vision data. It could be heat, it could be pressure, it could be vibration, and so on. That type of data is optimized to run on CPUs at the edge as well. So you can do the training and the inference at the CPU, at the edge, where data integrity and data sovereignty are becoming very important.
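
To make the time-series case concrete, here is a small sketch of the kind of train-and-infer-on-CPU workload Watts describes, using scikit-learn's IsolationForest on synthetic sensor readings. The sensor values and threshold are invented:

```python
# Sketch: training and inference of a time-series anomaly detector entirely
# on a CPU at the edge (synthetic data; illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" readings: [temperature C, pressure bar, vibration mm/s]
normal = rng.normal(loc=[60.0, 2.0, 1.5], scale=[2.0, 0.1, 0.2], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)  # training happens on the same edge CPU

# New readings from the machine; the last one shows a vibration spike
readings = np.array([
    [61.2, 2.05, 1.6],
    [59.8, 1.98, 1.4],
    [62.0, 2.10, 6.8],
])
flags = detector.predict(readings)  # -1 = anomaly, 1 = normal
for reading, flag in zip(readings, flags):
    print(reading, "ANOMALY" if flag == -1 else "ok")
```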

On the CPU side I mentioned Sapphire Rapids. We’ve also got a new portfolio of GPUs coming out. It’s early days for Intel in that space. But we’re learning fast, and we’ve got more products coming out over the next few years. I think for us it’s going to be about integrating the hardware solutions, and then on top of that providing a uniform architecture for developers in the AI space to access those technologies, and we’ve built a number of toolkits and optimizations around that.

Irrespective of CPU, GPU, or FPGA, we optimize underneath; you tell us what the workloads are, and then we’ll deploy them into the right silicon platform at the edge and provide a uniform capability to take them to the cloud as well.

Can you expand on the benefits that manufacturers will gain from moving to the edge?

Ricky Watts: Manufacturing is a very competitive business—whether it’s physical items, like cars; or processes, like with chemical manufacturers. And the use of data in these environments can offer a very competitive advantage. It’s really about whether they can apply the technology in a business-driven outcome.

It’s very easy in our environment to forget that at the end of the day it’s not about the technology; it’s about the outcome. Go back to my bottle example. If I’m putting through a hundred thousand bottles a day, and let’s say 5% of them are inaccurate, I might be throwing away 5,000 bottles a day. That’s a sustainability issue; that’s a profitability issue. If I can reduce that failure rate to 1%, that has a massive impact on the performance of that factory.

What we need to do in the technology industry is make it easier for manufacturers to consume the technology. Manufacturers want to use it, but they’re not experts in AI, and they don’t always have data scientists. And we’ve got to make sure that it’s accessible for everybody in manufacturing, not just large-scale manufacturers that do have huge departments of engineers and data scientists. We are trying to give them all the easy button.

Teo Pham: When we talk about the implementation of AI, I think one of the decisions we have to make is about whether to do it with edge computing or cloud computing. Obviously there are some advantages to edge computing: It reduces latency. In terms of data privacy, we don’t have to send it to a cloud. On the other hand, we need to invest more in hardware, which can be costly and takes up space.

What are your thoughts on the “edge versus cloud” debate?

Ricky Watts: There are distinct advantages with both scenarios. Getting data into the cloud is very expensive because the volumes of data are massive. There are considerations around regulation, data sovereignty, privacy, security, etc. But there are a lot of advantages to doing training in the cloud and doing inference at the edge. Then as more and more powerful compute comes down to the edge, not only the inference but the training can be done there as well. So in my mind more processing will be going to the edge.

So, what’s coming next for this space?

Teo Pham: People say that we are witnessing the iPhone moment of artificial intelligence. Even before the iPhone came out in 2007, we had phones, but still the iPhone changed everything. Today we can’t even imagine a world without the iPhone, without smartphones, without mobile apps.

Similarly, AI has been around for 50 or even 60 years, but I think we are currently in this kind of virtuous cycle: We have lots and lots of data; we have the necessary compute; we have the models; and we have very easy-to-use interfaces like ChatGPT. So much progress is being made that maybe even in six to twelve months the whole space could be unrecognizable. We’re in for a pretty fun ride.

Ricky Watts: Ultimately manufacturing has to produce goods. So what I see is that manufacturers are focused on the new technologies, but they also need to make sure that the manufacturing environments they’ve got today are going to be there for the next few years.

Here at Intel, we’ll go on making sure that the manufacturers are generating the goods we need; and, if it’s energy, that the lights are kept on. We want to make sure that the transformations to come are smooth and integrated, and that there is as little disruption as possible.

Related Content

To learn more about smart manufacturing, read Hannover Messe 2023: The Next Phase of Smart Manufacturing and listen to How Smart Factories are Revolutionizing the Industrial Space. For the latest innovations from Intel, follow them on Twitter and LinkedIn, and follow Teo on Twitter at @teoAI_.

 

This article was edited by Erin Noble, copy editor.

The Future of Telecom: Open RAN and vRAN Take Center Stage

Everything is connected. But that may have been truer during this year’s Mobile World Congress (MWC) than anywhere else! As might be expected, mobile and connectivity were popular topics, especially along the lines of Open RAN and vRAN. But what do those terms actually mean? How are they connected? (How are they not?) And how do they connect with other technological innovations—those hot off the press, and those still in the future?

We talk to two people who were there at MWC, and who have a lot of expertise between them in the mobile and connectivity space (Video 1). Randy Cox is the Vice President of Product Management, Cloud and Industry Verticals, at mission-critical intelligent systems software provider Wind River; and Brandon Lewis is Editor-in-Chief of Embedded Computing Design.

Video 1. How Open RAN and vRAN are providing interoperability, performance, and reliability to the telecom industry. (Source: insight.tech)

What are some of the trends around Open RAN and vRAN?

Brandon Lewis: Networks have become incredibly complex systems over the course of time, and so network equipment manufacturers have been supplying these highly integrated proprietary solutions—a lot of different components that make up a network, with a lot of accelerated, specialized hardware and software stacks. And as we move into 5G—which is about providing more bandwidth and higher capacities and really pervasive connectivity everywhere—that really needs to be able to scale.

This is where this concept of Open RAN (Radio Access Network) comes in. It’s really designed around using commodity hardware, commodity servers and platforms, and open interfaces so that you can put different software on top of it. And then the vRAN, or virtualized RAN, part of it runs a lot of those specialized functions that used to be in hardware as software functions. With it, you’re going to be able to scale your networks much further, and get a lot more flexibility out of the stack and a lot more players into the ecosystem.

Randy Cox: But in one sense, vRAN is not about ORAN. The existing incumbents, for example, could do a virtualized network today without it being an open network. ORAN is an architecture, a disaggregated network where you have open interfaces that multiple vendors can participate in and then serve those different network elements. It just so happens that the O-RAN specification does include a virtualized RAN.

As Brandon said, traditional vendors in the telecom space, like Nokia and Ericsson, have typically provided custom hardware, custom software—proprietary equipment for the carriers that provided a very specialized and costly solution, and which required the carriers to stick with those vendors for longer periods of time. And Open RAN really disaggregates the network and allows new players to enter the market, which drives down cost and drives up innovation and flexibility in terms of picking the best-in-class type of suppliers.

Then the O-RAN Alliance is an organization for the ecosystem where all of the vendors and operators that want to can participate: to define the spec, to do plugfests, and to align on different activities in order to proliferate an ORAN-type architecture and accelerate it into the market as soon as possible.

And this focus on vRAN and ORAN is mainstream now; it isn’t just investigation or feasibility analysis any longer but real planning for execution. This means more detailed customer and partner discussions and plans, and more RFPs being executed at this time. If you were at MWC, you would not have missed the focus on vRAN and ORAN.

How is Wind River addressing telecom’s need for scale and flexibility?

Randy Cox: One way is our single-core capability on Sapphire Rapids, the 4th Generation of the Intel® Xeon® platform. Prior to December of last year, our cloud platform solution basically took up two cores on a single server. We optimized that down to a single core, which is obviously a 50% reduction in terms of the resource usage on a given server. It has great capabilities for the application or workload that’s being performed on our platform.

The second topic, which is also a very hot one in the industry right now, is around energy efficiency. We’ve been working very closely with Intel, as well as a couple of other partners, to reduce the amount of power consumption being used at a cell site. We are now stepping into the next phases of bringing this into commercial capability in the second half of this year. So, we’re actually able to manipulate and change the C-states and P-states of the CPU itself in order to optimize and reduce the amount of power consumption being used at the cell site.

I think there are six different levels of C-states in a processor—one end basically being full power, and the other end being the lowest power consumption possible, based on the use case of the cell site. We can change the C-states and P-states for the application as needed, reducing the power, say, between the hours of 3:00 a.m. and 6:00 a.m., when a cell site might get very little usage in a lot of places. We can work with the RAN software to monitor and determine the amount of usage or number of users on the cell site, and adjust the C-states or P-states to reduce the power during that period of time, thereby really lowering the total cost of ownership for operators.
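
Here is a back-of-napkin sketch of the policy Cox describes. The helper functions (active_user_count, set_power_profile) are hypothetical stand-ins, not Wind River or Intel APIs; real C-state and P-state control runs through the platform and the RAN software.

```python
# Illustrative power-state policy for a cell site (hypothetical helpers;
# real C-state/P-state control happens through the OS and RAN software).
from datetime import datetime

LOW_TRAFFIC_HOURS = range(3, 6)   # 3:00 a.m. to 6:00 a.m.
LOW_USER_THRESHOLD = 10           # assumed cutoff; tune per site

def active_user_count() -> int:
    """Stand-in for a query to the RAN software for current users."""
    raise NotImplementedError

def set_power_profile(profile: str) -> None:
    """Stand-in for driving CPU C-states/P-states via platform tooling."""
    raise NotImplementedError

def adjust_power() -> None:
    hour = datetime.now().hour
    users = active_user_count()
    # Step the CPU into deeper sleep states when the site is quiet,
    # back to full performance when traffic returns.
    if hour in LOW_TRAFFIC_HOURS and users < LOW_USER_THRESHOLD:
        set_power_profile("deep-idle")    # deeper C-states, lower P-states
    else:
        set_power_profile("performance")  # full power
```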

“While we’re still getting traction in #5G and getting those commercial deployments, we really want to be able to help the industry and the ecosystem accelerate #ORAN so that we are set up for #6G.” – Randy Cox, @WindRiver via @insightdottech

Tell us about some of the new capabilities in the 4th Gen Intel® Xeon® scalable processors.

Brandon Lewis: Every new release of an Intel processor comes with performance improvements, right? I think the 4th Gens have up to 60 cores—something insane like that. There are also a bunch of accelerators that have been integrated with this new generation of processors. One of them is Intel® QuickAssist Technology, a cryptographic-workload offload so that your CPU cores don’t get bogged down.

Another one is a dynamic load-balancing feature. In software-defined networking you have a load balancer, which is a piece of equipment that basically spreads traffic workload across the different equipment so you can packet process efficiently and not get a bunch of lag and buffer and latency, which obviously impacts the performance of the network as a whole.

The dynamic load-balancing feature on the 4th Generation Xeon processors basically does for the chip what a load balancer does at the network level: You’re spreading the workload of packet processing across the different cores and across the memory of the chip so that you’re not going to be subjected to any bottleneck spike. Imagine the chip as sort of a microcosm of the network as a whole in the way that workload is balanced.
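
A toy software analogy of that idea: spread batches of packet work across worker processes so no single core becomes the bottleneck. This is ordinary Python multiprocessing, not the hardware feature itself.

```python
# Toy analogy for load balancing across cores (plain multiprocessing;
# the 4th Gen Xeon feature does this in hardware, below the OS).
from multiprocessing import Pool

def process_packet_batch(batch: list) -> int:
    """Stand-in for per-packet work: here, just sum payload sizes."""
    return sum(len(packet) for packet in batch)

if __name__ == "__main__":
    # Fake traffic: 8 batches of "packets" of varying size
    batches = [[bytes(64)] * (100 * (i + 1)) for i in range(8)]
    with Pool(processes=4) as pool:  # 4 worker "cores"
        results = pool.map(process_packet_batch, batches)
    print("bytes processed per batch:", results)
```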

A third feature is Intel® vRAN Boost, which really speaks to what Randy was saying before, in that it optimizes the processor for vRAN workloads so that you basically get twice the performance for the same power consumption—or half the power consumption to get the same performance as before, if you prefer. And the game is all about reducing cost, because power consumption is a massive cost for these telco networking data centers. So the more you can optimize around how much power you’re using—whether it’s through P-states and C-states, or on the chipset itself through features like vRAN Boost—you’re going to win. 

How can you leverage expertise from partners to be successful in this space?

Randy Cox: I think by definition ORAN is really fostering the environment of partnerships: The definition of ORAN is to provide more capabilities by more vendors. Wind River has a number of great partners that we’re working with now; for instance, we work with Intel and Samsung very closely on a weekly and daily basis. This is critical, because Wind River Cloud Platform finds itself in the center of the stack.

On the one hand, we have to integrate in the southbound direction with the hardware—Intel, as well as any of the server manufacturers, such as Dell, HPE, or whatever that COTS hardware server may be. In September of last year we shipped our first commercially available Infra Block product through Dell; it’s basically the COTS hardware server along with Wind River software integrated as a single product.

We established this relationship with Dell where we have a complete stack between the hardware, the accelerator itself, and our software; it’s fully integrated, fully tested, and works out of the box. The only thing that needs to be integrated then is the actual RAN workload that would happen with the customer in the field. We’re really trying to make this as easy as possible in this ORAN environment.

But we also have to integrate in the northbound direction, with that RAN workload—or any other workload. On the RAN-workload side we have a strong relationship with Samsung; we have a partnership with JMA; we’ve integrated with Mavenir. Right now we are establishing relationships with Ericsson and Nokia as well.

What key takeaways or thoughts about the future would you each like to leave us with?

Brandon Lewis: It’s really important that everybody check out the cool new features that are available on the 4th Gen Intel Xeon scalable processors. There are also a lot of enabling tools available to developers in the ecosystem, like the Data Plane Development Kit. And we write about this topic often at both Embedded Computing Design and insight.tech.

Randy Cox: I’m really pleased that Wind River has made as much progress as we have in this space. But while we’re still getting traction in 5G and getting those commercial deployments, we really want to be able to help the industry and the ecosystem accelerate ORAN so that we are set up for 6G when we get there. There’s tons of work to do on 5G, no question about it, but 5G in this vRAN/ORAN environment is really setting things up for a 6G environment.

And for anyone who’s been doubting ORAN, or is somewhat skeptical about it—it’s real. And Wind River is an example: We’re performing well and deployed in commercial service at scale. I’m looking forward to enabling the rest of the industry to really move forward in this space.

Related Content

To learn more about Open RAN and vRAN, read MWC 2023: Where IoT Networking Meets the Intelligent Edge and listen to The Trend Towards Open RAN and vRAN: With Wind River. For the latest innovations from Wind River, follow them on Twitter and LinkedIn; and follow Brandon at @TechieLew.

 

This article was edited by Erin Noble, copy editor.

ADMS Microservices Fuel the Distributed Energy Landscape

Picture everyday highway traffic. When vehicles move along at the same speed, the traffic is predictable and flows a lot easier. But changing the speed, direction, and variety of vehicles can easily create gridlock.

A similar pattern of disorder is unfolding in the electrical grid system. Today’s grid is teeming with complexities. A new landscape filled with unpredictable events and load requirements is creating the need for new capabilities.

Climate events increase the risk of lasting power outages and damage to already aging infrastructure. Smart buildings, city infrastructure, and residential homes, equipped with solar panels, are creating two-way traffic—consuming power while also generating electricity that flows back into the grid. And the electrification of everything, including vehicles, means increased unpredictability of demand and loads.

Advanced Distribution Management Systems Microservices Empower Agility

This new distributed energy landscape is keeping utilities operators up at night and is making grid visibility a valuable currency, says Carlos Mora, Grid Control Product Manager at Minsait, an Indra company. Managing decentralized generation and consumption of electricity necessitates increased transparency of grid operations through advanced distribution management systems (ADMS), he adds.

Minsait’s modular ADMS solutions deliver the much-needed transparency and agility that utilities need in the form of microservices, Mora says. These microservices enable the generation of business solutions in the form of a suite of small applications, each executing its own process autonomously but in coordination with the others.

Minsait offers a whole suite of modular microservices applications—from grid optimization, monitoring, and performance to demand forecasting and more. Many of the company’s microservice solutions operate in the cloud, and utilities can pick from a menu of desirable options suited to the biggest challenges they face. Bite-size microservices offer an additional advantage: They allow utilities to apportion financial resources where they’re needed most.

Minsait offers a whole suite of modular #microservices applications—from #GridOptimization, monitoring, and performance to demand forecasting and more. @IndraCompany via @insightdottech
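
As an illustration of the pattern, where each capability is a small, independently deployable service with its own API, here is a minimal sketch using FastAPI. The endpoint, schema, and numbers are invented and are not Minsait's actual interfaces.

```python
# Minimal sketch of one ADMS-style microservice (invented endpoint/schema;
# illustrative of the pattern, not Minsait's product).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demand-forecast-service")

class ForecastRequest(BaseModel):
    feeder_id: str
    horizon_minutes: int = 15  # grids now recalibrate on ~15-minute windows

class ForecastResponse(BaseModel):
    feeder_id: str
    expected_load_mw: float

@app.post("/forecast", response_model=ForecastResponse)
def forecast(req: ForecastRequest) -> ForecastResponse:
    # A real service would call a trained model here; this returns a stub.
    return ForecastResponse(feeder_id=req.feeder_id, expected_load_mw=4.2)

# Each microservice runs as its own process (e.g., `uvicorn service:app`),
# so a utility can deploy, scale, or upgrade it without touching the rest.
```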

Modular ADMS in Action

A European utility, for example, had to improve their fault location, isolation, and service restoration (FLISR) capabilities, especially since revenues are directly tied to the number of outages: The greater the number of outages, the less the revenue. Instead of upgrading the entire ADMS, Minsait came up with the solution of upgrading only the FLISR module, which could operate in parallel along with the main system. With the modular solution, the utility was able to decrease the number of times the system was in permanent fault (out of service for more than three minutes).

Mora has been pleasantly surprised at customer willingness to embrace bite-size modular microservices. “When we are breaking something into pieces that has been traditionally monolithic, especially when it’s dealing with critical infrastructure, it’s reasonable to be afraid, but we have seen utilities open to these sorts of solutions,” Mora says. After all, a modular approach to ADMS helps deliver efficiencies that utilities are looking for without having to rip and replace entire systems.

AI and the Grid

A modern ADMS does a lot more: Much like popular apps divert traffic to side roads in the case of high congestion, the ADMS can change the “topology” of the grid to accommodate rapidly fluctuating supply and demand conditions.

The underlying assumption is that the grid is capable of handling these varying loads and demands; it just needs help in diverting that traffic so no one section buckles under pressure. The monitoring of loads to ensure smooth operations is also becoming more dynamic: It used to be that grids did a calibration check every hour, but now that window has narrowed to a mere 15 minutes, Mora points out.

Such rapid orchestration calls for autonomous operations under the direction of AI. Intel empowers Minsait to conduct AI inference at the edge using the Intel® Distribution of OpenVINO™ toolkit.

“You need to have a certain level of grid automation so that those applications can not only take decisions but actually operate the grid autonomously,” Mora says. “There’s a lot of data being aggregated that needs to be analyzed and optimized, which is what AI does best. AI is able to predict more accurately, and in real time, what customer demand or electricity generation will do to loads. It’s definitely the way to go to improve forecasting.”
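
Here is a minimal sketch of what edge inference with the OpenVINO runtime looks like in Python. The model file, input shape, and telemetry are placeholders, not Minsait's deployment:

```python
# Minimal OpenVINO inference sketch (placeholder model path and input;
# illustrative of edge inference, not Minsait's deployment).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("grid_load_model.xml")           # placeholder IR model
compiled = core.compile_model(model, device_name="CPU")  # edge CPU target

# Placeholder input: a recent telemetry window shaped to the model's input
telemetry = np.random.rand(1, 96).astype(np.float32)
result = compiled([telemetry])[compiled.output(0)]
print("predicted load:", result)
```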

When handling a lot of consumer data, cybersecurity is a concern for business operations. “The Intel® Trusted Platform is the answer to secure the identity of the elements that participate in grid operation (Identity of Things). This secures communication between distributed devices that are prone to cyberattacks,” Mora says.

The decentralized grid is an essential part of sustainability. “Most of our solutions can lead to a greener tomorrow in the sense that they allow for more renewable energy to flow into the grid,” Mora says. And using an AI-based ADMS system with modular microservices enables utilities to be prepared for the complexities of the future.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Smart Restaurant Technologies Optimize QSR Operations

Faced with rising labor costs and an acute shortage of workers, today’s quick-serve restaurants are struggling to maintain their legendary efficiency. Challenges are exacerbated as orders flow in from new mobile apps and delivery services, making kitchen operations more complex. There is little time to train staff or monitor equipment, and service delays risk alienating customers in a highly competitive market.

To alleviate some of the pressure and improve customer satisfaction, many restaurants are adding self-service kiosks and digital platforms connecting front-end ordering with back-end operations. Lightning-fast order transmission and automated sorting can save workers time and help them avoid mistakes. And by collecting and analyzing restaurant information, managers can gain a better understanding of customer preferences, helping them create more effective promotions and make better stocking decisions.

QSR Automation Boosts Self Service and Lowers Costs

Quick-serve restaurants have experimented with self-service kiosks for years, but they gained traction during the pandemic, when customers displayed a preference for fast, contactless ordering and payments.

“In the past two or three years, we’ve seen a huge move to kiosks and digital platforms,” says Terry Wu, Senior Sales Director of Prox Systems Co., Ltd., a division of Protech Systems Group that provides hardware, software, and servicing solutions for the restaurant and hospitality industry.

Kiosks display menu items on a digital touchscreen, giving customers a variety of contactless payment options, including merchant loyalty cards and third-party systems like Apple Pay and Venmo, as well as credit cards, debit cards, and cash.

With the Prox Restaurant Smart Service Solution, all orders—whether they’re from a kiosk, the service counter, the drive-through, a restaurant or delivery company app—are routed to a single digital platform. They are quickly analyzed by an Intel processor-based computing platform and sent to a digital screen in the kitchen. Automated order transmission and sorting avoids errors and gives staff instant instructions for achieving maximum preparation efficiency. When orders for in-store pick up are ready, employees place them in digital lockers, where customers can retrieve them by scanning a QR code (Video 1).

Video 1. QSR customers can place orders at a self-service kiosk that transfers information to kitchen staff, who place prepared food in digital lockers for pick-up. (Source: Prox Systems Co, Ltd.)
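
A simplified sketch of that routing-and-sorting idea: orders from any channel are normalized into one queue and ordered for the kitchen display. The channel names and priority rules are invented for illustration, not Prox's actual logic.

```python
# Simplified order-routing sketch (invented channels and rules; not Prox's
# actual platform logic).
import heapq
import itertools
from dataclasses import dataclass, field

CHANNEL_PRIORITY = {"drive-through": 0, "counter": 1, "kiosk": 1, "delivery-app": 2}
_sequence = itertools.count()  # tie-breaker keeps FIFO order within a priority

@dataclass(order=True)
class Order:
    sort_key: tuple = field(init=False, repr=False)
    channel: str = field(compare=False)
    items: list = field(compare=False)

    def __post_init__(self):
        self.sort_key = (CHANNEL_PRIORITY.get(self.channel, 3), next(_sequence))

kitchen_queue = []
for order in [
    Order("kiosk", ["fried chicken", "bubble tea"]),
    Order("drive-through", ["fries"]),
    Order("delivery-app", ["curry"]),
]:
    heapq.heappush(kitchen_queue, order)  # one queue for every channel

while kitchen_queue:  # what the kitchen display would render, in order
    nxt = heapq.heappop(kitchen_queue)
    print(nxt.channel, nxt.items)
```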

The Secret Sauce: Quick-Service Restaurant Software

Information about transactions and menu selections is sent to the cloud, where managers can analyze it to discover trends. Restaurants can also install computer vision cameras at the kiosks to learn more about customer demographics, Wu says. All sensitive information, including facial images, is removed before data is transferred to the cloud.

By analyzing customer food selections, managers can make better menu decisions, culling unpopular items and stocking more ingredients for the food people want. A dashboard can integrate information about holidays and weather forecasts to help stores predict demand and better allocate staff.

Analytics can also help stores fine-tune promotions, which can be displayed across customer touch points, including the kiosk screens. Screens alone increase the chances for upselling success, Wu says.

“In our experience, #kiosks generate 20% more upselling revenue than counter sales.” – Terry Wu, Prox Systems Co. Ltd. via @insightdottech

“Customers feel less pressure standing in front of a sign than responding to someone at the counter. In our experience, kiosks generate 20% more upselling revenue than counter sales.”

To relieve busy employees from the burden of managing equipment such as kiosks, printers, and digital signs, restaurants can connect it to Prox Eye, a cloud-based monitoring software platform.

“Midsize and small restaurants usually don’t have technical staff. Prox Eye acts as their IT department,” Wu says. Prox technicians can fix machines remotely and install updates after hours. Using predictive analytics, they can often anticipate and solve problems before a disruptive breakdown occurs.

Customizing Smart Restaurant Technology

Restaurants, coffee shops, and teashops use kiosks and digital systems in different ways. Prox can help them decide which technology to upgrade and where to position kiosks for best efficiency. For example, the company worked with a popular fried chicken chain in Taiwan to install kiosks and add an online ordering option. Even with more orders coming in, the restaurant was able to provide faster service with fewer counter staffers—reducing labor by 20 to 30%, Wu says.

Managers can respond to changing customer preferences more easily, updating menus on digital screens at dozens of stores with a click. Digital screens also eliminate printing costs and make menu management easier for employees.

They can even help restaurants introduce new products. For example, the chicken restaurant chain operates a separate network of bubble tea shops. Managers wanted to sell teas at the restaurants as well, but efforts faltered when kitchen staff, accustomed to preparing fried chicken, had trouble remembering complicated drink recipes. Sales are improving since Prox created detailed instructions that appear with drink orders on kitchen displays.

As technology evolves, Wu believes restaurants will discover new ways to use digital applications. In the next two to three years, he envisions kiosks replacing touchscreen menus with voice-enabled chatbots powered by generative AI systems, like OpenAI’s ChatGPT or Google’s Vertex AI.

“Customers won’t need to use a screen. They can just ask, ‘What is the spice level of the curry? What are the ingredients?’” Wu says. As innovations like these draw customers, he believes restaurants’ appetite for digital and self-service systems is bound to grow. “This is a large market with enormous opportunities.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

The Impact of Virtualization on Video Analytics: With BCD

Did you know the average bare-metal physical security hardware system uses only 60% of available CPU resources and 45% of memory? As more industries transform their video camera systems into intelligent solutions capable of providing valuable insights, they could be underutilizing their resources.

Fortunately, by adopting a virtual architecture, businesses can optimize their system resources by consolidating them within a virtual environment. This approach not only maximizes resource utilization but also reduces reliance on physical hardware, sometimes even by up to 50%.

In this episode, we delve into the profound impact of virtualization on the video analytics field. We explore strategies to overcome implementation challenges and discuss why businesses should seriously consider making this transition sooner rather than later.

Listen Here

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: BCD

Our guest this episode is Darren Giacomini, Director of Business Development at BCD, a video storage solution provider. Darren has more than 16 years of experience in the IT field, specifically designing, implementing, and troubleshooting LAN and WAN infrastructure. Prior to his current role, he was the Director of Networking for BCD and Avaya, and a Senior Network Systems Engineer for Pelco.

Podcast Topics

Darren answers our questions about:

  • (2:24) The evolution of physical security systems
  • (4:06) Challenges with obtaining valuable video data analytics
  • (7:44) The role of virtualization in providing valuable insights
  • (12:20) How to successfully make the move toward virtualization
  • (15:17) Lessons learned from others in the industry
  • (17:42) Partnerships backing the move toward virtualization
  • (20:50) The future of virtualization for businesses

Related Content

For the latest innovations from BCD, follow them on Twitter at @BCDvideo and on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about virtualization in the video space with Darren Giacomini from BCD. But before we get started, let’s get to know our guest a bit more. Darren, welcome to the IoT Chat. What can you tell us about yourself and what you do at BCD and what BCD is?

Darren Giacomini: So, BCD is a manufacturer of servers and storage and networking equipment for the physical security industry. And we specialize in taking what you might consider like a standard-production Dell server, and really kind of putting the modifications into it to actually make it adapt to the physical-security market.

So, despite the fact that you can get servers and you can get storage and workstations and whatnot from multiple locations, unfortunately most of those are more often than not catered to the IT environment—meaning that in an IT world you have a lot of data that’s stored in a repository or data center somewhere. And while that may be a high-availability application, or something where you want to make sure that the data’s always available, it’s also a complete reverse paradigm—meaning that the data’s going outward to a select few individuals that request it at any given time.

So when you build your hardware specifically for that, the things that ship from the major manufacturers are set up for that type of environment. Unfortunately, physical security, you have hundreds and thousands of cameras and IoT devices simultaneously bringing data inward. What we do is we specialize in redesigning and re-implementing those particular devices to make sure that they’re optimized for that type of application.

Christina Cardoza: Great. And I’m sure this is where a lot of the talk around virtualization is going to come in, but I wanted to start off the conversation with something you mentioned: When you have physical security, you have all this data coming in from all of these different camera points, and the use of cameras has evolved so much, even beyond the security aspects for businesses. So I’m curious: What are the trends you’re seeing, and how does BCD see organizations and businesses utilizing and benefiting from video camera systems and these analytics today?

Darren Giacomini: Well, they’re getting—they’re becoming incredibly powerful. People are building analytics in to not only count objects but also look for objects that have been left behind. We also see a lot of things where they’re starting to use these cameras to set up IoT devices at the edge in smart cities, where they can actually look at parking stalls and parking lots and determine what lots or what spaces are open or empty.

And now you’ve got smart cities where you can open up your smartphone instead of driving around endlessly looking for a place to park your car. It will say, “Oh, by the way, there are two spots 300 feet up on the right.” Or “No, you need to go down one level in the parking garage, and there are five here.”

So, cameras can pull a lot of that analytical data and a lot of that metadata in to be analyzed. There’s a lot of applications we’re seeing for cameras specifically in IoT devices that reside at the edge of networks.

Christina Cardoza: Yeah, absolutely. And going back to that point you made in your intro, where you have all of this data coming and going into different places: It’s becoming even more important for businesses to extract value from what the video cameras are capturing in real time, and to understand what are false positives versus what’s actually actionable. That just amplifies the data—so much more of it going to different places, and needing to get to people and businesses faster. So, can you talk a little bit more about that data challenge you introduced in the beginning? What are the problems businesses are facing with their camera systems, especially now as their use cases grow beyond just physical security?

Darren Giacomini: Well, it’s twofold. I mean, number one, you start talking about hundreds if not thousands of cameras, and you’re seeing a trend where people are starting to expand their retention periods—retention periods meaning how long you have to keep the video at full frame rate. And in some cases you may be talking about seven days; some people keep it for thirty days; some people want to keep it, in some of these correction facilities, for two years. Think evidentiary value.

When we start talking about holding that high a quality of video for that time frame, you’re talking about an enormous amount of storage—petabytes and petabytes—and analytics come into play with that. When you look at smart cities and other things that may have not a hundred but thousands of cameras put throughout the city, and you’re having to store all the data from all those cameras all the time for extended periods of 12 months or 24 months or even longer, it can become not only incredibly expensive but difficult to maintain. You are going to have drive failures. You are going to have things that happen in trying to retain that data.
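
Darren's storage math is easy to sanity-check with a back-of-envelope calculation; the per-camera bitrate is an assumed figure for 1080p at full frame rate, not a BCD spec.

```python
# Back-of-envelope video storage estimate (assumed bitrate; illustrative).
CAMERAS = 2000                 # a smart-city-scale deployment
BITRATE_MBPS = 4               # assumed average for 1080p at full frame rate
RETENTION_DAYS = 730           # two-year evidentiary retention

bytes_per_day = BITRATE_MBPS * 1_000_000 / 8 * 86_400   # per camera
total_bytes = bytes_per_day * CAMERAS * RETENTION_DAYS
print(f"{total_bytes / 1e15:.1f} PB")                   # ~63.1 PB here
```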

So a lot of what you’re seeing in the movement with 5G networking and IoT devices at the edge is that people are trying to push the decision-making out to the edge to determine what video is important and what’s not. A little-known fact is that in most cases maybe only 5% to 10% of the video that’s recorded is ever used. So the rest of that is just being stored in places where we have these large volumes of data that we have to sift through, and analytics and whatnot can help us find things quicker.

For instance, I can run a search over a two-year period of data that says I want all white trucks that went this direction during this time frame on any day. And you can pull that video back and look at it based upon an analytical analysis. But the idea of doing that at the edge for 5G is, if we can determine what’s important and what’s not important, then we don’t have to store everything. So I think analytics is going to play a huge, huge part in trying to scale back the massive amount of data and resources that we’re currently seeing.

And I think the approach is going to change. Today it’s: keep everything, everything has to be kept for two years. And we’ve seen throughout the years people have changed the rules a little bit. Everything has to be kept at 30 frames or 60 frames per second, 1080p, for maybe six months. And then we can drop that frame rate down and go down to 15. But what we can’t do is drop it below the thresholds for evidentiary value in municipal courts: If you bring video into a court system and it’s used as evidence, it has to meet a certain threshold for municipalities—meaning certain frame rates, certain resolutions, so that you can actually identify the people that you’re looking at.

So we have big changes on the horizon, and I believe that analytics are going to play a big part in helping us get some of these massive amounts of data under control.

Christina Cardoza: Yeah, and you make a great point that it’s not just the real-time analytics that businesses are getting. They want to store this data—the historical data—so they can predict patterns, or see how they did year over year, and gain more insights beyond just the real-time analytics. Which can, like you mentioned, cause a storage issue: It can be expensive to store all that data, and then it can be slow to access it.

So I’m curious, we mentioned that we’d be talking about virtualization in the beginning of this podcast. I’m curious what’s the role of virtualization in easing some of this congestion—some of this storage issue in this video data–analytics space today.

Darren Giacomini: When you look at virtualization in particular, it comes down to utilization of your resources. So, in a typical physical security environment you’re going to have cameras that reside at the edge, or IoT devices or sensors or things that are bringing data back. You’re going to have a network infrastructure—whether that be backhaul wireless, or whether that be hardwired network infrastructure—that’s going to bring that back to a centralized or decentralized point of recording where we store it for a certain amount of time.

And then in some cases you may have second-tier storage behind that, where you have a primary high-performance storage tier, and then you have a secondary, lower-performance storage tier. But, regardless, you have to get that data back to those storage tiers. And then at the same time you have people who are responding in real time, people who are actually watching these cameras, that are pulling them up and need to see a particular event. You’ve got a car accident, you’ve got road blockage, you’ve got this and that. You need to be able to pull that video up in real time and actually see what’s going on there. And, in any of those, that requires taking that data and either bringing it directly from the camera, or redirecting it through the server out to the workstation—all of that utilizes resources.

But if we analyze specifically the most important segment in there, and what people are most concerned about, it’s where we store the video. And where we store the video, these are nothing more than servers. Maybe they’re backed by a SAN or a NAS for more retention on the backend, but they’re servers, and servers have finite resources. You have CPU, you have memory, you have network resources, you have things in there that drive the horsepower, the efficiency of that particular device. And, on average, only about 55% to 60% of the CPU cycles are used on any given archiver.

So when you’re doing a bare-metal server approach—when you’re buying a server and you’re putting it in there and you’re not virtualizing—you may be leaving 40%, 45% of the CPU cycles and cores that are allocated to that server unutilized. And it has nothing to do with the server’s capability itself. It may have to do with the fact that you’re running on Windows Server 2019 or Linux or whatever you’re running on, and you can only load one instance of that software application on there. And if that instance that can run on that operating system only utilizes 60% under max capacity, the other 40% becomes unused. There’s really nothing you can do with it.

So virtualization allows us to add in basically an abstraction layer like VMware ESXi, Hyper-V, or Nutanix—one of the many platforms that are out there. And it allows you to put that abstraction layer in and take the hardware and make it a pool of resources for that particular device. So now you virtualize those machines—what would be a bare-metal server as an archiver, you virtualize that into a flat-file structure, and you virtualize maybe the directory servers into a flat-file structure, or you virtualize whatever other entity for access control or whatever else is in there. You virtualize that into a flat-file structure that can be stored somewhere on a common share point.

And when you do that you have the ability to create more than one instance on that machine. So instead of running one instance of Windows Server 2019 maybe I run five, and I divide the resources up amongst that machine, and I can take that CPU and memory that wouldn’t traditionally be utilized and actually more effectively utilize it and get more production out of what I bought.
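
A quick sketch of the consolidation arithmetic behind that argument. The per-workload demand is an assumption chosen to mirror the kind of eight-servers-to-three consolidation Darren goes on to describe; real sizing depends on measured workload profiles.

```python
# First-fit sketch of packing archiver VMs onto virtualized hosts
# (illustrative demands; real sizing depends on workload profiles).
def pack(vm_demands, host_capacity):
    """First-fit bin packing of per-VM CPU demand onto hosts."""
    hosts = []
    for demand in vm_demands:
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])
    return hosts

# Eight archiver workloads, each assumed to use ~30% of a modern host's CPU
demands = [0.30] * 8
hosts = pack(demands, host_capacity=0.90)  # keep 10% headroom per host
print(f"{len(demands)} workloads -> {len(hosts)} hosts")  # 8 -> 3
```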

So naturally you’d think for a company like BCD, where we sell high-performance servers into this market, that would be something we don’t want to happen, but it’s going to happen regardless. Virtualization has been in the IT field for a very long time. It’s penetrated this side of the market; there’s nothing you can do about it. You have to embrace the fact that people are wanting to do more with less with respect to footprint, and the power footprint and the cooling footprint and everything that goes along with that naturally make these projects work more efficiently when you virtualize.

Christina Cardoza: So the benefits seem obvious, being able to utilize this virtual architecture and maximize all of the system’s resources. The one thing that I’m curious about is, obviously with any move or changes that businesses have to make, there’s always a knowledge constraint, or there are technology constraints, or there’s just not a clear path on how they do this.

So I’m curious, from your perspective, how can businesses successfully move to virtualization, or how set up or ready would you say they are to make this move for those who have been relying on just bare metal and some of these traditional approaches you mentioned?

Darren Giacomini: I think we’re embarking on a journey that’s very similar to what we already went through when we saw the shift from analog to digital, or IP. In essence, there used to be analog matrix bays with analog cameras, and physical matrix switchers that would switch the views for you to look at these cameras. And when I first came into this market that’s what was predominantly out there—I worked for Pelco at the time; it was all matrix-based switching, everybody was utilizing those—and IP digital was starting to emerge in the market, and we could see that it was coming like a freight train.

The fact of the matter is things were going to go IP digital; a matrix-switching bay was just not efficient. They were very good at what they did, but IP digital was catching up to them. And so during that time frame of seven, eight, nine years I did a lot of training and discussing with integrators about skill sets and how they were going to have to modify and change. And now almost everything we push out into the market is digital.

I think we’re going to have a lot of the same growing pains, where you’re going to have people that have a particular skill set today that they’re used to, and they’re going to have to modify their approach. If they don’t modify their approach, where they’re really going to feel it is in their pocketbook.

Because the fact of the matter is, if you’re quoting eight servers and I can do the same thing with three, I’m going to outbid you—I don’t care how much VMware costs. Three servers versus eight—even if we’re close or even if I’m a little bit more, when I go back to the interested parties or the people who are making the decisions specifically on the financial side, and I tell them, “Okay, take a look at the total cost of ownership of running eight servers versus three. Okay, this is going to be more efficient, this is going to take up less real estate in your data center, this is going to take less power to keep it cool.” And all these things come into play.

And I think there’s going to be a little bit of a struggle, like we saw in the movement from analog to digital, and you are going to have some people that unfortunately drop out of the market because they don’t want to make that transition. But I don’t think it’s going to be as big. I think everybody’s used to working with Windows now; everybody’s used to working with servers. It’s just that next step of learning that, instead of plugging into the front of a server, I’m going to get into a webpage that’s going to represent that server, and that’s the virtual machine that I’m going to work off of. It will be an educational experience, and people are going to have to take their employees and send them out probably for some training to come up to speed on that.

Christina Cardoza: So how is BCD helping with this educational portion of it, or helping businesses make this transition? Do you have any—we talked about use cases in industries in the beginning—but do you have any particular examples or customers that you can share with us how you guys went in and helped them make the transition, and what the benefits they saw were from moving to virtualization?

Darren Giacomini: Yeah. I’m not going to call out the particular city, but there’s a city on the East Coast where literally their entire municipality is running off of what was our revolve platform, a hybrid hyper-converged approach based on virtualization that also gives you high availability: if a particular server fails, the VMs move and migrate and continue to operate. And that literally cut down on the things that they had to do, like getting people up at 2:00 or 3:00 in the morning to deal with a server that lost a power supply.

You know, when you build a high-availability infrastructure like that and you virtualize, you see the benefit to those particular customers: Virtual machine mobility between nodes is possible, so if I unplug a node, all the VMs that ran on it are going to move to the other nodes in three to five minutes and start running again. That’s a very powerful thing. Because the fact of the matter is, in the time frame that that happens, they couldn’t even pick up a phone and call their integrator, and the chances are their integrator is not going to roll a truck there until tomorrow. So what do you do in the meantime? You scramble to figure out whether you have to put people out on the street to patrol, or whether you have to watch things.

There’s some huge benefits that we brought to that particular municipality, in the fact that we gave them a much, much smaller footprint than what they had before—from 20 or 25 servers down to, I think, 4 or 5 to run everything in a virtualized environment. And then we gave them that high-availability aspect. That means that in the middle of the night when things go wrong you’re not scrambling to send somebody out to the site; it’s going to run an automated recovery-feature set for you.

Christina Cardoza: Wow. So it sounds like you guys were really able to come in and make some of these improvements. One thing that keeps coming to mind—and I should mention that the IoT Chat and insight.tech as a whole are sponsored by Intel—but when we’re talking about all of the storage and memory and compute, and being able to get access to data and real-time analytics and gain value, I know Intel and Intel processors are behind a lot of this, making it happen and making it easier for businesses. And I know BCD has partnered with Intel before, so I’m just curious what their role is, or how the Intel technology plays into making some of this happen, as well as if there are any other partners that you work with in this whole move to virtualization.

Darren Giacomini: Sure, we do. In virtualization we work with Nutanix; we work with VMware; we obviously work with Microsoft Windows and Hyper-V. Those are all staples and companies that we work with. But, for the most part, when you start to think about partnerships, the Intel side has been very instrumental in the things that we’ve done. In my particular role at the company I work with the professional services team, but I also do new-product development. So, emerging technologies and new products that we develop typically come through my office to actually engineer them, validate them, and make sure that they’re up to spec.

And that’s, I guess—well, I didn’t really hit on it—that’s one of the things that sets BCD up as a differentiator. We don’t just build servers; we really do holistic, end-to-end solution sets. We have expertise in operating systems, networking, SAN storage, infrastructure, all the VMS platforms. So when those integrators find themselves in a situation where they can’t find a very simple solution to something, or they can’t figure something out, we’re there to back those integrators up—whether through professional services or giving them assistance to point them in the right direction.

And that also plays a part in this virtualization we’re headed toward. Today most of the industry’s not ready for virtualization, yet my team will build and ship a turnkey solution—one that’s already been put together with a topology—so all they really have to do is plug it in and turn it on.

But that’s where partnerships with Intel come in. If I put in a request to get 40 or 100 gig NIC cards to see how far I can push them, that’s going to get shot down before it even reaches the upper-management or executive levels in most cases, because they’re not cheap—and because it’s unproven technology. As ESXi and VMware brought in 40 and 100 gig support, we needed to figure out where the choke point is. Is it the actual box itself? Is it the NIC card? Is it the software? What are its upper bounds? What is it capable of doing? And Intel has been really good about providing resources like these network cards, and other resources we need to do analytics and things of that nature, to help us push the envelope and figure out exactly what we can do on their platforms. It’s been instrumental, specifically to what my team does.

Christina Cardoza: Yeah, that’s great to hear. And I think it’s important to note that with a turnkey solution businesses don’t have to have all the expertise or knowledge in-house. There are partners out there like yourselves that can really make this easy for them and really make it foolproof.

You mentioned some of the other services that you offer within the company, and we’ve been talking about the IoT—all of this is moving to the edge, and all of these IoT networks are being created. So I’m wondering, beyond the physical aspects, where do you see virtualization going, or where else can virtualization help organizations?

Darren Giacomini: Across the board it’s really about utilizing your resources more efficiently. There are some companies out there that just have so much money they don’t care—it’s easier for them to do pizza-box servers or bare-metal solutions, and they don’t want to get involved with the complexity. But there are certain things you can’t do that way, either.

When you start talking about virtualization, you start talking about the ability to create recovery points and snapshots. And we partner with a company called Tiger Technology that is, in my opinion, absolutely outstanding at next-generation hybrid-cloud storage—meaning that you have an on-prem presence, you’re not pushing your cameras straight to the cloud, but you have the ability to get hooks into NTFS, the actual file structure inside of Windows, and make the cloud an extension of that platform. At any given time I can take backups, or multiple instances of backups, and push those out to the cloud for recovery. Or the same can be done locally. But you can’t really do that type of deal in a bare-metal environment.

In bare metal, what were we doing? We had tape carousels; we were running these just atrocious backups that would take forever to get things onto tape, and then storing that offsite, back in the day. The fact of the matter is, VMware gives you the ability to do snapshots, and most virtual platforms do. If I can take a snapshot and create a repository of snapshots, I’m covered when something goes wrong. And, you know, one of the things that’s always high on my list is cybersecurity. What is your disaster-recovery plan? What is your business-continuity plan if you get hit? And the fact of the matter is, everybody is going to get hit. I don’t care how careful you are; there are zero-days out there that are going to hit you at some point.

And most of what we’re seeing today is ransomware. They’re going to crypto-lock your critical resources and say, “How valuable is it to you? And how much Bitcoin do you have? Send us this much, and we’ll send you the key to unlock it.” You don’t find yourself in that scenario when you’re running a virtualized environment and taking regular snapshots. You can actually say, “This is an acceptable loss. Roll the snapshot back one week; let’s take the one-week loss rather than paying out the crypto. Go in, make sure we’re back up and operational and not crypto-locked, and move forward.”

And to do that in a bare-metal environment is just not realistic. I mean, you could, but the simplicity of taking those snapshots and rolling back—and how quickly you can roll back to a different version of a virtual machine—is unparalleled compared to the bare-metal market.

Christina Cardoza: Yeah, absolutely. And it’s important to remain reliable and to uphold the standards your brand has set—to be able to roll back and not have everything shut down, or not have continuing issues. So I think that that’s also something important to note.

We’re nearing the end of our conversation, but before we go I just wanted to throw it back to you one last time, sort of an open-ended question. Is there anything else about this topic you think our listeners should know, or are there any final thoughts or key takeaways you want to leave listeners with today?

Darren Giacomini: It really comes down to this: If you’re an integrator out there, if you own an integration company and you deal in the physical security market, you can ill afford to ignore the fact that virtualization is coming—and that today you’re only using a fraction of the resources on those servers. The fact of the matter is, there are people who are figuring this out, and when they go into these bids offering a more functional, more highly available solution that’s $60,000 cheaper than yours, it’s going to be very, very difficult for you to be competitive.

And there are going to be stragglers in the market, and there are going to be places that are not going to virtualize for a very, very long time. But I predict the large majority of the market is going to hit mainstream virtualization in the next three to five years, and you either need to get on board with that or you’re going to find yourself in a situation where it’s very difficult to be competitive in the market.

Christina Cardoza: Absolutely. And I like that point, the three-to-five year prediction, because I think that businesses and organizations—they’re going to have to make the move, or the transition to virtualization is going to come sooner than they think. And it just sounds like it’s going to benefit them, and this is going to help their businesses succeed moving forward. So, with that, I just want to thank you, Darren, for joining the IoT chat today, and for the insightful conversation. And thank you to our listeners for tuning in today. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

AI-Powered Self-Service Checkout Eliminates the Wait

You finally got tickets to see your favorite team play. You slip out ahead of halftime to grab a snack, and while you’re standing in the long concession line, you hear the roar of the crowd inside the stadium. Your first thought: What did I miss? Your next thought: Why is this line so slow? Sure, you can watch the instant replay, but it’s not as exciting as seeing the amazing play unfold in front of your eyes.

Other retail environments, like grocery stores, are reducing wait times with self-serve kiosks, but businesses that sell non-barcoded items—like that gameday hot dog and beer—must rely on human cashiers. Unfortunately, staffing shortages are increasing customer wait times.

Long lines often translate to lost sales, but now, AI-driven self-checkouts, such as those from Mashgin, a creator of frictionless checkout experiences, are solving the problem by using artificial intelligence (AI), computer vision, and object recognition to perform point of sale transactions.

Long lines often translate to lost sales, but now, #AI-driven self-checkouts are solving the problem by using #ArtificialIntelligence, #ComputerVision, and object recognition to perform point of sale transactions. @Mashgin via @insightdottech

Changing the Game with Self-Service Kiosks

Like most innovations, the inspiration for Mashgin (which stands for “mash-up of general intelligence”) came from a personal experience. In 2013, Mashgin Founder Mukul Dhankhar was working on self-driving cars and humanoid robots at Toyota’s computer vision lab in the Netherlands. Every afternoon, he’d grab a salad and soft drink at the onsite cafeteria, but it would take forever to check out because everyone in the corporation had the same two-hour block for lunch.

Standing in line, Dhankhar thought, “I can fix this.” And he did. Anchored in computer vision and deep learning, the Mashgin Touchless Checkout System identifies and rings up products using multiple cameras. Customers place all their items on the checkout tray. The system, which is powered by Intel® processors, generates 3D images of the products, matches them to the inventory database, and rings up the entire tray in less than one second. Weighed items, like salads, are placed on the kiosk’s scale. Customers pay with their credit or debit card or cash and are on their way (Video 1).

Video 1. AI-driven object recognition creates self-service checkouts that speed up transactions and boost the customer experience. (Source: Andy Peacock)

Items that aren’t already in Mashgin’s database can be easily scanned into the system. “Teaching the machine an item takes about 30 seconds,” says Toby Awalt, Vice President of Marketing for Mashgin. “The different cameras take poses, which are a collection of shots that create a 3D profile. The magic is how few shots you need to go from not recognizing an object to recognizing an object. That data is then sent out to all the other machines in the network, and every machine gets smarter.”
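
Mashgin doesn’t publish its internals, but the enrollment-and-matching flow Awalt describes maps onto a familiar pattern: embed a handful of camera shots, average them into a single item profile, and match tray items to the closest enrolled profile. The sketch below is a hedged illustration of that pattern only; the function names, the 128-dimensional embeddings, and the placeholder extractor are hypothetical stand-ins, not Mashgin’s system.

```python
import numpy as np

# Hypothetical sketch of multi-pose enrollment and nearest-neighbor matching.
# extract_embedding() stands in for a trained vision model; it is not Mashgin's API.

def extract_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder: a real model would map a camera shot to a feature vector."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def enroll_item(name: str, shots: list, catalog: dict) -> None:
    """Average embeddings from a handful of poses into one item profile."""
    profile = np.mean([extract_embedding(s) for s in shots], axis=0)
    catalog[name] = profile / np.linalg.norm(profile)

def ring_up(tray_crops: list, catalog: dict) -> list:
    """Match each detected item crop to the closest enrolled profile."""
    names, profiles = list(catalog), np.stack(list(catalog.values()))
    return [names[int(np.argmax(profiles @ extract_embedding(c)))] for c in tray_crops]
```

In a real deployment the enrolled profiles would also be synced across the fleet, which is the “every machine gets smarter” step Awalt describes.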

Mashgin deploys the system in as little as 15 minutes. Awalt says that market readiness is important because labor problems are a big issue for retailers. “We literally come and drop the machine on the counter, do an accounting check, and it’s ready to go,” he says.

Once installed, Mashgin operates as a “Checkout as a Service” model. “We take on hardware warranty, support, algorithm updates, and more,” says Awalt. And like other checkout systems, Mashgin collects real-time sales data, which helps retailers track inventory and maintain supply levels.

Smart Retail Solutions of the Future

By lowering average checkout times, high-traffic businesses such as convenience stores, cafeterias, airport vendors, and arenas can dramatically decrease lines and improve the customer experience.

For example, Mile High Stadium, home to the Denver Broncos NFL team, integrated 30 Mashgin kiosks into its concessions system. With a capacity of more than 76,000 fans, the stadium experienced a 100% increase in overall throughput speed and a 34% increase in sales in its concession areas after the solution was deployed. The median transaction time was reduced to under 15 seconds.

“We know every sports stadium has a line problem,” says Awalt. “We’re helping them drive sales and create better fan experiences. With Mashgin, you can post one person at the front to check IDs, and one person at the back to make sure that any liquids are open, the tops are off. In between them, you can place two to 10 machines that are all doing that huge throughput.”

Awalt says the convenience store chain Circle K is putting 10,000 Mashgin units into its stores to help address labor turnover. The solution frees up staff members to focus on customer service or take care of tasks such as stocking shelves or cleaning the store.

“Our machines are two to four times faster than the typical cashier,” says Awalt. “In a convenience store, we can put two machines on a counter and turn one person into five. The employee can say, ‘Hey, these checkout machines can help you over here. If you need a lottery ticket, I can help you on the front register.’”

Named one of Fast Company’s Most Innovative Companies for 2022 and included on Forbes’ AI 50 list, Mashgin is helping drive the future of retail, providing a better experience for customers while protecting sales for retailers. Someday the only line in a stadium could be the one to the women’s bathroom.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI Biometrics Modernize Public Transit Operations

Urban rail transit is making our cities more livable and more sustainable—but as rail operators are discovering, mass adoption brings growing pains as well. A good example is the case of automated fare collection (AFC) systems based on QR codes and fare cards. Rail operators have turned to AFC to address the challenges brought on by an increasing number of riders, such as bottlenecks at ticketing kiosks, overcrowding safety issues, and inefficiency caused by large, disorganized crowds.

But while AFC solutions are an improvement over manual ticketing, challenges remain.

“Fare cards can be slow, and the big problem with QR-based approaches is that they don’t work when there’s poor signal, or when a passenger’s smartphone battery is dead,” says Hukemei, Sales Director at Huaming, a manufacturer of AFC solutions for public transportation. “In addition, much of the hardware used in current solutions is not ruggedized, which becomes a problem when these systems have to withstand extreme temperatures and the powerful vibrations found in many urban rail settings.”

Until recently, there wasn’t a better alternative. But now, AI computer vision techniques, purpose-built edge AI hardware, and AI software development toolkits (SDKs) enable a new kind of AFC based on palm vein recognition. These solutions will make urban rail transit safer, easier, and more efficient. And this same technology will enable biometric identification in other settings as well.

Edge AI Powers Biometric Ticketing Kiosks

Palm vein recognition is high-tech, but its main benefits stem from human physiology. The pattern of veins in a human hand is as unique as a fingerprint and remains stable throughout a person’s adult life. And hands, of course, in contrast to smartphones, do not depend on cellular networks or batteries. This makes palm vein patterns ideal for biometric identification: They’re consistent, hard to spoof, and “always on.”

But to capitalize on these biological advantages requires technological sophistication. Huaming’s AFC solution incorporates several edge AI-powered technologies to provide a comprehensive ticketing system for urban train operators.

Passengers begin by registering their palm vein print at a customer service kiosk. This connects their biometric data to their user information, enabling future palm-based identification and automatic payment at smart ticket gates.

When a passenger wants to enter a station, they simply hold their hand in front of a scanner, which uses near-infrared light to capture an image of their palm vein pattern. An edge AI appliance performs feature extraction, encryption, and compression, and then sends the resulting data to an edge server to check for a match. On average, verification takes about a tenth of a second.
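
A minimal sketch of that appliance-to-server split, assuming a cosine-similarity match against enrolled templates, might look like the following. The feature extractor, template store, and threshold are stand-ins, and the encryption and compression steps are omitted; this is not Huaming’s implementation.

```python
import numpy as np

TEMPLATES = {}            # passenger_id -> enrolled template (unit vector)
MATCH_THRESHOLD = 0.90    # hypothetical similarity cutoff

def extract_features(nir_image: np.ndarray) -> np.ndarray:
    """Stand-in for the palm vein feature-extraction model on the edge appliance."""
    v = nir_image.astype(np.float64).ravel()[:256]
    return v / (np.linalg.norm(v) + 1e-9)

def enroll(passenger_id: str, nir_image: np.ndarray) -> None:
    """Registration step at the customer service kiosk."""
    TEMPLATES[passenger_id] = extract_features(nir_image)

def verify(nir_image: np.ndarray):
    """Matching on the edge server: return the best match, or None to reject."""
    probe = extract_features(nir_image)
    best_id, best_sim = None, 0.0
    for pid, template in TEMPLATES.items():
        sim = float(probe @ template)   # cosine similarity on unit vectors
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id if best_sim >= MATCH_THRESHOLD else None
```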

Hardware and Software Built for AI at the Edge

From a passenger’s perspective, palm vein-based identification is as simple as holding out their hand. But behind the scenes, there’s a lot of computational heavy lifting going on.

“A biometric AFC solution has to be seamless for the passenger and highly stable for rail operators,” says Hukemei, “but that requires executing complex computer vision and AI processing tasks, at the edge and at scale, with very little room for error.”

To develop a solution capable of delivering high-performance AI processing at the edge, Huaming leveraged several Intel technologies:

  • Intel Atom® x6000E Series processors power the edge AI appliance, providing a high-performance computer vision and edge AI processing platform that operates reliably even under extreme conditions.
  • Intel® Xeon® processors handle feature matching on the edge AI server, while the processors’ built-in Intel® AVX-512 instruction set helps to optimize feature matching at scale.
  • Inference acceleration with the Intel® Distribution of OpenVINO Toolkit improves the performance of the palm vein feature extraction model by nearly 4X and significantly decreases the inference error rate (a minimal loading-and-inference sketch follows this list).
  • The Intel® Feature Matching Acceleration Library is used to achieve the kind of large-scale feature matching required by an AI biometrics solution for mass transit.
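
As a rough illustration of where OpenVINO fits in such a stack, loading a converted (IR-format) model and running one inference on an Intel CPU looks something like the snippet below. The model file and input shape are placeholders, not Huaming’s actual palm vein model.

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("palm_vein_fe.xml")     # placeholder IR model file
compiled = core.compile_model(model, "CPU")     # target an Intel CPU at the edge

nir_frame = np.random.rand(1, 1, 128, 128).astype(np.float32)  # fake NIR input
result = compiled(nir_frame)                    # run inference
features = result[compiled.output(0)]           # extracted feature vector
print(features.shape)
```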

The result is a stable, high-performance edge AI-powered solution that performs well under the most demanding of conditions. “Intel processors are an excellent computing platform for AI at the edge,” says Hukemei. “And Intel’s AI acceleration tools and models are a great help in speeding development work and shortening time to market.”

Beyond Urban Transit: AI Biometrics for Other Scenarios

In the coming years, AFC solutions based on palm vein recognition should generate great interest among systems integrators (SIs). The technology is efficient and robust. And better still, it can be implemented on top of existing automatic ticket gate functionality. Huaming’s solution, for example, allows passengers to choose between using old-style fare cards, QR codes, or palm vein recognition. That makes implementation less of an either-or decision for city planners and urban rail operators—and an easier sell for SIs.

In addition, AI biometrics based on palm vein recognition will find use cases beyond public transportation. The technology offers a number of universally attractive benefits: secure, contactless identity verification; a low rate of false recognitions and false rejections; and an underlying computing platform that supports data collection and analytics.

“The possibilities are really exciting,” says Hukemei. “We see applications for this technology in other forms of smart transportation, in smart communities, in smart cultural tourism, and more. It’s going to make our cities safer, healthier, and more efficient.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Covering All Bases with Unified Security Solutions

We’ve become so used to filtering the world through a computer or smartphone screen that it’s easy to forget about dangers and concerns IRL. But cybersecurity is not the only game in town. Physical security is still necessary to protect our employees, products, factories—even our wildlife.

We talk with David Trujillo, Sales Engineer at AxxonSoft, a leader in video-management software, not just about preventing theft and damage, but about solving real-world challenges for businesses looking to protect people, property, and assets. He discusses the definition and benefits of a unified security system, the role of AI in that system, and some fascinating—and unexpected—use cases.

What are some features of the physical-security landscape today?

The first thing I would mention is the service approach to providing security solutions using cloud technology. Some examples are video surveillance and access-control systems. But given that security systems themselves have to be on-premises, what does a cloud-based solution mean in that context? A cloud service typically acts as a managing system that collects, stores, and analyzes data from devices. It also manages user rights, and provides access to administration and control-monitoring interfaces.

This is akin to a SaaS approach. For instance, video surveillance as a service—or VSaaS—systems can store video archives in the cloud, and you can have only the cameras installed at a site. Or you can have a hybrid deployment, with cameras and video storage both on-site but the cloud service used for remote video monitoring and system management. A cloud-systems solution offers users many benefits: There’s low upfront expenditure, easy scaling for large-scale deployments, out-of-the-box remote monitoring, and clear cost planning through a pay-as-you-go model.

The other two big trends to take note of are integration and the use of artificial intelligence—namely neural-network video analytics. The first of these, integrated solutions, improves efficiency through more effective configurations that wouldn’t be possible with standalone systems. Video surveillance can be combined with access control, smoke and fire detectors, intrusion alarms, and even building-automation systems. For example, when an alarm sensor is triggered, the video-monitoring software can be configured to immediately give the operator video feeds from nearby cameras, enabling that person to assess the situation quickly and react accordingly. Another example: When the last employee leaves the premises at the end of the day, you can set it so the lights automatically go out and the ventilation switches to a lower intensity. The intrusion alarm can be armed at the same time. There are plenty of scenarios like this, where automation can make facilities more secure, energy-efficient, and cost-effective.

“#IntegratedSolutions improves efficiency through more effective configurations that wouldn’t be possible with standalone #systems.” – David Trujillo, @AxxonSoft_EN via @insightdottech

What’s the difference between integrated and unified security?

They’re often used interchangeably, but in general the term “integration” usually comprises a wider variety of solutions. Let’s consider an example where you have video surveillance and access-control systems. When someone swipes their access card at the reader, the video-surveillance system receives an event from the access-control system, which then triggers video recording. This way you get recorded footage every time someone passes through an access point. The event itself may contain the employee’s name and ID number, so the footage can quickly be searched on those parameters. In this example of event-based integration, the systems are independent, with each one having its own user interface, separate configuration, hardware, and so forth.
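
In code, this event-based pattern reduces to a small handler: the access-control system emits an event, the video system is told to record, and the event’s metadata is attached so the footage can later be searched by name or badge ID. Every name below (classes, methods, the camera-naming rule) is a hypothetical sketch, not AxxonSoft’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccessEvent:
    employee_name: str
    badge_id: str
    door: str
    timestamp: datetime

class VideoSystem:
    """Stands in for an independent video-surveillance system."""
    def __init__(self):
        self.recordings = []   # (camera, tags, start, end)

    def record_clip(self, camera: str, tags: dict, duration_s: int = 30):
        start = datetime.now()
        self.recordings.append((camera, tags, start, start + timedelta(seconds=duration_s)))

def on_access_event(event: AccessEvent, vms: VideoSystem):
    """Called whenever the access-control system raises an event."""
    camera = f"cam-{event.door}"   # pick the camera nearest the access point
    vms.record_clip(camera, tags={"name": event.employee_name, "badge": event.badge_id})

vms = VideoSystem()
on_access_event(AccessEvent("J. Doe", "B-1042", "lobby-east", datetime.now()), vms)
print(vms.recordings[0][:2])       # footage is now searchable by name and badge ID
```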

“Unified” implies a deeper level of integration, where unified software manages all devices—both video cameras and access-control devices in our example. So what are the benefits of that? First, there’s a single interface for video and access control, which is complemented by other features like a 3D map, for example. The access-control database photo of the ID-card owner might be displayed next to the face image captured by the camera along with, for example, their name, their job title, and any other pertinent information. You can grant access manually if the photos match, or you can take appropriate action if there’s a mismatch. You can do all of this from just one interface without needing to switch between windows, and you can monitor technical perimeters like hardware status or system health all in one place.

Are there other benefits of unified security?

Benefits include implementation of new features that aren’t available within self-contained systems. This produces improved situational analysis based on information from multiple sources, while reducing the amount of information an operator has to process. This makes the operator’s work more efficient, which in turn reduces the likelihood of mistakes. Additionally, open-platform solutions allow you to combine equipment from different manufacturers and manage it all from a single control center. This minimizes the cost of equipping the facility by reducing the amount of software and hardware needed.

And this isn’t just limited to security. For example, time-and-attendance systems that are typically included as part of access-control systems can be integrated with corporate accounting, providing an efficient, automated workflow. Traffic-enforcement cameras can be integrated with systems that issue fines for violations. There are plenty of solutions where interoperation has the ability not only to improve security but also to optimize business processes.

Talk about the role of AI in enabling some of these opportunities.

AI is often used for accurate detection of specific shapes and objects. In the context of security-related applications, the greatest demand is for detection of human intrusion into a protected area where there’s a large amount of nonrelevant motion, such as foliage blowing, rippling water, precipitation, etc. A simple motion detector will produce numerous false alarms, because it picks up everything that’s moving. AI helps filter out those false alarms so operators won’t be distracted by them and can focus only on real threats. When every single possible thing causes an alarm, operators quickly learn to ignore them, and they’re going to miss it when something serious actually happens.
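
A bare-bones version of that filter can be assembled from OpenCV’s stock components: raise an alarm only when foreground motion coincides with a detected person. This is a generic sketch of the idea, not AxxonSoft’s analytics; the video file and thresholds are placeholders.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
backsub = cv2.createBackgroundSubtractorMOG2()

cap = cv2.VideoCapture("perimeter.mp4")        # placeholder camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    motion_mask = backsub.apply(frame)
    if cv2.countNonZero(motion_mask) < 500:    # ignore trivial motion (rain, foliage)
        continue
    people, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(people) > 0:                        # motion AND a person: a real alarm
        print("ALERT: human detected in protected area")
cap.release()
```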

How does AxxonSoft ensure data privacy?

In a lot of different regions around the world—for example, in Europe with the GDPR, and in California with the CCPA—there are various local regulations that protect people’s rights in this way. And, of course, AxxonSoft offers features that allow the VMS to be in compliance with those regulations. Various things we can do include blurring people’s faces, or even their whole bodies—AI analytics are able to detect where a person is and accordingly blur only those details, instead of blurring the entire image. We can also block off certain areas of the footage that might not be appropriate to view, and control exactly who is able to see them.
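
Selective blurring of this kind can be approximated with a stock face detector and a Gaussian blur applied only to the detected regions. The snippet below uses OpenCV’s bundled Haar cascade as a generic illustration, not AxxonSoft’s implementation; the file names are placeholders.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def redact_faces(frame):
    """Blur only the detected face regions, leaving the rest reviewable."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

frame = cv2.imread("lobby_frame.jpg")           # placeholder exported frame
if frame is not None:
    cv2.imwrite("lobby_frame_redacted.jpg", redact_faces(frame))
```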

What are some of the challenges businesses face as they add these new capabilities?

Implementing a unified security system is more complex than implementing a standalone one. You’re going to need a more qualified integrator, and possibly custom integrations and functionality enhancements. This is crucial for high-end installations like systems for large enterprises or citywide public-safety setups.

I know one concern a lot of people have is “Will this work with my existing cameras?” But that’s something we definitely keep in mind. We want to have our system always work with existing systems, not require people to tear out all their cameras and install new ones.

What are some customer examples or use cases of AxxonSoft making all this possible?

We have a few great examples. For a case of AI human detection, we have several implementations of this technology in wildlife refuges in South Africa, where systematic poaching is a devastating epidemic. Most of the parks where these poaching crimes occur are fenced, but protecting these areas with perimeter security systems and video surveillance hasn’t really been effective. Those systems generated quite a few false alarms because the animals themselves frequently bump into the fences. Security staff could not possibly monitor every single event, and they frequently misread the threats when poachers actually intruded. AI human detection has helped solve this problem by distinguishing humans from animals.

An example of custom-trained AI analytics is detection of personal-protective equipment, used for enforcing workplace safety. It locates individuals not wearing their hard hats, high-visibility vests, or other protective clothing—the system can actually detect someone’s head, torso, and legs independently to check for the appropriate gear. You can integrate that with access control, and the system becomes even more efficient through preemptive detection. The camera can be mounted at an access point, and when an employee swipes their access card, the turnstile will only open if they’re wearing their protective equipment.

Integration of video surveillance with third-party systems is also widely used for cashier operations and supervision at point of sale. The video-surveillance system receives data from cash registers, then links that to video feeds; you can superimpose the text of a receipt, for example, on the video. You can also use data from the receipt—such as product name, price, or transaction amount—to quickly search the recorded footage. It really offers a full picture of what’s happening at the checkout, and could be used to reveal violations that would be almost impossible to detect via conventional video surveillance.

How is AxxonSoft working with other companies in the industry?

We’re in constant cooperation with Intel, as well as with other software and hardware manufacturers. Intel processors are at the core of most security-system servers our clients use. AI video analytics are very resource-intensive, so hardware AI acceleration is crucial for building cost-effective solutions. We use the OpenVINO toolkit from Intel for computer-vision applications; this maximizes performance by extending workloads across Intel hardware, including accelerators. Our AI analytics can run on both Intel processors and accelerators.

We also apply Intel® Quick Sync Video technology, which is available on Intel processors with an embedded GPU. It provides hardware acceleration for video decoding. AI analytics are not the only processes with heavy compute requirements; video recording can be a demanding task, too. So we use Quick Sync on both the server and the client side.

We also collaborate with IP-camera manufacturers to support embedded video analytics and other advanced capabilities, such as smart codecs. “Embedded video analytics” refers to cameras that run their own AI detection, and that kind of detection integrates perfectly with our own. AxxonSoft is a contributing member of ONVIF, an open industry forum that provides and promotes standardized interfaces for the interoperability of IP-based physical-security products, and we strive to support the newest standards as they appear.

AxxonSoft is an extremely partner-oriented company, and many integrations and functionality enhancements have been made based on partner requests and their specific project requirements. We’re always open to listening to our partners and customers, making for the most suitable solutions for a wide range of industries and applications.

Related Content

To learn more about unified security, listen to the podcast Why Unified Security Solutions Matter: With AxxonSoft. For the latest innovations from AxxonSoft, follow them on Twitter at @AxxonSoft_EN and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Software-Defined Processes: The Future of Manufacturing

Industry 4.0 is on a fast path to power manufacturing through open software-defined processes—unlocking previously siloed data and enabling more agile operations.

Several technologies are helping this transformation along. Key among them is industrial edge AI, designed to solve specific problems and optimize manufacturing outcomes. AI at the edge leads to “incredible opportunities to transform manufacturing for the better,” says Steen Graham, CEO of Scalers.ai, a company that delivers such solutions to a variety of industries.

The technology is a boon to manufacturing in many ways: AI is a data-heavy enterprise, so conducting AI inference at the edge allows manufacturers to discard a lot of irrelevant data. “Not sending that data to the cloud is a huge value from an economic perspective,” Graham says. In addition, manufacturers can exploit the near real-time aspect of edge AI for a variety of use cases—from defect detection to monitoring production line numbers in real time.

While Scalers.ai takes care of the custom AI software for various manufacturing efficiencies, a partnership with Dell Technologies gives it the performance it needs for reliable and rapid edge AI deployments. The ruggedized PowerEdge XR platform checks off all the specifications needed, says Manya Rastogi, Technical Marketing Engineer at Dell Technologies. The PowerEdge XR has a short-depth form factor and has passed rigorous tests for shock, vibration, and dust. “It can tolerate harsh manufacturing conditions,” Rastogi points out.

Increasing the Velocity of Edge Deployments

Tolerating a wide range of manufacturing conditions is great, but for edge AI to be effective, the industry cannot afford to reinvent the wheel with bespoke AI models for every use case. Instead, transfer learning or applied AI helps developers add to existing recipes.

Training robust machine learning models from scratch—models with hundreds of millions, if not billions, of parameters—is not always practical. Much like the meal line at a pick-your-toppings fast-casual restaurant, manufacturers can start with a baseline “rice and beans” model, add custom layers, “and conduct a transfer learning workflow or retraining exercise to add just that custom capability,” Graham says. “We’re building on the incredible foundation of models that exist today, and users can customize these models to their domain, their use case, and their implementation.”
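
That “rice and beans plus toppings” workflow is standard transfer learning, and a minimal version is short enough to sketch: freeze a pretrained backbone and retrain only a new head. The model choice, the two-class “good versus defective” head, and the fake data below are assumptions for illustration, not Scalers.ai’s pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze the pretrained "rice and beans"; retrain only a small custom head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                         # keep the foundation intact
model.fc = nn.Linear(model.fc.in_features, 2)       # new head: good vs. defective

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on fake data; a real loop iterates a DataLoader.
images = torch.randn(8, 3, 224, 224)                # stand-in for part photos
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```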

Such a shortcut not only saves engineering and developer resources, it also helps enterprises deploy edge AI models faster. When hungry customers knock at the door, fast turnarounds for new edge AI deployments can make a big difference.

Software-defined manufacturing gives enterprises “the ability to change their environment on the go to meet customers’ ever-evolving tastes,” Graham says.

Using #EdgeAI also helps decrease #manufacturing downtime through telemetry #data and through machine-to-machine protocols like OPC-Unified Architecture (OPC-UA). @DellTech via @insightdottech

The Future of Manufacturing: Near Real-Time Monitoring

Enterprises not only increase revenue by being the first to meet consumer demand, they also do so with optimized expenditure of time, money, and raw materials.

Here, too, rugged edge AI computing delivers, Rastogi says. Take the manufacturing of impellers, a rotating component used in many industrial processes. Layering on computer vision allows for defect detection in near real-time at the edge, a process that can work at any stage of the part’s manufacturing process. Adding AI-driven defect detection at the molding stage, for example, helps catch defects early before any additional materials or resources are wasted on a faulty end product. Because the program catches problems in real time, corrective action can be taken immediately, decreasing time-consuming and expensive postproduction autopsies.

Using edge AI also helps decrease manufacturing downtime through telemetry data and through machine-to-machine protocols like OPC-Unified Architecture (OPC-UA). OPC-UA helps by relaying what machines on the production line are actually doing, using unlocked data rather than human intervention to gain critical insights. Edge AI workloads applied to data communicated from existing machines are a great example of how manufacturers can squeeze more transparency from previously opaque machines and production lines.

For example, if the tower lights on a production line consistently blink red, it can signal a problem that needs to be checked right away. And because edge AI can tell there’s a problem as it unfolds, floor managers have a better chance of fixing problems and meeting daily production quotas.
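
Reading that kind of signal over OPC-UA takes only a few lines with an open-source client such as FreeOpcUa’s python-opcua. The endpoint URL and node ID below are placeholders, since every PLC exposes its own address space.

```python
from opcua import Client   # pip install opcua

client = Client("opc.tcp://line-plc.local:4840")              # hypothetical endpoint
client.connect()
try:
    tower_light = client.get_node("ns=2;s=Line1.TowerLight")  # hypothetical node ID
    state = tower_light.get_value()
    if state == "RED":
        print("Line 1 needs attention now")   # surface the problem as it unfolds
finally:
    client.disconnect()
```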

Dell’s PowerEdge XR platform incorporates 4th Gen Intel® Xeon® Scalable processors, which allow CPU inference without the need for additional accelerators, Rastogi says. “What’s really unique about the Intel 4th Gen Xeon processor is that it includes instruction set optimizations for AI that you don’t see readily available in an off-the-shelf, general-purpose processor,” Graham says.

“In recent years, the ability to implement deep learning via applied AI has made deep learning more achievable for developers to deploy in manufacturing and even with small data sets,” Graham says. “That has been a game changer. Getting the hardware needed to run these models in a rugged small form factor is also a step forward in making the deployments easier,” he adds. It’s paving the way for a custom AI-driven future of manufacturing.

Which is good news for manufacturers—and their demanding consumers.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transform Offline Retail with AI-Powered Digital Signage

Retail’s digital transformation has long been underway with the advent of online shopping. In this digital storefront, consumers have benefitted from personalization, a wealth of product information at their fingertips, and the convenience of one-click purchases. On the business side, it has provided companies with cost-effective customer acquisition, unparalleled insights into their sales funnel, and the ability to perform laser-focused marketing.

But one thing these digital transformation efforts have missed is the inclusion of brick-and-mortar locations. And in a time when customers’ expectations are higher than ever—thanks to their experience with online shopping—offline retail stores lag far behind.

“In online stores, retailers need to have audience measurement, targeted marketing, and the ability to offer the exact right product at the right time—or the customer is gone,” says Alex Rekish, Chief Technical Officer at DISPL, a developer of digital signage and visitor analytics software. “But in offline stores, none of that exists. That lack of measurability makes the offline sales process expensive and inefficient—and results in an underwhelming in-store experience for customers.”

As a result, brands and retailers are eager to find new ways to help their offline stores catch up to their online counterparts. And fortunately, with advancements in computer vision, AI-powered digital signage solutions are now possible.

#AI-powered #DigitalSignage has become a powerful tool for #DigitalTransformation because of its ability to combine several cutting-edge #technologies into a single, easy-to-manage platform. @Displayforce1 via @insightdottech

Edge-Enabled Solutions Deliver Insights and Help Optimize Sales

AI-powered digital signage has become a powerful tool for digital transformation because of its ability to combine several cutting-edge technologies into a single, easy-to-manage platform. DISPL’s smart retail solution was showcased at the 2024 Integrated Systems Europe event, one of the largest events for AI and systems integrators in the world, and the display helped attendees better understand its components:

  • Smart media players that control in-store displays. The cross-platform player software can be installed on DISPL hardware or any media player device that broadcasts content such as display panels, LEDs, and so on.
  • Computer vision processing that provides insight into customer behavior. USB cameras capture images of shoppers, and an AI-powered algorithm at the edge extracts insights into customer demographics, shopping behavior, and engagement, then shows relevant, targeted content to individual customers in real time through in-store displays (a sketch of this edge loop follows this list). In regions with strict data privacy laws, using computer vision and AI at the edge also means that collecting sensitive personalized data—or sending it elsewhere for processing—is unnecessary.
  • A web-based CMS (cloud or on-premises) to help retailers manage content and hardware network, and view collected data. The CMS enables custom content to be displayed to different shoppers or at different times and locations. It also offers a dashboard that lets decision-makers view demographic information, conversion metrics, and customer engagement data to optimize in-store displays and layouts for increased sales.
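
To make that edge loop concrete, the logic sketched below classifies an anonymous audience profile on-device and maps it to a playlist. The demographic classifier and the content rules are invented for illustration and are not DISPL’s software.

```python
import random

CONTENT_RULES = {
    ("female", "18-34"): "sneaker_campaign.mp4",
    ("male", "35-54"): "smart_home_promo.mp4",
}
DEFAULT_CONTENT = "brand_loop.mp4"

def classify_audience(frame):
    """Stand-in for an on-device demographics model; nothing leaves the player."""
    return (random.choice(["female", "male"]),
            random.choice(["18-34", "35-54", "55+"]))

def pick_content(frame) -> str:
    """Choose targeted content for the shopper currently in front of the display."""
    return CONTENT_RULES.get(classify_audience(frame), DEFAULT_CONTENT)

print(pick_content(frame=None))   # e.g. "sneaker_campaign.mp4"
```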

The combination of these technologies helps retailers and brands transform store displays into powerful, smart tools capable of obtaining customer insights, increasing sales, and improving the in-store experience for shoppers.

AI Digital Signage with Rapid Improvements

Once deployed, AI digital signage can quickly improve sales—and the customer experience. And believe it or not, the reason is that most brands and retailers are already familiar with these “new” capabilities, since they’ve been using them online for years.

Because of this, their marketing and sales teams know very well how to use digital technology to serve targeted messaging, optimize displays based on customer profiles, and A/B test media. What’s new is that, for the first time, these capabilities are also available to brands in their physical locations. The fact that companies already know how to use these tools online means that offline improvements can happen virtually overnight.

Case in point is DISPL’s experience with a pair of electronics retailers in Europe.

Cyprus-based Acean and Kotsovolos, the largest Greek consumer electronics retailer, were both looking for ways to implement their online business processes in their offline stores. The companies shared similar goals. Acean wanted to understand its offline funnel and audience and optimize in-store marketing to boost sales. Kotsovolos wanted to use in-store screens for analytics and to display targeted messaging to individual shoppers according to their demographics.

DISPL worked with Acean and Kotsovolos to set up customized deployments of their AI digital signage solution. The results came quickly. Acean turned 44% of their visitors into engaged leads—resulting in a 15% sales increase. Kotsovolos, for its part, saw significant growth in visitor engagement. The company was so impressed that it scaled up its initial trial deployment from 16 to 100 screens after a few weeks—and says the experience helped it transform its thinking about targeted messaging and the role of the physical store.

To make these improvements and help bring its product to market quickly and effectively, DISPL used Intel computer vision technologies.

“Intel® Distribution of OpenVINO Toolkit helped us train our neural networks, optimize algorithms, and shorten time to market by speeding up development,” says Rekish. “In addition, Intel software development tools help us roll out new features faster if our customers ask for them.”

AI, Digital Signage, and the Future of Retail

In the years ahead, DISPL and other AI digital signage providers will have plenty of development work and feature requests to handle—because the platforms they offer are just getting started, according to Rekish.

For one thing, there will be AI digital signage opportunities for systems integrators and solutions developers in other verticals and industries: from smart cities and museums to banking, security, and logistics.

In addition, there will be further enhancements within the retail space. DISPL, for example, is already looking for ways to use AI to automate A/B media testing and leverage computer vision to simplify point-of-sale age verification for restricted products.

In short, the possibilities of AI digital signage are enormous for solutions providers, SIs, and brands—both inside and outside the retail space.

“AI digital signage solutions can be implemented in so many ways that the technology is going to spread far beyond retail,” says Rekish. “Offline will become more like online in the future—offering cost savings, providing easier customer experiences, and making our physical world more interactive than ever before.”

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

This article was originally published on June 7, 2023.