Edge IPCs: Powerful Platform for Smart Traffic Management

Smart traffic management solutions help improve traffic flow, enhance commuter safety, and meet sustainability goals. Now, the integration of these solutions with edge AI and computer vision is unlocking even more benefits.

Computer vision and edge analytics allow traffic management systems to perform a multitude of functions: high-speed vehicle identification, real-time traffic analytics, roadside device management for improved flow and safety, and automated traffic control processes to eliminate delays.

But despite their many advantages, implementing these solutions can be challenging.

“It’s difficult to find a computing platform that can handle complex computer vision workloads at the edge and function well under harsh environmental conditions,” says Colin Cheng, Product Manager at Shenzhen Jhc Technology Development Co., Ltd (JHCTECH), an industrial computing specialist. “In addition, solutions designed for highly specific use cases cannot always be adapted to new ones or modified post-deployment. This creates high infrastructure investment and ongoing maintenance costs for governments.”

Fortunately, a new breed of industrial PCs (IPCs) offers a way forward. Based on ruggedized hardware built for performance at the edge and leveraging powerful remote management tools, edge IPCs provide a flexible computer vision platform for the smart traffic management solutions that our world needs.


Computer Vision and Edge IPCs Make Electronic Toll Collection Smarter

An example of how edge IPCs enable complex computer vision processing that solves traffic management problems is their use as a platform for electronic toll collection (ETC) systems.

As every commuter knows, toll booths on highways, bridges, and tunnels are a common cause of delays. The result is headaches for drivers and an increase in harmful CO2 emissions caused by idling vehicles. Toll collection sites are ideal candidates for an upgrade to automated, computer vision-based ETC solutions that eliminate the need for time-consuming manual fare collection (Figure 1).

Image of three toll gates that show vehicle detection from the front, side, and via license plate recognition.
Figure 1. Edge IPC-based ETC uses computer vision to automate fare collection at borders and checkpoints. (Source: JHCTECH)

With a sufficiently powerful computing platform in place, computer vision and AI can be leveraged to fully automate toll collection. JHCTECH’s ETC solution is a good example of how it works:

  • Cameras and sensors gather real-time vehicle data for video processing even when vehicles are moving at high speeds.
  • Vehicle-specific fare data is calculated at the edge and then forwarded to a centralized Ministry of Transportation back-end server for billing.
  • Device control at the toll collection site enables full automation: IP cameras and roadside units establish wireless links with passing vehicles and coordinate traffic signals, warning signals, and barrier gates.
  • Remote management technology enables simple and cost-effective system maintenance.
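The edge-side flow of steps like these can be sketched in a few lines of Python. This is a simplified illustration only: the vehicle classes, toll rates, and billing payload format below are assumptions, not details of JHCTECH's actual implementation.

```python
import json
import time

# Hypothetical per-class toll rates (currency units per 10 km);
# real rates are set by the toll operator.
TOLL_RATES = {"car": 2.50, "bus": 6.00, "truck": 9.00}

def calculate_fare(vehicle_class: str, distance_km: float) -> float:
    """Compute a distance-based fare at the edge for one detected vehicle."""
    return round(TOLL_RATES[vehicle_class] * distance_km / 10, 2)

def build_billing_record(plate: str, vehicle_class: str, distance_km: float) -> str:
    """Package the fare as JSON for forwarding to the central back-end server."""
    record = {
        "plate": plate,
        "class": vehicle_class,
        "fare": calculate_fare(vehicle_class, distance_km),
        "timestamp": int(time.time()),
    }
    return json.dumps(record)

# A vehicle classified as a truck is billed for a 42 km segment.
record = build_billing_record("ABC-123", "truck", 42.0)
```

The key design point is that the fare is computed locally, so only a small billing record, not raw video, needs to travel to the central server.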

A major advantage of the solution is that it is based on a durable and flexible industrial PC. The solution’s IPC thus offers a number of important benefits in its own right, because it is:

  • Configurable to meet a wide variety of use cases, including border gantry stations, entrance and exit points at non-gantry toll collection sites, and so forth.
  • Built from hardware components that can withstand exposure to the elements and ensure operational stability throughout a wide temperature range.
  • Able to support multiple I/O interfaces, allowing the system to be easily configured to accommodate new types of lane devices and sensors.

The result is a comprehensive ETC solution that is powerful, stable, and rugged enough to perform complex visual processing workloads at the remote edge—and flexible enough to accommodate multiple use cases and future integration with other technologies if needed.

Intel technology played a crucial role in building a performant, customizable solution.

“Intel processors excel at handling edge computer vision workloads, and their processor line is extensive, so there’s always going to be an option to suit whatever performance requirement you need,” says Cheng. “The remote management capabilities of the Intel vPro® platform and the hardware-based Intel® Active Management Technology (Intel® AMT) are also a great help in bringing our smart traffic management solution to market.”

Immediate Benefits and Infrastructure for the Long Run

JHCTECH’s deployment with China’s Ministry of Transportation shows how effectively edge analytics and computer vision can be wielded by governments and businesses to help solve traffic management challenges.

The company collaborated with China’s Ministry of Transportation to replace several legacy province-border physical toll stations with their IPC-based ETC solution. The results were striking. The average time for a passenger vehicle to clear the upgraded provincial border station dropped from 15 to 2 seconds. For cargo vehicles, the improvement was even more dramatic, falling from 29 to 3 seconds on average.

Best of all, the flexibility of these solutions means that they deliver immediate benefits and offer an excellent foundation for developing traffic management infrastructure over the long term.

“ETC is only the beginning. The solutions currently being deployed provide the infrastructure to support additional upgrades in the future,” says Cheng. “We’re already looking at capabilities like vehicle-infrastructure integration systems to warn drivers of road hazards and data analytics tools to help traffic engineers better understand and control traffic flow.”

In other words, transportation authorities that deploy these systems today are not just solving their most urgent traffic management challenges—they’re also positioning themselves to address far more complex ones down the road.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Machine Vision Solutions: Detect and Prevent Defects

Reducing quality defects—and the effort and costs they involve—is one of the biggest challenges facing the manufacturing industry today. Having to reengineer, rework, or even refund products because they don’t work as expected, or don’t meet an acceptable quality standard, can have a huge financial impact on a business’s revenue (up to 40%). That’s why many look for ways to prevent defects from making their way out the door.

But it’s more than just identifying defects. The rise of Industry 4.0 puts pressure on manufacturers to make their factories smarter. To be successful and remain competitive, they need to uncover ways to prevent defects from even happening in the first place, and that requires an understanding of why and where quality issues happen.

Many have turned to machine vision solutions to provide defect detection, but until recently these systems were difficult to deploy, maintain, and scale—and rarely went beyond simply detecting anomalies.

Luckily, vision solution provider Eigen Innovations offers software and services designed to get users as close to zero-defect manufacturing as possible.

“It’s about detecting and preventing defects, but also leveraging process data to help manufacturers understand more about what’s happening within their process while they are occurring,” says Jonathan Weiss, Chief Revenue Officer at Eigen Innovations.

Equipping Manufacturers with Smart Vision

Eigen does this by focusing on interoperability first and foremost. The company develops solutions that integrate directly into PLCs and support just about any industry-standard camera or sensor hardware available so manufacturers can easily get machine vision systems up and running.

Its intuitive user interface allows manufacturing companies to design and manage customized vision systems that can perform in-line quality inspection in real time, ensure presence of parts and components, optimize processes, and streamline root cause analysis for defects.


For example, when a large global pulp and paper manufacturer struggled with quality control for its large rolls of high-gloss paper and laminate coating, it turned to Eigen Innovations to implement a machine vision system.

“They were having an issue related to coating buildup that caused streaks in their specialty, high-gloss paper product,” says Weiss. The company had no way to verify that the coating was applied evenly. Weiss adds, “If it was not, even for only 8-10 seconds, it caused unplanned downtime and stopped the equipment from functioning.”

By applying a smart vision system with the help of Eigen, the paper manufacturer was able to spot patterns within the laminate application process and identify areas where buildup occurred. By understanding the root cause of the buildup and receiving real-time alerts when the issue started to occur, the manufacturer was able to save well over $1 million a year, Weiss explains.

“A vision system needs to be able to identify a defect—and do something about it,” Weiss says (Video 1). “Because it can communicate to the control network, our solution allows manufacturers to receive real-time alerts and trigger automated responses when issues are detected.”
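The detect-then-act pattern Weiss describes can be reduced to a small sketch: an inspection result drives both an operator alert and an automated command to the control network. All names and thresholds here are illustrative, not Eigen's actual API.

```python
def inspect(measurement: float, limit: float) -> bool:
    """Return True if the measured value (e.g., coating thickness) is out of spec."""
    return measurement > limit

def handle_frame(measurement: float, limit: float,
                 alerts: list, commands: list) -> None:
    """On a defect, queue an operator alert and a control-network command."""
    if inspect(measurement, limit):
        alerts.append(f"Defect: {measurement:.2f} exceeds limit {limit:.2f}")
        commands.append("STOP_LINE")  # automated response, e.g., sent to a PLC

alerts, commands = [], []
for reading in [0.8, 0.9, 1.4]:  # simulated per-frame sensor readings
    handle_frame(reading, limit=1.0, alerts=alerts, commands=commands)
```

The point of the pattern is that detection and response live in the same loop, so the system does not merely log a defect—it can stop the line within the same frame interval.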

Video 1. Eigen Innovations’ smart vision for the smart factory captures data that enables manufacturers to go beyond quality inspection. (Source: Eigen Innovations)

Beyond inline quality inspection, real-time monitoring, and process optimization, Eigen can help manufacturers deal with the demands of regular required inspections.

For example, an automotive OEM that manufactures plastic components can produce upward of 15,000 parts per week, per plant. That’s 42,000 points of inspection it is expected to make in each of these facilities. This amount is not only impossible to handle manually, but the types of defects the manufacturer needs to look for—such as problems with weld integrity—are not easily identifiable with the human eye.

Originally, the OEM thought to pull random samples and perform destructive testing to check component integrity, but this resulted in unnecessary waste and rework, and could not guarantee all defects would be caught before shipping product to a customer.

“Ultimately, they needed an automated way to guarantee the quality and volume every week,” Weiss says.

By partnering with Eigen, the OEM created a solution that leverages thermal cameras to capture various views of the weld process. Those images are then fused together to create a digital twin of the part, and critical process data is mapped to the inspection area, offering real-time insights that the human eye could not provide.
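The fusion-and-verify step can be sketched as follows: readings from several thermal cameras are merged into one per-zone view of a part, then each weld zone is checked against a temperature window. The zone names and limits are invented for illustration; the production system is far more sophisticated.

```python
# Acceptable peak weld temperature window, degrees C (illustrative values).
WELD_WINDOW = (180.0, 240.0)

def fuse_views(views: list) -> dict:
    """Merge camera views by keeping the hottest reading seen for each zone."""
    fused = {}
    for view in views:
        for zone, temp in view.items():
            fused[zone] = max(temp, fused.get(zone, float("-inf")))
    return fused

def verify_part(views: list) -> list:
    """Return the zones whose fused peak temperature falls outside the window."""
    lo, hi = WELD_WINDOW
    return [z for z, t in sorted(fuse_views(views).items()) if not lo <= t <= hi]

failed = verify_part([
    {"seam_a": 205.0, "seam_b": 150.0},  # camera 1: seam_b partly occluded
    {"seam_a": 210.0, "seam_b": 165.0},  # camera 2: better view of seam_b
])
```

Fusing views before verification matters because no single camera sees every seam well; the per-zone maximum approximates the true peak temperature of the weld.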

“Every part is going through a verification process in real time, within seconds or milliseconds,” Weiss explains. “The sheer scale of what they are now achieving was impossible when they were relying only on human eyes.”

Continuously Improving Machine Vision Solutions

Eigen prides itself on providing user-friendly machine vision solutions. Machine operators can help train and label models in real time, ensuring that the solutions gain better accuracy and performance as time goes on.

“It’s so easy to use that we have machine operators and quality engineers doing machine learning without even knowing they are doing machine learning,” Weiss says. “For example, if they see a scratch on a surface that shouldn’t be there, they can flag it, update the model, and the software will recognize similar scratches in the future.”
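The human-in-the-loop workflow Weiss describes can be shown in miniature: an operator flags a frame, the label enters the training set, and the system begins recognizing similar cases. A simple lookup stands in for actual model retraining here; everything below is illustrative.

```python
class LabelStore:
    """Toy stand-in for an operator-trainable defect model."""

    def __init__(self):
        self.known_defects = set()

    def flag(self, defect_type: str) -> None:
        """Operator marks something that shouldn't be there (e.g., a scratch)."""
        self.known_defects.add(defect_type)

    def recognizes(self, defect_type: str) -> bool:
        """After the update, similar occurrences are flagged automatically."""
        return defect_type in self.known_defects

store = LabelStore()
assert not store.recognizes("scratch")  # unseen before the operator flags it
store.flag("scratch")                   # one click on the shop floor
```

The design choice worth noting is that labeling happens at the point of observation, so model accuracy improves as a side effect of normal operator work.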

The company’s multi-layer partnership with Intel allows it to quickly test, validate, adopt, and ultimately introduce machine vision into factories. And with the OpenVINO toolkit, Eigen is not only able to optimize its model development and performance for users but also to work with a variety of different cameras and hardware depending on the use case.

“We have a lot of customers who have already tried vision systems, and the fact that we can use existing hardware is appealing to them,” says Weiss. “They don’t have to go through another large capital expense.”

Machine Vision of the Future

Going forward, Eigen sees machine vision continuing to play a significant role in the manufacturing industry. As manufacturers face labor shortages and the inability to find skilled workers, machine vision solutions will be able to step in and fill in the gaps.

“Vision systems are going to be the eyes of the operators that no longer exist in that workforce,” says Weiss. “Our solution gives the people who are in the factory the tools they need to effectively do their jobs at the highest standards.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

The Business Value of Data-Driven Cultures

It should come as no surprise to anyone in the IoT landscape that data is key to business success. Computer vision has opened a whole new view into business operations, creating a need to collect, manage, and analyze all that data. That, in turn, opens the door to AI that offers insights and can lead to valuable changes. Now the elements are in place for what we might call the data-driven culture of not just one industry but many, from manufacturing to smart cities to dining.

But what exactly does that mean, “data-driven culture”? And does it just create more complexity and additional challenges for businesses and organizations trying to implement it? We talk to two people who know a lot about getting the most out of data-driven culture: Atif Kureishy, Founder of AI and automation retail solution provider Vistry, and Saransh Karira, Head of Engineering at Awiros, a video AI OS and marketplace company (Video 1). They discuss its benefits and challenges, and how a data-driven culture can connect different aspects of a business to create real value.

Video 1. Awiros and Vistry explain the value of creating data-driven cultures and the strategies to successfully implement them. (Source: insight.tech)

When we talk about data-driven cultures, what does that really mean?

Atif Kureishy: Data-driven culture is really about making decisions that are evidence based—decisions that are grounded in the understanding of data coming from your enterprise, and being able to trust that data, analyze it, and derive key understanding from it. Then, ultimately, making decisions that drive strategic advancements and strategic initiatives.

The first generation of data-driven culture was really about data acquisition and data understanding. The second phase of that journey, going on for the last decade or so, was then starting to do prediction on top of that, which introduces a lot of concepts in the machine learning space. And now I think we’re on the third generation with the introduction of large language models, LLMs.

And rather than having very human data science, or data-engineering-intensive activities, now we’re moving towards AI-based systems that tend to be smarter than us. And so how do we share a large corpus of enterprise data with those LLMs in a trustworthy way to make decisions that are informed in the enterprise?

Saransh Karira: At an earlier point, data policies were like an umbrella term for any kind of data. But in the last three to four years we have seen tremendous changes in the landscape, and now people are becoming aware that the amount of data that you give to the system is the amount of precision that you get from the system.

How are computer-vision AI applications making data more valuable?

Saransh Karira: It’s those changes in data policies: They make the data a lot more accessible. The raw data is the first step, then once you have this raw data, applying intelligence on it. But now let’s say you have thousands of hours of data—even when you have access to that data, it’s not really accessible; you cannot sift through it. So that’s where the systems come in—the intelligence systems, the machine learning systems. It’s all changing very rapidly.

And because of that, a lot of infrastructure is being built for integrating a lot of data. I think the value of data is when you can connect a lot of different types of data. So, if you take each data as a dot and then you can connect them together, the sum is more than the parts. A lot of our customers are connecting data throughout their different infrastructures or their different divisions.

One use case—but it extends to a lot of different organizations—is that we work with government extensively, and what we are seeing currently, for instance, is that they are connecting vehicle preregistration with cameras and then with passports. The interconnected data becomes much more valuable than one system that is standing just in a silo.

What kinds of challenges face the businesses you work with?

Atif Kureishy: At Vistry we are focused on the restaurant-hospitality space. It is a very people-oriented business, high velocity and relatively unsophisticated. Those businesses are starting to make a lot more technology investments, but historically that’s not been the case. So any type of capability that gets deployed and scaled across a large number of locations has to be very cost-effective.

And a lot of the things that we are tracking are objects in the kitchen, which makes for a unique environment. For sure, our training infrastructure has to be robust to be able to detect and track and understand the activities that are occurring in that environment.


This is where I think Intel especially has brought a unique value proposition, in the sense that you can run on commodity compute that’s right there in the restaurant. Or potentially deploy next-generation compute and have machine and deep learning models that can run effectively there at the edge. Some of the technologies around OpenVINO and deep learning tools that the Intel group has provided have helped tremendously. So we can run our inference workloads on Intel Atom® tablets, on i7 Tiger Lakes, on the new Alder Lakes very easily, and can optimize runtimes effectively. That’s been incredibly useful for us and for our customers.

How are you creating data-driven cultures and strategies for those businesses?

Atif Kureishy: Let’s take the example of production control—and a restaurant essentially is a mini manufacturing site. In a manufacturing sense, you have measurement of inventory, and you have QA and oversight of work products. And so if you apply that into a restaurant space, imagine that you have orders coming in through digital, through the drive-through, through dine-in. And when those orders get acquired, they get consolidated into a kitchen that needs to essentially manufacture the orders correctly, if you will.

Now one of the areas where AI and ML are coming into play is that you can create a production schedule in a quick-serve or fast-food restaurant where certain products are premade and then held. That is the ideal scenario because it allows food to get out as quickly as possible. So you build and manufacture those menu items most efficiently by predicting how many and what type of inbound orders you’re going to get. This also allows the kitchen to be much more efficient not only from a labor perspective but also from a food-waste perspective.

Another aspect where we’ve been using computer vision is with inventory management—having cameras that can look at a bowl or a pan and do volumetric estimation of how much product is in those pans to help inform that production schedule. And that, from a lean-manufacturing perspective, is sort of like the just-in-time concept. So, modeling demand and then using AI to ensure that the supply is there. That is how the optimization of the restaurant is becoming more data-driven.
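The just-in-time scheduling idea Kureishy describes can be sketched in a few lines: forecast inbound orders from recent history, estimate what is already held from a camera's volumetric reading, and premake only the difference. All numbers and function names are invented for illustration; Vistry's actual demand models are far more sophisticated.

```python
def forecast_orders(recent_counts: list) -> float:
    """Simple moving-average demand forecast for the next interval."""
    return sum(recent_counts) / len(recent_counts)

def portions_on_hand(volume_cc: float, portion_cc: float) -> int:
    """Convert a camera's volumetric estimate of a pan into whole portions."""
    return int(volume_cc // portion_cc)

def premake_quantity(recent_counts: list, volume_cc: float,
                     portion_cc: float) -> int:
    """Premake enough to cover forecast demand, net of what is already held."""
    needed = forecast_orders(recent_counts) - portions_on_hand(volume_cc, portion_cc)
    return max(0, round(needed))

# Last three intervals saw 12, 10, and 14 orders; the camera estimates
# 900 cc of product in the pan at 300 cc per portion.
qty = premake_quantity([12, 10, 14], volume_cc=900.0, portion_cc=300.0)
```

Netting the camera-based inventory estimate against the forecast is what turns a fixed premake schedule into the lean, food-waste-reducing loop described above.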

If we think about what the culture of a restaurant was 20 years ago, it was really reliant on people—managers using their intuition: “I expect a lunchtime rush today. There’s going to be a field trip coming in on top of the usuals, and here’s how I’m going to place people.” And, by the way, there are a ton of restaurants, especially small restaurants and local restaurants, that still run that way. But when you look at the larger brands, they’re absolutely moving to more of this data-driven culture.

I wanted to highlight what the historical culture is in the restaurant, because I think it’s important to understand that, and then to understand how it makes sense that we’re now using data to serve the customer more effectively.

Saransh, what are the use cases you’ve encountered at Awiros?

Saransh Karira: One use case was a deployment with multiple different campuses, and for each campus there were multiple different access points. The original implementation was just to see how many people were coming in and how many of them were visitors—basically, how many of them had access to the site and how many of them were there for the first time. That was the initial use case.

But the customer then used that information to change the configuration of their security personnel depending on where people were—where there was more of a crowd, they added security there and reduced it from the other access points. So that was very interesting to see.

We have also seen a lot of what we can call meta-analytics use cases, especially in retail. For example, our customers can improve store layout and operations by being able to see patterns in foot traffic. Where meta-analytics comes in is to basically generate a heat map to visualize where the footfall is more and where it is less, and depending on that data our customer can change the configuration and placement of products.
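A minimal version of the heat-map step might look like this: person detections (x, y positions on the store floor) are binned into a coarse grid, and high-count cells mark the hot zones that inform product placement. Grid size and coordinates are invented for illustration.

```python
from collections import Counter

def heat_map(detections: list, cell: float = 1.0) -> Counter:
    """Count detections per grid cell of the given size (in floor units)."""
    return Counter((int(x // cell), int(y // cell)) for x, y in detections)

def hottest_cell(detections: list, cell: float = 1.0):
    """Return the grid cell with the most footfall."""
    return heat_map(detections, cell).most_common(1)[0][0]

# Simulated detections: three cluster near the entrance, one far away.
traffic = [(0.2, 0.3), (0.8, 0.1), (0.5, 0.5), (3.1, 2.2)]
hot = hottest_cell(traffic)
```

Rendering the counts as a color map is then just a visualization step on top of this binning.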

What is the value of working with partners like Intel to promote data-driven cultures?

Atif Kureishy: We are very thankful for our partnership with Intel. It takes a village, or it takes a broad ecosystem, to make this all work. I would say it’s around ODMs and OEMs that are providing the Intel base compute, and also working with the systems integration teams that ultimately need to place edge devices and sensors at the locations so that this processing can occur.

And, of course, having a cloud-based infrastructure, we work very closely with AWS. And so Intel is a key part of facilitating the dialogues and interactions with that larger community.

And then, of course, there’s the robust set of tooling and infrastructure that’s provided around OpenVINO. That’s been great for us. It allows us to optimize the types of processing that we’re running on CPU or on the iGPU—integrated GPU. There’s also good support in working with the open-source community and the various deep learning frameworks that are out there. That has been wonderful.

Saransh Karira: With our platform at Awiros, we are trying to create an ecosystem of video-intelligence applications. Basically, it starts with the hardware, it goes to use cases, and then it goes to the marketplace. The hardware is where Intel comes in. And then on top of that there are different use cases that are being developed by different researchers or any of the third-party developers. And on top of that there is a layer of marketplace, which is what is visible to the end customers.

I think at the edge Intel is very cost-effective for us, first of all. And its libraries have helped us a lot in optimizations, be it for inferencing—the actual part where the AI runs—as well as the decoding part of the video, and many other things. Also, the support is very, very wide.

Final thoughts? What does the future of data-driven culture look like for businesses?

Atif Kureishy: We, like everyone else, have gotten on the GenAI bandwagon, if you will, and have really worked extensively with models like GPT-4 for the last several months. A lot of our focus for the first couple of years was generating, let’s call it dark data. How do we apply computer-vision workloads at the edge to create a data stream of physical observations?

And then that data needs to be stitched into a larger baseline or foundation of data that’s coming from the point of sale, coming from inventory-management systems, coming from time-reporting systems. And so we’ve been looking at LLMs to really interact with a larger and broader set of data and make sense of it. The ability to do that very quickly is really fascinating and phenomenal.

So if I were to leave this audience with something, it is that beyond ChatGPT and getting recipes and looking for travel itineraries and generating poems—which I’ve done with my kids, and we have a lot of fun doing that—this new wave of AI really does have big implications for the enterprise, and we’re excited to be a part of that journey.

Related Content

To learn more about creating data-driven cultures, listen to Transform Your Organization with Data-Driven Decisions.


 

This article was edited by Erin Noble, copy editor.

AI Traffic Management: The Road to Sustainable Smart Cities

Every city driver knows the frustration of spending hours in traffic congestion—and the anxiety over safety on crowded, chaotic roads.

City managers and traffic engineers have the same worries plus a few more. “The most urgent challenge is to reduce accident rates through efficient, demand-oriented traffic management,” says Gurur Yildiz, Lead Systems Engineer at ISSD Electronics, a maker of intelligent traffic management solutions. “But there are also important quality-of-life issues to consider as well, such as big-picture challenges like cutting CO2 emissions.”

It’s a difficult situation for everyone involved. But the good news is that AI deep-learning technology and next-generation, high-performance processors have enabled intelligent transportation systems (ITS). These solutions offer an answer to the global challenges of traffic management—and open the door to a safer, more efficient, and more sustainable future.

AI Traffic Management in Action

ISSD municipal customer implementations show how intelligent transportation systems can deliver dramatic results in the most challenging of traffic management scenarios.

The company’s deployment in Konya, Turkey, illustrates how ITS can be leveraged to modernize transportation even in unlikely venues. Konya is a truly ancient place: a site of human habitation since 3000 BCE. Today, it is one of Turkey’s largest cities, with a population of more than 2 million. And that number often swells due to the influx of visitors, tourists, and religious pilgrims to the city’s numerous shrines and archeological sites.

Modern Konya is a beautiful and beguiling mix of old and new. But that has created some serious traffic management challenges. “The urban planning in Konya simply isn’t adequate to the current needs of the city,” says Yildiz. “As a result, there is tremendous congestion during rush hours and around the big mosques and tourist sites.”

ISSD worked with Konya’s municipal authorities to deploy an ITS to alleviate these pain points. They installed a network of smart cameras throughout the city to help manage traffic flow. The cameras can calculate average occupancy and vehicle count in real time, decide which traffic lanes should be given a green light and for how long, and change traffic signals accordingly.
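The demand-oriented control logic described above can be sketched as a proportional split of a fixed signal cycle across approaches, bounded by a minimum green time. The cycle length, bounds, and lane names are assumptions for illustration, not ISSD's actual parameters.

```python
MIN_GREEN, CYCLE = 10, 90  # seconds (illustrative values)

def green_times(counts: dict) -> dict:
    """Split a fixed cycle across lanes in proportion to real-time vehicle counts."""
    total = sum(counts.values())
    if total == 0:  # no demand anywhere: split the cycle evenly
        return {lane: CYCLE // len(counts) for lane in counts}
    return {
        lane: max(MIN_GREEN, round(CYCLE * n / total))
        for lane, n in counts.items()
    }

# Real-time counts from the smart cameras at one junction.
plan = green_times({"north": 30, "east": 10, "south": 5})
```

Recomputing this split every cycle is what lets the busiest approaches absorb rush-hour surges while lightly used lanes never drop below a safe minimum green.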

The results of the new system are impressive. Wait times at Konya’s traffic junctions are down 30%. Carbon emissions have fallen by 40%. In addition, the data insights provided by the system have allowed traffic engineering teams to create detailed simulations of traffic flow and make changes that optimize efficiency.

Another ISSD implementation is in Istanbul. For years, the Ministry of Transportation and the local tollway operator had struggled with a stubborn class of accidents at tunnel entrances and toll booths on the Northern Istanbul Highway. Most frustrating of all: These accidents never should have happened in the first place. They were being caused by drivers of large trucks who didn’t realize that their vehicles were too high to clear a tunnel opening or toll gantry.

ISSD implemented an intelligent transportation system that detects over-height vehicles approaching these critical locations. Smart cameras scan oncoming traffic for potential problem vehicles. If an over-height vehicle is identified, its license plate information is broadcast to the tollway’s overhead electronic display screens to warn truck drivers that they are in imminent danger of crashing and allow them to find an alternate route.
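Once the cameras have estimated a vehicle's height, the warning logic itself is simple: compare against the clearance and broadcast the plate to the overhead displays. The clearance value and message format below are assumptions for the sketch; height estimation from camera data is not shown.

```python
TUNNEL_CLEARANCE_M = 4.2  # illustrative clearance for the tunnel ahead

def check_vehicle(plate: str, height_m: float, display_queue: list) -> bool:
    """If a vehicle is over-height, push a warning to the overhead displays."""
    if height_m > TUNNEL_CLEARANCE_M:
        display_queue.append(
            f"WARNING {plate}: vehicle too high for tunnel ahead, take exit"
        )
        return True
    return False

queue = []
over = check_vehicle("34 TR 456", 4.5, queue)  # a 4.5 m truck vs 4.2 m clearance
```

Addressing the warning to a specific plate is what makes it actionable: the driver knows the message is meant for them while there is still time to take an alternate route.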

The number of over-height vehicle crashes on the highway has fallen from an average of one or two accidents per month to zero incidents for the entire year.

AI and Hardware That Enable Intelligent Transportation Systems

Significant results like these are characteristic of newer intelligent transportation systems, which have managed to overcome many of the limitations of their predecessors.

Legacy incident detection systems used image-processing algorithms wholly dependent on CPUs—a costly and difficult-to-scale approach. In addition, these systems struggled with image-processing accuracy under adverse weather conditions and when using data from pan-tilt-zoom (PTZ) cameras.

Modern ITS solutions rely on VPU-accelerated, AI-enabled automatic incident detection (AID), which is why they outperform older systems at visual processing tasks and tend to be more cost-effective as well.

ISSD’s solution, for example, sends traffic camera data to a centralized server optimized for visual processing. The server is equipped with Intel VPUs built to handle computer vision workloads in parallel. They also run ISSD’s fine-tuned SPECTO visual processing software, which leverages the AI deep learning capabilities of the Intel® OpenVINO toolkit. The system CPUs are freed from inferencing tasks, and control only response behaviors such as sending alerts to drivers and operators.

This combination of AI optimization and workload differentiation makes the overall solution remarkably fast. If an incident is detected, human traffic safety officials are alerted in less than 10 seconds—and automated responses are taken in near-real time via integrations with SCADA systems and roadside traffic equipment.

Yildiz credits ISSD’s technology partnership with Intel with making this type of deep-learning-optimized processing possible: “OpenVINO was a technological breakthrough for us. It has a direct impact on the overall product performance by optimizing and improving the efficiency of deep-learning models we are using in our algorithms.”


Building the Future of Transportation

Intelligent transportation systems deliver impressive results. But just as important, the innovators behind these systems are taking a holistic, forward-looking approach to solutions development. And that bodes well for the future.

ISSD incorporates software masking and anonymization algorithms into its solutions—future-proofing its development work against cybersecurity and digital privacy regulations, both today and in the coming years.

The company is also looking at how to adapt existing technology to other use cases and verticals. “We are working on complementary use cases like electronic toll collection systems and intelligent parking systems,” says Yildiz.

Longer term, ISSD’s R&D teams are laying the groundwork for Cooperative Intelligent Transportation Systems (CITS) that will one day broadcast safety alerts directly to drivers in their vehicles. Remarkably, they’re also preparing for a traffic management future where commercial and private vehicles have taken flight, says Yildiz: “We are currently planning for the coming age of flying vehicles and drones by exploring flight-based logistics and traffic management.”

Intelligent traffic management will usher in a world in which transportation is safer, more efficient, and more sustainable. But despite all the high technology involved, Yildiz expresses the ultimate goal of his company’s work in very human terms: “Intelligent transportation systems can save lives by reducing accidents. And they improve our day-to-day quality of life by making travel more efficient and giving drivers back all those lost hours.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Software-Centric Automation Transforms Process Industry

Proprietary distributed control systems and hardware that run many industrial plants are nearing obsolescence. The replacement of these systems is complex, expensive, and typically requires downtime. In addition to these challenges, workers with specialized automation knowledge are aging out of the workforce.

Automation and digitalization offer an alternative. Plants can be fitted with software-centric systems that provide flexibility, high availability, and resilience while also supporting sustainability goals. One such solution uses open-standards technology for automation by decoupling control software from hardware. Software-centric automation enables manufacturers to respond to market demands, scale as needed, minimize obsolescence, avoid operational disruption, and optimize energy use.

This solution comes from a partnership between Schneider Electric, Red Hat, and Intel. It leverages Schneider’s EcoStruxure Automation Expert (Soft dPAC), Intel’s Edge Controls for Industrial, and Red Hat Ansible.

“We’re looking to transform from an old proprietary embedded controller architecture to software-centric automation,” says Michael Martinez, Leader of Schneider’s Global Distributed Control Systems.

EcoStruxure Automation Expert uses containerization and orchestration to improve usability, lower cost of ownership, and avoid process disruption. “By leveraging these technologies, we are making the software agnostic to the hardware platform it runs on. We can actually load the control application at a different location or on a different server or even in a different compute capacity. It’s a new way of thinking about automation, and that’s what is going to actually provide us with the resilience and the flexibility that users require,” Martinez says.

The approach ensures continuous operations of processes with zero production interruptions. “Most of our customers operate in what we call a continuous process facility, where they can’t shut down,” Martinez says. Interruptions in power generation could lead to a blackout. In a refinery or chemical production plant, they could cause explosions or spills.

Software Orchestration for Continuous Operation

Because of this need for continuous operation, replacing technology at process manufacturing plants becomes complex, time-consuming, and costly. Servicing, maintaining, and modernizing proprietary systems, Martinez says, “often require significant outages and turnarounds.”

Red Hat’s Ansible orchestration capabilities are a key part of the overall solution. The new automation solution handles any repetitive and tedious tasks, such as loading software onto a machine or shifting a workload to a different location. “If we have an issue with one of the devices, we use the orchestrator to redeploy the process control application to another healthy device,” Martinez says. This way, workforces can better focus on more innovative activities.
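As a rough illustration of what the orchestrator does in this failover scenario, the sketch below reassigns workloads from failed devices to healthy ones. This is a hypothetical Python model of the concept, not Schneider’s or Red Hat’s implementation; in practice, Ansible playbooks and inventories drive the actual redeployment, and the device and workload names here are invented.

```python
def redeploy_on_failure(workloads, devices):
    """Reassign workloads running on unhealthy devices to healthy ones.

    workloads: dict mapping workload name -> device it currently runs on
    devices:   dict mapping device name -> True (healthy) / False (failed)
    Returns a new workload placement dict.
    """
    healthy = [d for d, ok in devices.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy devices available")
    placement = {}
    for workload, device in workloads.items():
        if devices.get(device, False):
            # device is healthy: leave the workload where it is
            placement[workload] = device
        else:
            # pick the healthy device currently carrying the fewest workloads
            placement[workload] = min(
                healthy,
                key=lambda d: sum(1 for v in placement.values() if v == d),
            )
    return placement


# Example: "plc-1" fails, so its control application moves to "plc-2"
new_placement = redeploy_on_failure(
    {"pump_ctrl": "plc-1", "tank_ctrl": "plc-2"},
    {"plc-1": False, "plc-2": True},
)
```

The least-loaded tie-breaking is one simple policy; a real orchestrator would also weigh CPU headroom, network locality, and timing constraints before moving a control application.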

Software-Centric Automation Shortens Learning Curve

Replacing proprietary systems takes a detailed plan and meticulous execution to prevent any interruptions during production. Control systems from the 1980s employed proprietary programming languages that were difficult to translate and required special knowledge. Many companies still use them today.

“It enables a #workforce to be more versatile as deep proprietary knowledge of an #automation system is not required anymore; we are driving to an outcome-based solution” — Tina Volkringer, @SchneiderElec via @insightdottech

It’s a significant pain point addressed by the software-centric approach, says Tina Volkringer, Schneider’s Vice President of Process Automation. “It enables a workforce to be more versatile as deep proprietary knowledge of an automation system is not required anymore; we are driving to an outcome-based solution. We are aiming to deliver plug-and-produce functionality.”

This plug-and-produce approach solves another problem: the pool of qualified talent able to run legacy equipment is quickly dwindling as workers retire.

“What we’re talking about is a more open standards-based language that is well known by most automation engineers,” says Andre Babineau, Schneider’s Director of Strategic Initiatives. “This way, they can contribute immediately to the value of their process without having to go through some intermediate translation to some proprietary system, proprietary language, or a set of tools that you have to use to control your process.”

Scalability is another benefit. Replicating processes can be a challenge for operators, requiring additional controllers and infrastructure. But EcoStruxure Automation Expert “simplifies the process of replicating a tank, pump, or other processes with minimal effort,” Martinez says. The solution is driven by a systems approach: the application is written to optimize yield, and the hardware it runs on is selected only as the final step.

The solution from Schneider, Red Hat, and Intel has the potential to transform process automation, setting the stage for future developments. Leveraging orchestration, open standards, and partnerships enables companies to build an automation solution that minimizes interruptions, lowers cost of ownership, and reduces the impact of obsolescence. It is also a path toward fully autonomous production facilities. While Martinez doesn’t foresee complete autonomy in the near future, he envisions a future where a software-centric automation system works side by side with humans to drive new levels of efficiency, flexibility, and resilience.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Applying Industrial AI Models to Product Quality Inspection

As a new car moves through the final stages of assembly, inspectors check every inch to spot any inconsistencies. Anything from chipped paint to wheel defects to irregular car engine sounds can tarnish the final product. Traditionally, these inspections are performed manually, but now workers can get much-needed assistance from artificial intelligence.

No matter how good an inspector is, humans miss details. Factory floors are noisy, busy places that can create distractions. Repeating the same tasks for hours can also cause the mind to wander, leading to errors. But this isn’t a problem for AI, which leverages cameras, microphones, and sensors in its unrelenting search for perfection on the assembly line.

“Visual inspection is really a tedious job. When you work in an industrial environment, your quality of work may degrade over time in a noisy environment. With AI, you can automate the process,” says Marcin Rojek, Cofounder of byteLAKE, the developer of Cognitive Services, a set of Industry 4.0-focused AI models that handle quality control.

byteLAKE’s Cognitive Services exists to deliver actionable information that enables operators to make better decisions, says Rojek.

Unlike most industrial AI solutions, byteLAKE goes beyond computer vision alone. The company also applies AI models to sound analysis and infrastructure monitoring. Leveraging microphones and other sensors, byteLAKE’s Cognitive Services can detect temperature, humidity, and vibrations to monitor equipment, helping optimize service delivery and prevent failures.

Cognitive Services Turns Data into Insights

When Rojek and his friend and business partner Mariusz Kolanko cofounded byteLAKE in 2016, they wanted to tackle the issue of what to do with all the data industrial organizations capture. Many struggle with just how to use it.

“We wanted to turn AI into a tangible solution for industrial cases. We combine the data and transform it into information, answering questions like, ‘What will happen, what will likely happen, why something happened, where’s the fault, what’s the error, and what’s the root cause?’” Rojek says.

This is possible by putting data from different sources in the proper context.

“Visual inspection is really a tedious job. When you work in an #industrial environment, your quality of work may degrade over time in a noisy environment. With #AI, you can automate the process” – Marcin Rojek, @byteLAKEGlobal via @insightdottech

In manufacturing, computer vision algorithms analyze and interpret images captured by cameras along the line. The models can be trained to understand certain images and detect defects such as scratches, dents, and missing holes.
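To illustrate the idea behind camera-based defect detection, the sketch below flags pixels that deviate from a known-good reference image. byteLAKE’s actual Cognitive Services models are trained deep-learning networks; this thresholded pixel difference is only a simplified, hypothetical stand-in for the concept, and the threshold value is an assumption.

```python
def find_defects(reference, frame, threshold=30):
    """Return (row, col) positions where the frame deviates from the reference.

    reference, frame: 2D lists of grayscale values (0-255), same shape.
    threshold: minimum absolute brightness difference to count as a defect.
    """
    defects = []
    for r, (ref_row, img_row) in enumerate(zip(reference, frame)):
        for c, (ref_px, px) in enumerate(zip(ref_row, img_row)):
            if abs(ref_px - px) > threshold:
                defects.append((r, c))
    return defects


# Example: a single bright spot (e.g., a scratch catching the light)
reference = [[100, 100], [100, 100]]
captured = [[100, 100], [100, 180]]
flagged = find_defects(reference, captured)
```

A trained model additionally learns which deviations matter (a scratch) and which do not (normal lighting variation), which is exactly what plain differencing cannot do.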

In car manufacturing, microphones capture the pitch and roar of engines to determine if they run properly. It’s another area where human limitations can get in the way. “When you listen to tens of car engines in a factory facility with all the noise in the background, which is constantly changing, then your quality of inspection might degrade,” Rojek says.

And to ensure that all the information is collected in real time and stays at the factory level, the technology runs securely at the edge. This lets users process the data close to where it is produced, helping them overcome bandwidth and intermittent connectivity issues.

byteLAKE is also using computer vision in the food service industry to reduce wait times at self-serve restaurant checkouts. “The cashier doesn’t have to enter everything manually to the machine because the camera takes a picture and recognizes the items,” Rojek says.

In other environments, such as energy infrastructure, byteLAKE uses a combination of sensors, cameras, and microphones to track conditions such as liquid flow, humidity levels, pressure, and temperature—all of which provide information about the health and performance of pipes, pumps, drives, and other components. This helps optimize operations and resource utilization, reduce waste, and ultimately deliver better service.

“We can predict what will likely happen and suggest the optimal configuration of the energy management system for a whole city—where we need to plan in advance how much energy we should order within the next week based on current consumption, predicted consumption, historical data, weather forecasts, and so on,” says Rojek.
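A toy version of the demand forecast Rojek describes might blend smoothed historical consumption with a weather adjustment. The function below is an illustrative assumption, not byteLAKE’s algorithm; the exponential-smoothing choice and the parameter names are invented for the sketch.

```python
def forecast_next_week(consumption, weather_factor=1.0, alpha=0.5):
    """Exponentially smoothed forecast of next week's energy demand.

    consumption:    list of past weekly consumption figures (oldest first)
    weather_factor: multiplier derived from the weather forecast
                    (e.g., 1.1 for a colder-than-usual week)
    alpha:          smoothing factor; higher values weight recent weeks more
    """
    if not consumption:
        raise ValueError("need at least one week of history")
    smoothed = consumption[0]
    for value in consumption[1:]:
        # each week pulls the estimate toward the latest observation
        smoothed = alpha * value + (1 - alpha) * smoothed
    return smoothed * weather_factor


# Example: consumption has been rising, and a cold snap is forecast
prediction = forecast_next_week([100, 120, 150], weather_factor=1.1)
```

In production, a model would fold in many more signals (predicted consumption, historical seasonality, full weather forecasts) rather than a single multiplier.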

AI in Manufacturing Supplements Humans

While byteLAKE’s Cognitive Services are meant to take over repetitive, mundane, time-consuming, and error-prone tasks, Rojek views the solution as complementary to human work. Customers, he explains, don’t seem concerned about displacing humans because AI is solving problems such as workforce shortages. AI also contributes to worker safety. For instance, cameras and sensors allow humans to stand farther from perilous equipment on the assembly line.

byteLAKE also works with various partners on customer-specific solutions. Partners combine Cognitive Services with their own software and hardware automation to design workflows.

Existing models from previous implementations can be customized for new customers. For example, a paper mill model can be used in another plant by making adjustments for different lighting, production line dimensions, and other specifications.

Intel is an essential partner in making all of this happen. byteLAKE participates in programs such as AI Builders and leverages the OpenVINO toolkit to optimize its solutions and lower development costs.

Going forward, Cognitive Services will continue adding functionality. byteLAKE is working on models that learn on their own so they “can improve over time automatically.” In the near future, Rojek expects models will learn on the fly “to improve the quality of predictions as you progress and generate more data.”

Long term, the company will focus on easier integration with manufacturing software. “We don’t want to reinvent the wheel and we don’t want to change their processes too much on the manufacturing side. We want to supplement their operations and become part of their existing workflows rather than turn everything upside down,” Rojek explains.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Inside the Development of Autonomous Mobile Robots

Are we in for an autonomous mobile robot (AMR) takeover? Not exactly. But we can expect to see more AMRs implemented across industries to streamline production, improve operations, enhance work safety, and increase productivity. That’s because recent technology advancements have made the hardware and software components necessary to build these systems more cost-effective and accessible. And with artificial intelligence becoming more mainstream, it’s now possible not only to create AMRs capable of doing human tasks but also to integrate them into human worker environments.

In this episode, we talk about how all this is possible today, what we can expect in the future, top applications and opportunities for industries, and all the ins and outs of developing, deploying, and implementing AMRs safely and securely.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: congatec and Real-Time Systems

Our guests this episode are Claire Liu, Senior Product Marketing Manager for congatec’s industrial automation and robotic product line, and Timo Kuehn, Systems Architect and Product Manager at Real-Time Systems, a provider of embedded and real-time solutions.

Prior to joining congatec, Claire worked as the AI and robotics Product Marketing Lead at ADLINK Technology and was a Senior Product Marketing Specialist at MOXA.

Timo has worked with Real-Time Systems for more than 17 years as a systems architect, product manager, and software architect. Before Real-Time Systems, he worked as a software engineer at KUKA.

Podcast Topics

Claire and Timo answer our questions about:

  • (2:27) The meaning of autonomous mobile robots
  • (4:59) Implementing AMRs safely on the factory floor
  • (7:33) Developing AMRs to meet real-time demand
  • (11:46) Taking a modular approach when designing AMRs
  • (13:46) Tools and technologies necessary for AMR development
  • (17:37) Biggest use cases and opportunities for AMRs today
  • (21:04) AMR advancements we can expect

Related Content

To learn more about autonomous mobile robots, read IoT Virtualization Jump-Starts Collaborative Robots. For the latest innovations from congatec and Real-Time Systems, follow them on Twitter at @congatecAG and LinkedIn at congatec and Real-Time Systems GmbH.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about autonomous mobile robots, which is an industrial trend I’m happy to finally see gaining more traction in the manufacturing space. So, joining me to tell me more and to talk about this topic we have Claire Liu from congatec and Timo Kuehn from Real-Time Systems. So, before we get started, Timo, I’d love to learn a little bit more about yourself and what you do at Real-Time Systems.

Timo Kuehn: Hi, my name is Timo. I’m a Systems Architect and Product Manager at Real-Time Systems. I’ve been part of RTS since day one, which is 17 years ago. Before joining RTS I was a software engineer at the robot manufacturer Kuka, and now I’m happy to share my knowledge and insight with you.

Christina Cardoza: Yeah, absolutely. Wow, 17 years. So I’m sure you’ve seen this space evolve quite a bit over your time at RTS. Claire, please tell me more about yourself and congatec.

Claire Liu: Hello Christina. Thank you for having me. And hello everyone, I’m Claire, the Product Marketing Manager from congatec. And congatec is a company that provides computer modules for embedded applications. My role in the company is to look at congatec’s products and technology to see how they apply to the robotic market, like in autonomous mobile robots, and to see how congatec can help autonomous mobile robot companies tackle the challenges of a competitive environment with computer-on-module concepts.

Christina Cardoza: Yeah, absolutely. And I’m sure there’s a lot of complexity and challenges in the computer market that, when you’re trying to add these autonomous mobile robots into the factory, that businesses come across. And so I, like I said in my intro, I’m very excited to talk about this topic because I feel like autonomous mobile robots—this is something that we were dreaming about a couple of years ago. You know, something that seems like science fiction, but over the last couple of years or even in the last year or so we’re seeing them become more prevalent in the factory space.

So, Claire, I just want to start this conversation with you talking about, when we say autonomous mobile robots, what are we actually talking about? And why is this, as of lately, taking the industrial space by storm?

Claire Liu: Autonomous mobile robots are robots or robotic systems capable of operating independently, without direct human intervention. Those robots are equipped with various sensors, artificial intelligence algorithms, and a sophisticated control system that enables them to navigate autonomously, to perceive their environment, and to make decisions. And right now, why the industry is increasingly interested in autonomous mobile robots is because of the many benefits they can offer, for example in material-handling tasks.

These material-handling tasks used to be executed manually; workers needed to perform those tasks to pick up and deliver the raw material and the in-process product on the production line. And those tasks are highly repetitive and can pose a risk to workers’ health and safety. Right now, by using autonomous mobile robots to automate the material-handling process and to transport the products on the factory floor, workers no longer have to spend their production time on that manual work. They can focus on highly skilled and more value-added tasks.

So by using autonomous mobile robots in the manufacturing environment, it streamlines the manufacturing process, can increase productivity, can improve operation efficiency, and enhance workers’ safety.

Christina Cardoza: Yeah, that’s great. And you mentioned these are robots that are moving on their own, and you mentioned a couple of different components in there. Obviously the benefits are improving worker safety and allowing workers to focus on more important tasks; instead of doing these repetitive tasks, we can get the robots to do them. But we also see these robots working alongside workers.

And so I’m curious, because we have these robots moving around independently, how do we ensure that they don’t hit other workers, they don’t mess up the production line, they don’t mess up equipment. So how exactly—if you could give us a little bit more insight into how they work. I know you mentioned AI and all of that. So how does this all go into that?

Claire Liu: Yes, of course. Autonomous mobile robots rely on a combination of technologies, including sensors, AI algorithms, motion control, wireless communications, and the computing platform, that enables them to perform tasks autonomously. Autonomous mobile robots utilize various sensors, for example LiDARs or 2D or 3D cameras, to perceive their environments. That sensor data is processed in real time by the computing platform and provides information about the environment. The robot can use this information to create a map, to localize and navigate itself to the destination autonomously, and to perform the task within the environment.

And I’ll just explain: autonomous mobile robots are intelligent machines, and when it comes to developing the next future-oriented autonomous mobile robot, 13th Gen Intel® Core™ processors with congatec’s computer modules are an ideal solution. 13th Gen Intel Core processors combine power efficiency, flexibility, and performance, and deliver boosted computing performance compared to previous generations. It’s a very good solution for a demanding robotic computing platform. And AMRs right now can actually benefit from this latest Intel processor to run more applications simultaneously and handle more workloads and more connected devices.

Christina Cardoza: Yeah, that’s great to hear that you guys are using the 13th Gen Intel processor. I know Intel had just released that. So it’s great to see you guys taking advantage of all the latest technology out there, because, like you mentioned, manufacturers, they have a lot more workloads and they’re very high-intensive for computing, so they want to make sure that they are using the best technology.

Timo, I’m wondering if you can, from Real-Time Systems, if you can also talk about the software architecture that goes into this, and how you help developers and manufacturers meet the real-time demand of these robots.

Timo Kuehn: Yeah, a lot of software goes into AMRs of course; there are various functionalities. Like perception: the robot has to perceive its environment in order to know what’s going on. Localization, to find out where it’s situated at the moment. Path planning: the system is autonomous, so it needs to find out where to move to. The movement itself, the motion control, is very important; obstacle avoidance of course; interaction with humans, depending on the type of robot; and diagnostics.
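The path planning Timo lists can be sketched as a search over an occupancy grid. The breadth-first search below is a minimal illustration only; production AMRs typically use algorithms like A* or D* over continuously updated maps, and the grid encoding here is an assumption made for the sketch.

```python
from collections import deque


def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.

    grid: 2D list where 0 marks a free cell and 1 marks an obstacle.
    Breadth-first search finds a shortest path in steps on a uniform grid.
    """
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # walk the chain of predecessors back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable


# Example: route around a wall occupying most of the middle row
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
route = plan_path(grid, (0, 0), (2, 0))
```

Real planners also replan continuously as perception updates the map, which is one reason the timing requirements Timo describes next are so strict.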

So a lot of software goes into such a system, and those software functions have to be mapped by the corresponding software modules, and often they have very high requirements or even competing requirements in terms of timing and resource usage. Competing requirements means, for example, if one software module needs a lot of performance, while a different software module needs a deterministic response in a timely manner, you cannot just throw everything in and make it work. It is quite complex.

So typically real-time operating systems are used, or operating systems that have real-time capabilities, in order to have priority-based scheduling and to make sure that deadlines are never missed. Critical tasks like perception or motion control can get higher priority so they don’t get interrupted by lower-priority tasks.
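The priority-based scheduling idea can be illustrated with a toy dispatcher: the highest-priority ready task always runs next. This Python sketch only models the concept; a real RTOS implements preemptive scheduling in the kernel, and the task names here are hypothetical.

```python
import heapq


class PriorityScheduler:
    """Toy model of priority-based task dispatch (lower number = higher priority)."""

    def __init__(self):
        self._queue = []
        self._order = 0  # tie-breaker preserves FIFO order within a priority

    def submit(self, priority, name):
        heapq.heappush(self._queue, (priority, self._order, name))
        self._order += 1

    def run_next(self):
        """Pop and return the name of the highest-priority pending task."""
        if not self._queue:
            return None
        _, _, name = heapq.heappop(self._queue)
        return name


# Example: motion control preempts background diagnostics
sched = PriorityScheduler()
sched.submit(5, "diagnostics")
sched.submit(1, "motion_control")
sched.submit(3, "path_planning")
```

The crucial difference in a real RTOS is preemption: a newly ready high-priority task interrupts a running low-priority one immediately rather than waiting for the next dispatch.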

And, especially for motion control, it can be quite challenging. It needs determinism, of course; it needs to react to sensor signals within a predefined time frame. The time frame depends on various factors like: Do we have wheels? Do we have axes? How many axes have to be controlled? What is the speed of the AMR? What precision is needed? Is the device moving in two dimensions or three dimensions? Is the load dynamically added or unloaded? As you can see, there are many different options and it can get quite complex.

Resource allocation and optimization is something that is important and has to be provided by the operating system or the software architecture. And it is necessary to have modular design and component-based development for the separation of different functionalities, which makes it easier to independently develop, test, release, and update the individual modules. Often third-party code has to be integrated, containers are being used, and not to forget time synchronization between the different modules, so we don’t have a lot of overhead or locking and everything works smoothly together.

Christina Cardoza: Great. So it sounds like a lot really goes into all of this. Timo, you mentioned all of these different things to make sure that you have the memory, you have the computing, you have the bandwidth to do all of this. One thing you mentioned that I’m interested in hearing a little bit more about is the different modules that go into this, and taking a modular approach or a concept when developing these next-generation autonomous mobile robots.

So I’m interested in hearing a little bit more about why a modular concept is something that you guys are utilizing. Claire, maybe you can tell me more about it from congatec’s perspective.

Claire Liu: Sure. So, congatec’s computer modules leverage Intel processor technology to scale seamlessly from low power to high computing performance, enabling developers to build robots that work longer, work smarter, and perform complex tasks with higher proficiency and efficiency. Developers can quickly and easily adopt the latest Intel processor technologies through a simple module change and add intelligence to their autonomous mobile robots even after years of operation. Additionally, the Intel OpenVINO toolkit is offered, which provides comprehensive support for developers and for optimized AI inference models. The Intel OpenVINO toolkit simplifies the development of deep learning applications on Intel platforms.

Christina Cardoza: Yeah, and I think that OpenVINO piece of this is extremely important. You know, like you said, making sure that you can add the intelligence and the deep learning models onto these robots. I know OpenVINO, with its latest releases over the last couple of years, they’ve made it very easy, that they really help developers utilize the hardware that’s available to them in the best possible way.

So, we’ve been talking about the AI toolkit OpenVINO, we talked about Intel processor technologies and the 13th Gen Core. So a lot goes into making sure that robots can sense, they can see, they can conduct operations, they can take tasks and orders and things like that. I’m curious, because I’m sure there’s still a lot more that goes into making this possible, Timo, what are the other tools and technologies you’re seeing go into developing autonomous mobile robots? And how can developers take advantage of some of these tools like OpenVINO to really simplify their efforts?

Timo Kuehn: As we just learned, the development of AMRs requires a combination of hardware, software, and connectivity. In terms of hardware, there’s the computing platform, the chassis, the motors, the sensors, and the power systems, and of course which exact sensors and so on are used depends on the requirements of the application.

On the software side: perception, localization, path planning, motion control, obstacle avoidance, interaction with humans, and diagnostics, as we talked about before, play very important roles. And, as you can imagine, integrating and managing all of those functions can get quite complex. You cannot just add a control unit for each of those functionalities, because that has its limitations and of course hits physical limits very quickly.

AMRs are battery powered, so adding a lot of controllers doesn’t make sense. The controllers would need to be connected with each other, which adds weight and increases size, cost, and complexity. So this is why multiple functions must be consolidated on fewer processors.

And here is where an embedded real-time hypervisor can help a lot. Integrating multiple workloads on a single processor has many advantages, isolation and security among them. So, for example, perception and motion control can run in their own virtual machines, securely separated from each other, making sure that when one VM creates a lot of load, the other one is not affected and can still meet its deadlines.

And this is really crucial. Imagine there’s a signal from a sensor and the reaction from the AMR or from the controller comes too late. This can lead to a crash or even to injuries when humans are involved. A hypervisor also helps with performance optimization and load balancing; every VM can get dedicated resources to meet timing and performance requirements.

And the system gets more modular and flexible; you can update and modify various functions individually, especially on Intel processors, because all of the modern Intel processors have virtualization extensions like VT-x and VT-d. They allow for secure separation of resources and high performance at the same time. And, in addition to that, Intel time-coordinated computing, TCC, provides temporal isolation of workloads. This really guarantees determinism, so you can have both deterministic workloads and performance securely separated in space and time on the same processor.

Christina Cardoza: So, I think we’ve done a great job of laying out for developers and businesses all the tools and technology and components they’re going to need to make AMR development successful. What I would love to hear from you guys next is after they develop an autonomous mobile robot, what it actually looks like in the factory. Where are the biggest opportunities for manufacturers, or what are the biggest use cases they’re using AMRs for today? And where do we still have to go? So, Claire, I’ll start with you on that one.

Claire Liu: Yeah, okay. Autonomous mobile robots have proven to be versatile in various industries. Some use cases include the material handling I just mentioned in warehousing and logistics, order fulfillment for e-commerce, and even collaborative assembly in the manufacturing environment. During the pandemic, autonomous mobile robots were utilized for delivering medical supplies and medication and assisting patient care.

Additionally, autonomous mobile robots are now finding more applications in other areas, like agriculture, hospitality, and retail. And the possibilities are extensive as the technology continues to evolve. New use cases are consistently emerging.

Christina Cardoza: Yeah, absolutely. And you make a great point: when I think of autonomous mobile robots I think of them in this manufacturing, industrial setting. But, you know, really there are so many other different industries that can take advantage of this technology. Timo, I’d love to hear from you where Real-Time Systems sees the biggest advantages for AMRs.

Timo Kuehn: So, Claire already mentioned optimizing warehouse operations, where AMRs can conduct inventory audits and optimize storage configurations. There are many more use cases, for example in hazardous environments for inspection of, for example, power plants, reducing the risk for human workers. They can be used in public places to provide real-time video feed. Or, for example, in large facilities they can be used in last-mile delivery to transport packages. They can assist in material transportation, also in construction projects. Environmental monitoring is a good use case for AMRs in order to collect data on air quality, water quality, or soil conditions. Hospitality services has been mentioned before, and they can also assist passengers in transportation, guide them, help with mobility challenges, and so on. So there are really a lot of different use cases, and I think there will be more in the future.

Christina Cardoza: Yeah, I can expect seeing, in the near future too, more of these AMRs just prevalent in our everyday lives. Just a small example is like in my supermarket there’s a little robot moving around the retail store trying to identify hazards and spills, and I think that’s really the first implementation of it that we’ve been seeing. But it’s only going to get smarter, more intelligent, with all these things we’ve been talking about. And soon we’ll probably see them stocking shelves or helping customers in some other ways.

So, Timo, you mentioned a lot of other different use cases that we can expect or that we’re gearing up for. I’m curious what other advancements or what else do you think we can expect in this field over the next couple of years?

Timo Kuehn: Well, it’s of course hard to predict, but I’m sure there will be many advancements in the near future, especially with Intel processors with integrated AI accelerators. So this will lead to enhanced perception and object recognition, more intelligent path planning and optimization, and, of course, adaptive-learning capabilities. What we can also imagine is improved collaboration between humans and robots. You have things like capability of making complex decisions in real time, for example, to assess situations and execute complicated tasks with only a little human intervention.

So, to summarize: the combination of virtualization technology, real-time capabilities, and integrated AI accelerators has a high potential for completely new types of autonomous mobile robots. They will become more intelligent, adaptable, and capable of performing complex tasks with high precision and efficiency.

Christina Cardoza: Absolutely. Well, we are running out of time. I’m sure we’ve only scratched the surface of this conversation of what these AMRs can do, how we can successfully build them, and what we have to look forward to. But before we go, I just want to throw it back to each of you for any final thoughts or key takeaways you want to leave our listeners with today. Claire, we’ll start with you.

Claire Liu: Okay. As Timo mentioned, we can expect more new and exciting possibilities in the field of AMRs in the near future. With technological development evolving rapidly in the robotics area, and with a modular approach to hardware systems and software-architecture design, autonomous mobile robot companies can adapt to the fast-changing environment and bring their cutting-edge solutions to life with great scalability.

Christina Cardoza: Absolutely, Claire. And flexibility—I think flexibility and scalability, like you guys have mentioned, that’s really going to be key, because we are implementing AMRs to meet needs and benefits today. But those—like you said, it’s a fast-changing environment—those needs and those—the technology’s going to advance, those needs are going to change, and we need to make sure that we’re able to develop these systems and future-proof them as we go on.

So I just want to thank you both again for joining us today and for the insightful conversation. I can’t wait to see what else congatec and Real-Time Systems come out with in this space. And I encourage all of our listeners to keep up with them. We’ll put their links in our bio so that you can continue to follow along what’s going on in this space. But, until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

AI Boosts Supply Chain Efficiency and Profits

There is a plethora of possibilities for things to go wrong in logistics operations and supply chains across industries. Take, for example, the energy industry. According to a study from the National Renewable Energy Laboratory, blade failure is the most common breakdown in wind turbines. An average of 40 events per turbine happen each year, causing about an hour of downtime per event, and resulting in costly repairs and lost revenue.

So how do you minimize the risk and maximize profits?

Advances in artificial intelligence can help companies identify where things are going wrong as products are handed off across the supply chain—lessening the chance of failure.

A good example is renewable energy leader Siemens Gamesa, which engaged Relimetrics, an AI-boosted machine vision solutions provider, to inspect its wind turbine blades before they are released to customers.

Using Relimetrics’ AI-based quality automation and non-destructive inspection digitization platform ReliVision, Siemens Gamesa was able to automate the inspection of phased array ultrasonic data, and assess the condition of blades before they’re placed in the field.

“The main challenge of our typical customer is to digitize inspections—which is time-consuming and prone to errors—and improve traceability across their supply chain,” says Relimetrics Founder and CEO Kemal Levi. “With ReliVision, our customers can rapidly implement AI-based machine vision algorithms on their shop floor without writing a single line of code. They can share trained models across inspection points and leverage existing camera hardware irrespective of image modality.”

Gaining Quality Traceability Across the Supply Chain

Traceability of product quality is the most essential piece of information within the supply chain, Levi explains.

“If you think about the supply chain, information is flowing from one location to another at a holistic level,” he says. “There are many parties involved, and it is important for anyone to track quality information at different points along the way.”

Tracing the quality status of parts or products from a multitude of suppliers to a manufacturer is one of those points. As parts move along the supply chain, quality automation helps identify anomalies before they get to the customer and risk downtime.

“For OEMs (Original Equipment Manufacturers), it is really important to be able to trace quality across their suppliers and to run data analytics to see which one is actually performing better,” says Levi. “Then they can weed out those vendors that are not up to par.”

As parts move along the #SupplyChain, quality #automation helps identify anomalies before they get to the customer and risk downtime. @relimetrics via @insightdottech

Improving the Bottom Line with Quality Automation

As manufacturers ship products to their customers, they must identify issues with outbound transportation and logistics.

“A magnifying lens looking at different points of the supply chain will give better visibility to an OEM,” says Levi. “At the end of the day, this is an effort to improve margins. And in the case of the energy sector, margins are razor thin as integrators put pricing pressure on suppliers.”

Maximizing the number of items getting to the end of the manufacturing line that meet the required quality standards has a direct impact on the bottom line. Similarly, in the electronics and automotive sectors, the pressure on prices continues throughout the supply chain. As OEMs put pressure on suppliers to reduce prices to maintain their margins, every item that can be produced with quality—and not be subject to scrap and rework pre-shipment or warranty claims post-shipment—will benefit the top-line revenue, bottom-line costs, and overall profit margins.

Traditionally, the supply chain has relied on manual inspections and sorting. But this process can be labor-intensive and prone to error, compounding the losses. Today, AI-driven quality automation tools like ReliVision can be deployed without requiring any programming skills or prior machine-learning knowledge, offering access to real-time information that can improve efficiency and visibility.

An industrial-grade image and video analytics solution, ReliVision can integrate with any industrial-grade camera hardware and NDT setup to analyze data streams of inspected parts or products as they travel through the supply chain. The data feed is streamed to a connected IT system that leverages Intel® processors and Intel® Movidius Vision Processing Units, digitizing inspection of the product.

If an anomaly is detected, an alert is sent to the operator to pull off the faulty part or product, or fix the issue before shipment.
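The detect-then-alert flow described above can be sketched in a few lines. This is a minimal illustration only: the names (`InspectionResult`, `DEFECT_THRESHOLD`, `triage`) and the scoring scheme are assumptions for the example, not part of the actual ReliVision API.

```python
from dataclasses import dataclass

DEFECT_THRESHOLD = 0.8  # assumed score above which a part is flagged


@dataclass
class InspectionResult:
    part_id: str
    defect_score: float  # 0.0 (clean) .. 1.0 (certain defect)


def triage(results):
    """Split a stream of inspection results into pass and alert lists."""
    passed, alerts = [], []
    for r in results:
        (alerts if r.defect_score >= DEFECT_THRESHOLD else passed).append(r)
    return passed, alerts


# Simulated inspection stream for three turbine blades
stream = [InspectionResult("blade-001", 0.12),
          InspectionResult("blade-002", 0.91),
          InspectionResult("blade-003", 0.47)]

passed, alerts = triage(stream)
for r in alerts:
    print(f"ALERT: pull {r.part_id} before shipment (score {r.defect_score})")
```

In a real deployment the score would come from the vision model analyzing the ultrasonic data feed, and the alert would be routed to the operator's dashboard rather than printed.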

Integrating Supply Chain Data with ERP Systems

ReliVision also integrates with enterprise resource planning (ERP) systems. Correlation of acquired data across the product life cycle—from manufacturing to sales to service—enables continuous business intelligence.

“ERP has the supply chain management component, as well as manufacturing information, project management information, sales and marketing information, service management, business intelligence, and big-data analytics,” says Levi. “The advantage of being able to connect to ERP is that you can derive additional insights to make better supply chain decisions.”

Automation throughout the supply chain is critical. “These are the people who can rapidly make decisions that can save thousands, if not millions, of dollars and improve margins for the company,” says Levi. “In the case of a recall, we are able to document the quality of every product when it actually left the factory floor.”

“At the holistic level, quality automation is about risk analytics to help ensure a good customer experience while lowering operational costs,” he continues. “In the end, a company that can trace quality in real time and do a better assessment on where quality issues originate, can ultimately boost profitability.”

Smart Mirrors Reflect a New Era of In-Store Retail

Typically, when your customers walk into a dressing room to try on a new outfit, they end up bringing an armful of options—different styles, colors, and sizes. What if they could avoid the hassle of searching the racks and having to try on all these choices? With a smart, interactive mirror they can. Powered by AI, smart mirrors can virtually add another layer of customer service by coordinating items, sizes, and colors against real-time inventory. And shoppers can find out if the outfits they want are in stock or available online.

This kind of solution—like the Polytouch Magic Mirror from Pyramid Computer, a full-service hardware solutions provider—is available today. The platform helps retailers offer a unique experience for customer engagement with the benefits of online shopping, combined with the ability to see and touch the items up close and personal.

Personalized Retail Experiences at Brick-and-Mortar Stores Go Virtual

Case in point is a large sport fashion retailer, which created a smart fitting room using the Polytouch Magic Mirror. The retail chain deployed the solution—a mirror equipped with an HD display, 10-point touchscreen, and antenna for RFID-based item recognition—all powered by a small form factor PC—in its fitting rooms.

The solution uses RFID scanning technology rather than a camera, which customers clearly don’t want in their dressing room for privacy reasons.

The scanner senses which items are brought into the space, using the data to suggest coordinating accessories and inform the customer if and where alternate options are available. The seamless link between in-store and online stock information provides an “endless aisle” customer experience. At the same time, it allows the retailer to draw more traffic to their stores, gain new insights, overcome staff shortages, and lower operating costs.
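The fitting-room flow just described can be illustrated with a small sketch: map RFID tag reads to catalog items, look up coordinating accessories, and report where each is available. The catalog, tag IDs, stock figures, and function names here are all hypothetical, invented for the example rather than drawn from Pyramid Computer's software.

```python
# Hypothetical catalog: RFID tag -> item and its coordinating accessories
CATALOG = {
    "tag-100": {"name": "blue jacket", "accessories": ["scarf", "belt"]},
    "tag-200": {"name": "running shoes", "accessories": ["socks"]},
}

# Hypothetical stock levels, in-store and online
STOCK = {
    "scarf": {"in_store": 3, "online": 12},
    "belt": {"in_store": 0, "online": 5},
    "socks": {"in_store": 8, "online": 0},
}


def suggest(tag_ids):
    """Return accessory suggestions with availability for scanned items."""
    suggestions = {}
    for tag in tag_ids:
        item = CATALOG.get(tag)
        if not item:
            continue  # unknown tag: ignore
        for acc in item["accessories"]:
            s = STOCK.get(acc, {"in_store": 0, "online": 0})
            where = ("in store" if s["in_store"]
                     else "online" if s["online"]
                     else "unavailable")
            suggestions[acc] = where
    return suggestions


# A customer brings the blue jacket into the fitting room
print(suggest(["tag-100"]))
```

The "endless aisle" effect comes from that last fallback: when an item is not on the shelf, the mirror can still offer it from online stock.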

“The retailer can optimize tasks given to store clerks, instead of having to send them to the fitting room to consult with customers and find items,” says Anthony Hunckler, Head of Marketing & Design at Pyramid Computer.

“The #software and #hardware elements must work together to provide this flexibility and fluidity on UI and #UX.” – Anthony Hunckler, @polytouch_de via @insightdottech

The RFID reader communicates with back-end software, providing customers with information—and retailers with valuable data. “The software is the element that gives the final brand engagement with the customer, such as high-quality product pictures and media. The software and hardware elements must work together to provide this flexibility and fluidity on UI and UX,” explains Hunckler.

Retailers that already have RFID systems in place for stock management will find it easy to implement the Magic Mirror. In that case, their back end is ready to add this mirror to scan the product. For maintenance, Pyramid includes a warranty with a high level of service. “If you have a problem, we can switch out the system and display very easily. From that perspective, there’s almost no risk for our partners,” says Hunckler. The solution’s Intel-based PC provides rugged 24/7 dependability, important for deployment in a retail setting.

Insights on Sales, Customer Preferences, and Logistics

Because it interacts with customers and collects data about their choices and preferences, the Magic Mirror solution offers a chance to gain valuable in-store retail insights.

Depending on the software retailers choose or develop, they can gather and analyze in-depth sales data, discern customer preferences, optimize logistics and inventory management, and cross-sell related items. “Analytics are very important for retailers to optimize their stock based on real-time data about customer habits,” says Hunckler. “For example, if you see that 80% of the t-shirts you sell are white, you’ll know that you need more white shirts in stock and fewer of the other colors.”

Using that data, traditional retailers can make accurate predictions about how many items they need to have in stock, when, and where, helping to inform logistical decisions and keep up with demand.

Granular-level insights will help retailers prepare for the rapidly changing future of the brick-and-mortar retail industry. “The retail structure we’ve known in the past is not as relevant today,” says Hunckler. Some customers still arrive at a store to browse, try on clothes, and make decisions in the traditional way—but many others come to a physical store only to pick up items they have pre-ordered online. If retailers can foster a link with their online customers through enhancing shopping experiences available only in-person, like special offers and perks, those customers will perceive value in visiting physical stores, and retail stores can gain brand loyalty.

Noting customers’ individual preferences and needs is a crucial element to success, Hunckler says. “Customers are unique. Some like to have support from a clerk, while others don’t want that attention. Retailers need to concentrate on providing plenty of digital support and personalizing the entire experience.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on August 16, 2023.

AI Retail Analytics Enhance Inventory Management

Customers expect to walk into a store and find what they need. When that doesn’t happen—whether it’s due to rampant labor shortages or inefficient manual stocking processes—they will take their business elsewhere.

But retailers cannot afford to lose customers due to out-of-stock or misplaced items. A recent report found that worldwide, this can result in as much as a $984 billion loss in sales every year. The problem, though, is that many retailers still rely on manual processes to keep shelves stocked. That’s why more and more have started to turn to AI retail analytics solutions to streamline their inventory management practices.

Optimizing Customer Experience with AI Retail Analytics

Take the UK-based retail grocery chain Nisa, for example. Nisa found that not having items properly stocked was having a negative effect on their customer experience. To improve the situation, they turned to Shelfie, a retail analytics platform provider that relies on cloud-based software, to improve their processes and better understand the movement of stock.

With Shelfie, Nisa can take photos with connected cameras in its stores and compare the current stock against a predetermined chart of what each monitored shelf should look like.

To accomplish this, the cameras take video images and transfer them to the cloud, where an advanced machine learning and image processing algorithm analyzes data about stock placement and availability. When an item is running low or products are out of place, staff members receive a real-time alert via their dashboard or mobile app. Alerts can come to a barcode scanner, tablet, or other connected device.
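The compare-against-planogram step can be sketched as follows: given detected item counts per shelf facing and the expected chart, emit out-of-stock and low-stock alerts. The product names, expected counts, and the 25% threshold are assumptions for illustration, not Shelfie's actual schema.

```python
# Hypothetical planogram: expected number of facings per SKU on one shelf
PLANOGRAM = {"cereal": 12, "milk": 8, "bread": 10}

LOW_STOCK_RATIO = 0.25  # assumed: alert when under 25% of expected


def shelf_alerts(detected):
    """Compare detected counts against the planogram and return alerts."""
    alerts = []
    for sku, expected in PLANOGRAM.items():
        count = detected.get(sku, 0)
        if count == 0:
            alerts.append((sku, "out of stock"))
        elif count < expected * LOW_STOCK_RATIO:
            alerts.append((sku, "running low"))
    return alerts


# Counts as detected by the image-processing step for one camera frame
print(shelf_alerts({"cereal": 2, "milk": 0, "bread": 9}))
```

In the real system the `detected` counts would come from the cloud-side machine learning model, and the alert tuples would be pushed to the staff dashboard or mobile app.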

“#Retailers have so much on their hands, and every day brings new challenges. This solution can provide the #data and insights they need to stay ahead and know what to prioritize” – Yehia Oweiss, Shelfie via @insightdottech

In a trial at one of the Nisa stores, Shelfie was able to keep stock availability at approximately 95%. “It provides all the data that I need as a retailer to make decisions about buying, forecasting, and optimizing the positioning of goods within a store,” says Rav Garcha, Owner and Operator of several Nisa locations.

Making Human Efforts More Efficient

With the rapid growth of technology in every industry, it’s surprising to find that most retailers still rely on a human to walk through the store, look at the shelves to see what’s available and what’s running low—and then go into the warehouse or back storage to fill a trolley and replenish the stock, according to Yehia Oweiss, CEO of Shelfie.

Shelfie was developed to ease the burden on human retail workers, making it easy to monitor shelves and gain meaningful insights. “This solution doesn’t replace store personnel—it’s designed to make their jobs more efficient,” Oweiss explains. “Our software will tell you the time and the day you are most out of stock and on which shelf. Now if you have that data to hand, you can deploy the person who’s in charge of replenishing the stock more efficiently.”

Video 1. Shelfie consists of an analytics platform, an image capture device, and a reporting dashboard to provide insights into shelf inventory management. (Source: Shelfie)

While similar solutions are complex or costly to scale, Shelfie is simple to deploy and easy to expand. The solution handles all the analytics and alerts store operators of any out-of-stock or low items.

“All you need is a camera pointing at the shelf, connected to the internet,” explains Oweiss. “We do the rest remotely in the cloud.”

The solution was also developed to be camera-agnostic, enabling retailers to use existing security cameras as long as they can connect to the internet. In addition, there is an option to set up the platform on-premises if needed.

To implement AI models capable of detecting what’s happening on the retail floor, Shelfie utilizes the Intel® OpenVINO™ toolkit. When a new customer decides to adopt the solution, they provide photos and information about what each shelf should look like, kicking off a two-week training process for the AI neural engine.

AI Retail Solution Addresses More Store Needs Over Time

As the AI software learns the location of various items and tracks their sales, Shelfie can provide many additional insights beyond basic out-of-stock or misplaced item alerts.

For example, dashboards display data about which SKUs are highest- and lowest-selling and which are most often out of stock. Managers can see how long an item has been out of stock, what times of day certain items tend to sell the most and least, and which top-selling items they need to ensure are never out of stock. Granular data like this would be very difficult—if not impossible—to gather using manual processes, and can provide important insights to help optimize sales and stocking efforts, according to Oweiss.
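The dashboard metrics described above reduce to simple aggregations over observation logs. A minimal sketch, assuming a hypothetical log format of (SKU, units sold, in-stock flag) per observation window:

```python
from collections import Counter

# Hypothetical observation log: (sku, units_sold, in_stock) per window
log = [
    ("white-tee", 5, True),
    ("white-tee", 7, True),
    ("black-tee", 1, True),
    ("black-tee", 0, False),
    ("black-tee", 0, False),
    ("red-tee", 2, True),
]

sales = Counter()        # total units sold per SKU
oos_windows = Counter()  # windows each SKU spent out of stock

for sku, sold, in_stock in log:
    sales[sku] += sold
    if not in_stock:
        oos_windows[sku] += 1

print(sales.most_common(1))  # highest-selling SKU
print(dict(oos_windows))     # out-of-stock windows per SKU
```

With timestamps attached to each window, the same aggregation would also surface the time-of-day patterns Oweiss mentions.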

“At the end of the day, what Shelfie does is improve business processes,” he explains. “It has a very good effect on customer satisfaction. Retailers have so much on their hands, and every day brings new challenges. This solution can provide the data and insights they need to stay ahead and know what to prioritize.”

The solution’s use cases extend beyond the retail realm. For example, Oweiss says Shelfie is being introduced in gas stations to monitor spillages, count customers, and maintain regulatory compliance. The company is also implementing the solution in the oil and gas industry to monitor oil heads for leakage and spillage. “Every use case is about business process efficiency,” Oweiss says. “When you have key data at hand, you can deploy humans more efficiently and maximize their effectiveness.”

As use of retail analytics expands, Oweiss sees augmented reality and AI playing ever-increasing roles in future solutions. For now, while large chains and companies with many locations often struggle with the change management needed to succeed with a new solution, smaller companies and independent retailers are adopting solutions like Shelfie with ease.

“For them, flexibility and cost savings are advantages,” says Oweiss. “Our solution doesn’t require specific camera models or high-definition images, so they can get up and running quickly with the technology they already have in place.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.