Transform the Industry with 4th Gen Intel® Xeon® Processors

Earlier this year, Intel released the latest 4th Gen Intel® Xeon® Scalable processors, codenamed Sapphire Rapids. These processors come packed with new features for better, faster performance, including built-in AI acceleration, that improve industrial operations all around. They also hit all the marks for federal and Industry 4.0 success—security, reliability, flexibility, and scalability—everything that the industrial and federal spaces want to see. And because these industries don’t favor the “rip and replace” method, these Xeon Scalable processors will help manufacturers future-proof their investments, meeting the needs their factories have today while preparing them for tomorrow.

We’ll get an inside look at Sapphire Rapids, the 4th Gen Intel Xeon Scalable processors (Video 1), with Christine Boles, Vice President of the Network & Edge Group and General Manager of Federal & Industrial Solutions at Intel, and answer the key question: “Why should I move from the release I’m on now to this one?”

Video 1. Christine Boles, VP of the Network & Edge Group and GM of Federal & Industrial Solutions at Intel, discusses industrial and federal use cases of the latest 4th Gen Intel® Xeon® Scalable processors. (Source: insight.tech)

Tell us more about these latest Intel® Xeon® Scalable processors.

The 4th Gen Intel Xeon Scalable processors have been designed to deliver incredible capabilities for very demanding workloads, which is exciting to see. And, as you mentioned, many of those workloads are in the industrial space. If you look at Industry 4.0 transformation, the industrial sector is looking for technologies that deliver capabilities that really extend and increase business value, as well as address some of the challenges manufacturers and utilities face.

The new capabilities of these processors are particularly in the areas of acceleration for AI, machine learning, and data analytics, in addition to networking and storage. Intel has re-architected the microarchitecture to address these workloads—whether in the networking space or at the industrial edge—extending processing capability within a strong performance-per-watt envelope. At the same time, the 4th Gen also extends the memory capacity and IO that these industrial applications need.

One of the specific areas where we have added capabilities is around deep learning and machine learning, by putting some additional acceleration into the CPUs. Two of the new updates around acceleration are the Intel® Advanced Matrix Extensions (Intel® AMX), and the Intel® Data Streaming Accelerator.

As you can imagine, manufacturers in the industrial space have a lot of data to deal with. AMX accelerates AI capabilities for the workloads in those industrial spaces—such as machine vision, defect detection, or quality assessment of the equipment, as well as of products moving down the line. And the Data Streaming Accelerator prioritizes and manages data as it moves through virtualized environments, and helps present that information.

Another area that I’d call attention to is the Intel® Speed Select Technology, or SST. SST helps with consolidating workloads onto form factors that are running multiple workloads. It has the ability to select where you’re going to be processing—optimizing performance where you need it in some of the virtual machines versus other workloads that might not need as much. And I’m excited to see how SST will be utilized by the industrial solutions providers.

How does this latest generation perform compared to previous generations?

There are four areas where I really see that the 4th Gen Xeon Scalable processors are going to help deliver what customers are looking for around the IoT edge, and what solutions providers will develop around.

The first is in the area of overall performance, memory, and IO. In the overall architecture we have higher per-core performance than previous generations had, with up to 52 cores across different socket options for a range of IoT-edge use cases. We have also extended memory capabilities, with eight channels of DDR5. DDR5 allows for an overall 1.5x improvement in bandwidth over the DDR4 generation, which will ultimately improve performance and capacity for memory utilization.

One of the areas that pushes limits in industrial use cases is IO capability, and this generation has up to 80 lanes of PCI Express Gen 5. Also in the IO area: We have great acceleration of AI capabilities in the Xeon Scalable processors—but if you need additional CPUs or external accelerators, we do have the CXL 1.1 connectivity for interconnecting to external devices.

The second area is one of the biggest additions to this product—in AI acceleration with those AMX extensions. And we take it one step further in making sure the right toolkits are available to take advantage of the capability for workload inferencing and optimization with the OpenVINO toolkit. Having both that improved AI acceleration and the toolkits will give customers the right support for deep learning and overall training of workloads.
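To make this a bit more concrete, here is a minimal sketch of what taking advantage of that AI acceleration through the OpenVINO toolkit can look like in practice. The model file name and input shape are placeholder assumptions, not a specific Intel example; on a 4th Gen Xeon CPU the OpenVINO CPU plugin can engage Intel AMX automatically when inference runs in bf16 or int8 precision.

```python
# Minimal sketch (assumptions noted above): running inference with the
# OpenVINO Python API on a 4th Gen Xeon CPU. "resnet50.xml" is a
# hypothetical OpenVINO IR model; the 1x3x224x224 input is a stand-in
# for a camera frame.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("resnet50.xml")  # hypothetical IR model file

# Request bf16 execution so the AMX tile units can be used where available.
compiled = core.compile_model(model, "CPU", {"INFERENCE_PRECISION_HINT": "bf16"})

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in frame
result = compiled([frame])[compiled.output(0)]
print("Predicted class:", int(np.argmax(result)))
```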

I mentioned SST previously—the Intel Speed Select Technology—for bringing workloads together. This is the third area, and it allows for better control over CPU performance, and over how that performance and compute power are utilized across the Xeon Scalable processors. We also make tools available for monitoring and control with the Intel® Resource Director Technology toolkit, which enables control and sharing of resources, as well as management of the overall environment.

The fourth big area is, of course, resiliency and security, which are particularly important in the manufacturing or federal types of environments. Intel is known for efficiency and resiliency with its processors, so that’s a big part of what we continue to provide. And then we have security extensions, with the Software Guard Extensions, to allow for secure enclaves of execution of different applications.

“The new capabilities of the latest Intel® Xeon® processors are particularly in the areas of acceleration for #AI and #MachineLearning and #data analytics, in addition to networking and storage” – Christine Boles, @intel via @insightdottech

What use cases will benefit most from the latest processors?

The first big area is, of course, within the industrial and federal spaces—use cases that have a high demand for compute, whether that’s machine vision kinds of applications or detecting defects and taking action on them.

There’s an emerging area around digital twin capabilities, where you have both visibility into and a representation of what is happening on the factory floor. It’s then expanding into evolving areas around automation. One example is within the utility space with the modernization of the grid, bringing greater levels of software-defined capability, as well as management of the grid infrastructure or process automation.

Another big area is machine vision. How can manufacturers improve the detection of defects or of quality inspection from a range of cameras? It’s a question of gathering that information in, accurately analyzing it, and then acting upon the data that the images brought in. The capabilities we’ve built into the Xeon Scalable processors with the AMX extensions will really allow for these workloads to be processed and managed.

The same kinds of improvements we have in industrial spaces can also be utilized in consumer-focused industries like retail and hospitality. Over the past few years there’s really been a change in what is available to go into stores or hotels, with self-checkout kiosks, for example. Having a 4th Gen Intel Xeon Scalable processor-based solution, with its additional AI and analytics capabilities, will allow for new capabilities for consumer interaction as people enter a store, as well as assessing any preferences that those customers may have. And, of course, it can be helpful on the back end as well, with robotics for assessing logistics in the warehouse and the back room, and managing the overall inventory.

The last area I would mention is one that has even more opportunity than most—the healthcare and life sciences area. That space could really utilize the AI extensions and support that have been built into these 4th Gen Xeon Scalable processors to assess images or to do advanced analysis on genomics and sequencing. It’s going to be exciting to see how medical-equipment manufacturers utilize some of the new capabilities we’ve put into these processors.

What role will they play as more networks and workloads move closer to the edge?

One of the exciting parts of the next-generation platform that we’ve been working on with these Xeon Scalable processors is the range of workloads that it enables. One of these is this shift from what have traditionally been more fixed-function network architectures to what is evolving into more of a software-based, virtualized network environment. The Xeon Scalable processors’ capability allows solutions providers to have more of that software-defined network environment, while at the same time utilizing AI and machine learning capability for the information that’s flowing through the network and optimizing it.

Is there anything else you’d like to add?

The 4th Gen Intel Xeon Scalable processors really help not only with performance and security and in all the other ways I’ve already discussed, but we’ve also kept in mind what is needed to support reliable use in ruggedized environments. We offer a broad range of SKUs of these processors that have been specifically built to cater to the long-life and reliability needs of an industrial-commercial offering, including ones available for a temperature range of 0°C to 84°C. I mentioned the range of cores and performance that the Xeon Scalable processors bring; that’s also reflected in the range of SKUs, which scale to match what each workload really needs.

Bottom line, we have ensured that solutions providers utilizing these 4th Gen Xeon Scalable processors will have the capabilities they need in performance acceleration for AI workloads or networking workloads and analytics, but also have the range of power and performance that they need for the environments they’re going into. I’m really excited to see the applications that come to market based on this new generation.

Related Content

To learn more about the 4th Gen Intel® Xeon® Scalable processors, listen to The Power of the 4th Gen Intel® Xeon® Scalable Processors and read Intel Boosts Edge Productivity with Processor Innovations. For the latest innovations from Intel, follow them on Twitter and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Dive Into the Power of the 13th Gen Intel® Core™ Processors

In January, Intel launched the desktop and mobile versions of its 13th Gen Intel® Core processors, codenamed Raptor Lake. Packed with impressive new features and capabilities, the 13th Gen builds on the strengths of the performance-hybrid architecture seen in the 12th Gen Intel® Core processors, as Intel continues to support the growing trend toward the intelligent edge.

Jeni Barovian Panhorst, Vice President and General Manager of the Network and Edge Compute Division at Intel, walks us through the details of the 13th Gen release (Video 1). She’ll explain why its multitasking power is so well suited to the intensive workloads that are the norm in industries from healthcare to hospitality, and in specific use cases like autonomous robots. Once again, Intel aims to better its best, and to support its partners and clients in doing the same.

What makes this release so exciting for the network and edge markets today?

The 13th Gen Intel Core processors for the IoT edge are our top choice for maximizing performance, memory, and IO in edge deployments. And I also want to highlight the mobile version of the 13th Gen Intel Core processors, which is focused on combining power efficiency, performance, and flexibility with industrial-grade features that cater specifically to areas that are important for network and IoT edge—including AI, graphics, and ruggedized edge use cases.

It delivers a boost in performance compared to the prior generation, while also offering a range of options for different power-design points. This allows our customers to get exactly the performance per watt that they’re looking for in deployments that are often space and power constrained. They benefit from higher single-threaded performance, higher multi-threaded performance, graphics and AI performance, but also increased flexibility to run more applications simultaneously, more workloads and more connected devices—all of which are very critical at the IoT edge.

Our performance-hybrid architecture offers up to 14 cores and 20 threads, and we have a technology called Intel® Thread Director that allows us to match the cores specifically to the needs of our customers’ workloads. We also have really great graphics performance, which is essential at the edge for use cases like autonomous mobile robotics and optical inspection. When you combine that with the capabilities of the processor—with technologies like Intel® DL Boost with VNNI instructions—and with developer tools like the Intel OpenVINO toolkit, it all creates an opportunity to further enhance AI-inference optimization, which helps reduce dependence on external accelerators.
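As a rough illustration of the workflow described here, the sketch below uses the OpenVINO Python API to list the devices on a 13th Gen Core system and let the runtime pick between the CPU and the integrated Iris Xe GPU. The model file name is a hypothetical placeholder; an int8 model lets the CPU plugin use Intel DL Boost (VNNI) instructions, which is one way to reduce dependence on external accelerators.

```python
# Minimal sketch (model name and device mix are assumptions): device
# discovery and automatic device selection with OpenVINO on a 13th Gen
# Intel Core system.
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU']

model = core.read_model("defect_detector.xml")  # hypothetical int8 IR model

# "AUTO" defers device selection to the runtime; the LATENCY hint favors
# responsiveness for a single camera stream at the edge.
compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "LATENCY"})
infer_request = compiled.create_infer_request()
print("Model compiled; ready to process frames.")
```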

This is also the first generation of mobile processors to introduce PCI Express Gen 5 connectivity (it was previously available on the 12th Gen Intel Core desktop processors). This allows our customers to focus on deploying more demanding workloads in more places, because there’s a much bigger data pipeline, as well as the ability to provide faster, more-capable connections to a variety of different peripherals.

And, last but not least, the 13th Gen Intel Core mobile processors are also focused on redefining industrial intelligence by bringing flexibility, scalability, and durability to the edge. Select SKUs in the portfolio are compliant with industrial-grade use cases for stringent environments—for example, they offer extended temperature ranges of -40 °C to 100 °C. They also support in-band ECC memory to improve reliability—delivering the type of performance and capability needed in harsh environments for installations in areas like machine control, AMR, avionics, and other exciting use cases for the IoT edge.

Video 1. An in-depth look at the 13th Gen Intel® Core Processors with Jeni Barovian Panhorst, VP & GM, Network & Edge Compute Division at Intel. (Source: insight.tech)

Can you talk about how Intel is utilizing its hybrid microarchitecture?

We introduced the performance-hybrid architecture in the 12th Gen processors. It’s really about bringing together the best of two Intel architectures onto a single SoC: our Performance cores, or P-cores, and our Efficient cores, or E-cores. The primary advantage is in scaling up multi-threaded performance by using these P-cores and E-cores optimally for the workloads at hand.

That performance scale-up depends on how efficiently a given application is divided into multiple tasks, and on the number of available CPUs to execute those tasks in parallel. To cater to a vast diversity of client applications and usages of cores, we focused on designing an SoC architecture in which the larger Performance cores go after single-threaded and limited-threaded scenarios, while the Efficient cores simultaneously help extend the scalability of multi-threaded performance over prior processor generations. So, the performance-hybrid architecture achieves the best performance on multi-threaded workloads, as well as on limited-threaded and power-constrained workloads.

And that performance-hybrid architecture is coupled with the Intel Thread Director that I mentioned before, which optimizes performance for concurrent workloads across these P-cores and E-cores. It monitors the instruction mix in real time, and dynamically provides guidance to the scheduler in the operating system, allowing it to make more intelligent and data-driven decisions about how to schedule those threads.

And so performance threads are prioritized on the P-cores, delivering responsive performance where maybe there aren’t as many limitations in terms of power requirements. And then the E-cores are utilized for highly parallel workloads, and other power-constrained conditions where power might be needed elsewhere in the system—such as the graphics engines or other accelerators in the platform. This combination then delivers the best user experience.
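Thread Director works through the operating system, so applications normally need no special code to benefit. For developers who want to inspect or steer the split themselves, the Linux-only sketch below reads the kernel’s view of which logical CPUs are P-cores and E-cores and pins the current process to the P-cores for a latency-sensitive task; the sysfs paths are assumptions that hold on recent hybrid Intel platforms.

```python
# Minimal sketch (Linux-only, hybrid Intel CPUs assumed): enumerate
# P-cores and E-cores from sysfs, then pin this process to the P-cores.
import os

def parse_cpulist(path):
    """Parse a kernel cpulist such as '0-7,16-19' into a set of CPU ids."""
    with open(path) as f:
        text = f.read().strip()
    cpus = set()
    for part in text.split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

p_cores = parse_cpulist("/sys/devices/cpu_core/cpus")  # Performance cores
e_cores = parse_cpulist("/sys/devices/cpu_atom/cpus")  # Efficient cores
print("P-cores:", sorted(p_cores))
print("E-cores:", sorted(e_cores))

os.sched_setaffinity(0, p_cores)  # keep this process on the P-cores
```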

“The 13th Gen Intel Core processors for the #IoT edge are our top choice for maximizing performance, memory, and IO in #edge deployments” – Jeni Barovian Panhorst, @intel via @insightdottech

What are some of the top improvements over previous generations?

Performance is always top of mind for people. If we look at performance gains within the same power envelope for the mobile family of products, we’ve got up to 1.08x faster single-threaded performance. In the desktop processors we have up to 1.34x faster multi-threaded performance. And if we look specifically at AI performance, which is so critical for the edge, we’ve got up to 1.25x gains in CPU-classification inference workloads.

Another area that’s important to our customers is an easy upgrade path. And so these 13th Gen processors are socket compatible with the 12th Gen Intel Core processors. I mentioned PCI Express Gen 5 connectivity earlier—it’s the first generation of our mobile products to include PCIe Gen 5 to deliver a faster pipeline for more data throughput. A great use case example benefiting from that would be medical imaging, which requires a tremendous amount of visual data.

A specific customer example where we’re seeing improvements gen-on-gen from 12th Gen to 13th Gen is with Hellometer, a great example of a company that is digging into those gen-on-gen performance gains while achieving platform flexibility at the same time. Hellometer has a SaaS solution specializing in AI for restaurant automation for fast food and quick service restaurants, and the 13th Gen is capable of delivering more AI performance at the edge, more cost effectively, for their target market.

In these restaurants time is truly of the essence; it translates directly to revenue, because if a line is too long guests will simply drive past. So these brands are really focused on utilizing Hellometer’s computer vision-based technology, which had been using our prior generation of Intel Core mobile processors with AI acceleration built into the processor itself.

But at the launch for the 13th Gen Intel Core processors, the CEO of Hellometer talked about how the 13th Gen will enable it to add an extra video stream to its solution, which will in turn increase its ability to process customer data by over 30% for real-time inferencing, without a discrete AI accelerator. It gives our customers the ability to win business by better understanding their guests’ experiences, and it delivers innovations that really drive business value.

How will these processors provide new opportunities as we move closer to the edge?

There’s just an incredible breadth of use cases we’re supporting. If we look at military applications there are opportunities to support embedded computing for vehicles and aircraft, or edge devices for intelligence, safety, and recon. There is also next-generation avionics, with multitasking performance and durability requirements for space-constrained and stringent-use conditions. There are healthcare advancements—if you look at enabling ultrasound imaging, endoscopy, clinical devices—again, there’s a massive amount of visual data that has to be processed there. Then there’s hospitality, as in the Hellometer example. And all kinds of other applications as well, including video walls and digital signage, AI-driven in-store advertising, and interactive flat-panel displays—these can all take advantage of our 13th Gen Core processors.

Industrial applications, like AI-based industrial process control, can leverage the 13th Gen Intel Core processors to converge powerful compute and AI workloads in situations with space and power constraints. An example in this area is our partner Advantech, focusing specifically on AMRs—autonomous mobile robotics—which are truly becoming the new normal in warehousing, logistics, and manufacturing environments.

AMRs, and other computer vision applications, are really challenged by the need to provide powerful AI and camera-based inputs, but in very small form factors. AMRs, in particular, may need to process data from multiple different cameras, as well as proximity sensors, so that they can navigate safely around their environments. This market is growing incredibly quickly, so the question becomes about addressing that opportunity, and enabling our customers to extract that value.

Advantech has a couple of offerings that leverage the 13th Gen Intel Core mobile processors to address what’s needed for compute and graphics-processing performance, and also power-efficiency needs. And each of these solutions benefits from the fact that you can get adaptive performance from these mobile processors featuring the performance-hybrid architecture, but also intensive graphics processing from integrated Intel® Iris® Xe graphics, as well as memory support from DDR5. And it is certainly benefiting from the really great power efficiency of the latest processor generation—longer battery life boosts operational duration of robots on the factory or warehouse floor. And that improves the total cost of ownership.

Are there any final thoughts you’d like to leave us with today?

Our mission is to deliver the hardware and software platforms that enable infrastructure operators and enterprises of all types to adopt an edge-native strategy, delivering workload-specific performance and leadership performance for our customers at the right power and design points. And that means meeting all design points across the spectrum—whether we’re talking about the devices themselves, the edge infrastructure, the network infrastructure, or the cloud.

We’re also focused on being a catalyst for digital transformation and business value, on driving and democratizing AI, and making it accessible across the full ecosystem. With this latest release we’re really proud to be delivering the next generation of diverse, edge-ready processors, and giving our customers more choices in leveraging this hybrid microarchitecture to unlock all these possibilities. It’s the promise of a future that’s built on AI-enabled edge computing.

Related Content

To learn more about the capabilities and hybrid microarchitecture of the 13th Gen Intel® Core processors, listen to An Inside Look at the 13th Gen Intel® Core Processors and read Intel Boosts Edge Productivity with Processor Innovations. For the latest innovations from Intel, follow them on Twitter and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

What to Expect from Hannover Messe 2023

The foremost global conference for industrial technology, innovation, and automation—Hannover Messe (HMI)—is back with a hybrid event April 17 to 21.

The motto throughout the show is “Making the Difference” toward a sustainable future. As a result, you’ll find dozens of Intel® Partner Alliance members showcasing the latest advancements in smart manufacturing/Industry 4.0—encompassing the move to the industrial edge, artificial intelligence and machine learning, energy management, and carbon-neutral production.

Highlights you can expect from the event include:

A series of live demos from technology solution provider Dell Technologies, featuring robotics, AI at the edge, and 5G. Dell will demonstrate progress it is making in these areas with its partners Siemens, Ericsson, Nexalus, and GRC through leading-edge hardware and solutions such as the Dell PowerEdge XR4000, new Dell Gateways, and the latest 16G servers.

The company will also display the new Dell Validated Design for Manufacturing Edge, designed to accelerate digital transformation in the factory through IT and OT convergence, deploying new technologies, and protecting edge assets. To demonstrate its latest Validated Design at the event, Dell will have a mock brewery and production line set up with its ISV partners Telit, Cognex, XMPro, and Claroty.

Dozens of Intel® Partner Alliance members will showcase the latest advancements in #SmartManufacturing/#Industry40—encompassing the move to the industrial edge, #AI and #MachineLearning, energy management, and carbon-neutral production. @hannover_messe via @insightdottech

NEXCOM International’s subsidiaries NexAIoT and NexCOBOT return to Hannover Messe after a four-year hiatus. The companies will discuss how to empower environmental, social, and governance (ESG) initiatives with IoT, improved autonomous mobile robots, and next-generation industrial PCs.

In addition, the companies will show how they tackle the ongoing industry-wide labor shortage challenge. They will highlight their improved OT and IT systems, seamless integration of data centers and digital factories, and an x86 functional safety control platform designed to help guide AMRs in the smart factory.

Cloud computing company VMware plans to showcase its partnership with Intel through an Intel® Robotics Vision & Control (RVC) demo. The company will exhibit how to consolidate a robotic motion planning/control and perception workload to the edge/cloud on a single Intel platform.

You can also expect a FlexRAN demonstration from Canonical, featuring the latest Ubuntu Pro LTS release with full support for the 4th Gen Intel® Xeon® Scalable processors and the Intel® vRAN Boost hardware platform.

Other exhibitors at the event include HPE, which will display its GreenLake edge-to-cloud solution designed to speed production, optimize operations, and improve business outcomes. And Phoenix Contact, which will demonstrate its PROFINET over Time-Sensitive Networking (TSN) solutions.

See how you can be part of “Making the Difference” by registering for Hannover Messe 2023 today. If you can’t make it in person, there will also be livestreams, digital exhibitors, and online networking opportunities for you to explore.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

embedded world 2023: Modular Building Blocks at the Edge

So, you’ve got an idea for a new intelligent edge system. Maybe you just finished modeling an AI-powered robot prototype, or maybe you’ve already completed an industrial computer vision proof of concept. Now the real work of specifying components and building an embedded system that meets your design requirements begins.

Where do you start? Nearly 27,000 of your colleagues began their embedded solution development journey at embedded world 2023, where modular vision, AI, and workstation building blocks from Intel and the Intel® Partner Alliance put them on the fast track to next-gen system deployment.

Here’s a snapshot of what you missed at the show.

Intel® Arc Graphics Gives Modular Vision Tech a Boost

Computer vision technology is now mainstream and popping up in places you’d never have expected it a few years ago. From retail kiosks to autonomous robots—embedded solutions engineers now need a way to quickly add, upgrade, and scale their vision-equipped embedded system designs to meet market demand.

The challenges of doing so at the edge are well documented—high-performance vision technology comes with power, thermal, and cost constraints in embedded systems that are already resource-limited. Intel® Arc graphics processors were developed to help device engineers thread this needle. Companies like global leader in edge computing ADLINK Technology, embedded and automation solutions provider Advantech, and embedded computing technology provider Kontron have already delivered plug-in hardware solutions that streamline the addition of these GPUs to any design.

From #retail kiosks to autonomous #robots—embedded solutions #engineers now need a way to quickly add, upgrade, and scale their vision-equipped embedded system designs to meet market demand. @embedded_world via @insightdottech

At embedded world 2023, Kontron’s Thomas Stanik, a Senior Sales and Business Development Manager, unveiled a collaboration with Intel that brings the new Arc GPUs into embedded workstations at the edge (Video 1). Designed for factory automation, medical imaging, and other environments that require high-performance vision processing in compact power and thermal envelopes, the liquid-cooled embedded workstation is built around a long-lifecycle-supported K3851-R industrial ATX motherboard outfitted with 12th and 13th Gen Intel® Core processors and either the Arc A40 or A50 GPU.

Video 1. Embedded workstations are an ideal entry point for Intel® Arc GPUs in industrial environments. Shown here is a liquid-cooled workstation proof of concept based on latest generation Intel® Core processors and the new GPUs. (Source: insight.tech)

With that much performance so close to the industrial edge, developers can create immersive human-machine interfaces for operators that still pack enough horsepower to offload vision processing tasks from nearby camera platforms.

Elsewhere at the show, Advantech broke down how developers can leverage the new GPUs to match the demand for data analytics at the edge using modular building blocks. For instance, the company’s portfolio includes single-board computers (SBCs), computer-on-modules (COMs), and plug-and-play solutions that accelerate the design and deployment of intelligent edge solutions.

From platforms designed around Intel embedded processors with integrated Intel® Iris® Xe graphics to high-performance COM-HPC modules to discrete PCIe 4.0 or MXM cards equipped with Arc GPUs, Thomas Kaminski, Director of Product Sales Management, Marketing, and Technical Support at Advantech, explained the options for scaling edge analytics and vision performance across new or existing designs.

At the ADLINK Technology booth, the company’s Head of Modular Solutions, Henk van Bremen, introduced showgoers to the MXM-AXe, a VR-ready MXM 3.1 Type A module based on an Intel Arc GPU. The module provides up to eight Xe cores, ray tracing units, and an AI engine for driving as many as four 4K displays (Figure 1). It also includes 4 GB of dedicated onboard GDDR6 memory and, importantly, carries eight PCIe Gen 4 lanes over a 16-lane, 314-contact connector for quick system integration.

Figure 1. The ADLINK Technology MXM-AXe module is built around an Intel® Arc GPU that delivers eight Xe cores, ray tracing units, and an AI engine, and can slot into COM Express systems over PCIe Gen4 links. (Source: ADLINK Technology)

The company was showcasing the MXM-AXe alongside its COM Express Type 6 Rev. 3.1 development kit that the graphics module can easily slot into. This architecture not only consolidates component sourcing and procurement down to a couple of modules available through a single vendor, but it also unifies the software stack around x86 devices.

Completing the Vision at embedded world 2023

Of course, building block embedded hardware is just one step on the way to completing a design. With that foundation in place, work with embedded software and tools can begin. And Intel partners like embedded computer modules supplier congatec and IoT solution developer SECO were showcasing end products built on Intel solution stacks at the show.

The first of these, presented by Christian Eder, Director of Product Marketing at congatec, consisted of a demonstration of an AI- and vision-enabled robotic pick-and-place machine running on a hypervised multicore Intel Core processor implemented in a COM-HPC module (Video 2). A real-time hypervisor from the company’s affiliate Real-Time Systems GmbH partitioned the cores so that image analysis and AI workloads that let the robot sense its environment are deployed on a Linux operating system while control and actuation functions are executed on a separate RTOS running on the same chip.

Video 2. Real-time hypervisors allow multiple operating systems to run on the same chip so embedded engineers can maximize resource utilization and minimize cost in systems like AI-powered robotic pick-and-place machines. (Source: insight.tech)

The AI and vision portions of the system are powered by the OpenVINO toolkit, which facilitates the “playful” capabilities of the autonomous robot without the need for discrete graphics processors.

But intelligent vision processing is also available on entry-level devices. This was demonstrated by SECO’s Chief Product Officer, Maurizio Caporali, in a retail kiosk on the embedded world show floor. Caporali showcased a dual-core Intel Atom® x6000 processor available on SMARC or COM Express modules that accepts video inputs from a single camera, then applies both facial recognition and emotion detection algorithms against the stream.

This type of AI can be developed and deployed onto SECO’s range of Intel-based edge hardware using the company’s Clea AI and IoT platform, which provides easy-to-use APIs for optimizing AI models, then updating endpoints in the field through a single pane of glass.

From Vision to Reality

These were just a few of the ways Intel partners enable development, deployment, and adoption of AI and computer vision-based technology at the intelligent edge, which will soon be the rule rather than the exception.

Embedded hardware building blocks based on Intel Arc GPUs, 13th Gen Intel Core processors, the Atom x6000E Series, and other Intel technologies are available from multiple distributors, including Rutronik, EBV Elektronik, and Arrow. Many of these distributors also offer design and manufacturing services to help bring your design ideas to life as quickly as you can envision them.

Learn more about what’s possible on the embedded world digital event platform, where many of the conference proceedings are archived, or by discovering all the next-generation vision and AI solutions available now from Intel partners.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Digital Transformation Demands Intelligence at the Edge

Acceleration of digital transformation is driving the move of more and more intelligence to the edge—closer to where data is generated. In a conversation with Muneyb Minhazuddin, CMO for Intel’s Network and Edge Group (NEX), we learn about Intel’s “One Edge” strategy, how it is playing out in different industries, and what it means as businesses accelerate their digital transformation initiatives.

How did the pandemic change businesses’ approach to digital transformation?

We’ve been talking about IoT and the physical-to-digital transformation as big priorities for years. But it was generally a science experiment. People were more focused on just keeping the lights on. Retail, manufacturing, and other industries already had plans to go in this direction, but it was a nice-to-have. The pandemic forced it into a must-have.

What happened through the pandemic was that if retailers, for example, didn’t put in curbside pickup during the first four or six weeks of the pandemic, they literally had to shut down. When it became a necessity for the financial viability of their business, they had to implement it.

Now looking at manufacturing, a Bain report projected that 43% of pandemic-related job losses happened in that segment. People were not coming into factories, and there was no one to operate the machines. By necessity, manufacturers started accelerating automation, looking at computer vision technologies for fault detection, quality inspections, or predictive-maintenance use cases.

The pandemic accelerated the shift to more digital and decentralized operations that provide faster insights. It was an imperative for organizational survival. And this new normal around operating and engagement created the urgency for edge compute.

Let’s talk about Intel’s One Edge Strategy and drivers of intelligence at the edge.

The One Edge strategy—and the coming together of the Intel NEX portfolio—is driven by the digital transformation that is happening in all types of industry segments, such as retail and manufacturing, as I mentioned—supply chain, healthcare, and others. Let’s look at a few examples.

First, retail stores started rolling out self-service checkouts to support social distancing guidelines, but observed that they were seeing losses from fraud. For example, one retailer with over 2,500 stores was losing a billion dollars from the front of the store every year. Now the retailer is deploying camera-based fraud detection technologies at the checkouts with the goal to return healthy margins back to the store.

In this example, the edge AI and computer vision can detect when someone is swapping and scanning a $25 item with a sticker from a $5 item. Or when a shopping basket has ten items but the camera shows only eight items were scanned. In these cases, the information of that scan needs to be processed locally and intelligently in a timely manner. Loss is prevented when a retailer can stop that individual on the store premises with local security or store management.

Now let’s look at a manufacturing example. Recently, I was on the factory floor of an automobile giant. They needed automation timeliness for the robotic arm in a body shop, which was simply adjusting and fastening a screw. But if they get that wrong, instead of a screw it could be a scratch on the body, which means that whole door is a waste. And the cost of several such damaged doors before the problem is detected can run to hundreds of thousands of dollars. An intelligent edge solution that’s doing quality inspection through computer vision can prevent such incidents.

You can see the need for operational efficiencies is about quality and timeliness. If this intelligence is sitting in the cloud and it takes minutes to understand the data, it’s too late. The business drivers are fundamentally cost savings, automation, and efficiency.

“What you find is that #AI or #MachineLearning inferencing is the major #EdgeWorkload. As more and more devices get connected and create more #data, you need that intelligence at the edge.” – Muneyb Minhazuddin, @intel via @insightdottech

Can you go a little deeper on these customers and use case examples?

What you find is that AI or machine learning inferencing is the major edge workload. As more and more devices get connected and create more data, you need that intelligence at the edge. You don’t want to send all that data back into the cloud, get it processed, and then take an action. Why is that intelligence so important at the edge, and why can’t it be tapped from the cloud? The amount of data, the latency, and the timeliness are all going to impact the efficacy of the outcome.

Playing along with that same example of self-service checkout fraud detection: when an offender is swapping stickers or scanning fewer items, it happens in a matter of minutes or seconds.

If you can stop that individual on the store premises with the local security, the retrieval of that product is faster. When store management can close the loop quickly, they can intercept the perpetrator. Because the moment an individual walks out, it’s too late. All the intelligence needs to happen at the edge—at the scanner itself—with speed and low latency.

I’ve seen the same in manufacturing as I talked about, and other industrial environments. I heard this from a chemical plant, which was mind-blowing for me because of the large amount of chemicals that get processed. And if there is one bad ingredient, they throw away tens of millions of dollars in paint or chemicals. That’s bad for the earth and the planet and everything else.

You can see the need for the operational efficiencies is really about quality and timeliness. But it’s not possible with the latency incurred in sending massive video files to the cloud for analytics. If this intelligence is sitting in the cloud and it takes minutes to go and come back, it’s too late.

In the store you didn’t stop any theft when you were supposed to stop it. On the manufacturing floor you’re throwing away tens of millions of dollars of product due to bad quality. The intelligence needs to be inferred then and there, at the point where the data is created.

How does Intel help go from cloud-centric to edge AI and computer vision-based apps?

The Intel One Edge Strategy is bringing compute, network, and storage to the edge and providing the intelligence I talked about in the above examples.

This is the pervasive nature of Intel technology everywhere. We have been on the journey across every vertical on both sides of the coin. We’re able to provide the most extensive platforms for the IT side of the house to bring all the applications and services from data center and cloud. This is true on the OT side as well from a silicon and software perspective, with our investment in edge inferencing, and with tools like the Intel® OpenVINO Toolkit.

Can customers do this with other technologies? They absolutely have choice. However, that choice is made up of very siloed compartments. They have a choice on the IT side, a choice on the network side, and they have choice on the OT side. But those are three completely different ecosystems, and they don’t operate on a common architecture.

The outcome I see when people are not taking advantage of Intel architecture and solutions—from our silicon to our software and our intelligence model—is what I call bespoke solutions, which are not scalable. For our partners we see a huge benefit in investing in Intel-based architecture and open, general-purpose compute.

In parallel with these technology advancements there’s a real urgency for the delivery of 5G, which is key to driving low latency. We are seeing Telco Service Providers bundling Intel IoT applications, services, and solutions over their networks, making this possible.

Any other thoughts you would like to share?

Technology keeps evolving. There was a big generational jump going from mainframe to client-server architecture that happened 30, 40 years ago. The next jump was to public cloud. And every time we do this, we streamline IT. This is the next generation—of trying to bring what we’ve done in the client server and cloud to the edge. And that has been happening on its own, but not quite at the rate and pace of streamlining that’s happened in the data center and cloud. The drive to more intelligence at the edge and what we’re doing with Intel’s One Edge strategy is how we see it coming together, and what will make this next technological evolution successful.

 

Edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI-Powered Supply Chain Logistics: With Siena Analytics

Are you ready to take your supply chain to the next level? In today’s rapidly evolving global marketplace, keeping up with customer demands can be a daunting task. But what if you had access to real-time information about the location, status, and condition of your products in transit? What if you could make data-driven decisions to optimize your supply chain, reduce costs, and increase efficiency?

In this episode, we dive into the latest IoT and smart tracking technologies behind supply chain logistics. But with new technologies also come new challenges, such as data security and integration. We also cover the best practices for overcoming these hurdles and how you can ensure your AI-powered supply chain is optimized for success.

Listen Here


Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Siena Analytics

Our guest this episode is John Dwinell, Founder and CEO of Siena Analytics, a supply chain AI and image recognition solution provider. John has spent the last decade concentrating on logistics automation and helping companies create value through analytics solutions. Prior to founding Siena Analytics, John was Vice President of Emerging Technology at another software company focused on the field of industrial analytics, which was only newly emerging at that time.

Podcast Topics

John answers our questions about:

  • (1:44) Current challenges and trends facing the supply chain
  • (3:16) IoT technologies for smart supply chain logistics
  • (4:49) Why AI-powered supply chains matter
  • (6:17) How to implement IoT and smart tracking technologies
  • (8:33) Ensuring the security and privacy of customer data
  • (10:11) The role of Siena Analytics in this space
  • (12:18) Dealing with changes in supply and demand
  • (15:11) Where smart supply chain logistics are going next

Related Content

To learn more about AI-powered supply chain logistics, read AI Unlocks Supply Chain Logistics. For the latest innovations from Siena Analytics, follow them on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about smart logistics and tracking with John Dwinell from Siena Analytics. Hey, John, thanks for joining us.

John Dwinell: Oh, Christina, thanks for having me.

Christina Cardoza: Before we jump into the conversation, why don’t you tell us a little bit more about yourself and what you do at Siena Analytics in the company.

John Dwinell: Sure. So, I founded Siena Analytics back around 2013, and it was really at that time we saw that the tools for IoT had really come together, and we knew in logistics there was just so much data, so many images that were just being thrown away. And it was a great opportunity to bring that technology, to bring IoT into supply chain. And that was really—that’s the beginning of Siena, and it’s brought us all the way up to today, and a lot of improvements.

Christina Cardoza: Yeah, absolutely. And of course we all know how important data is to the success of your business today, and operations especially being able to get that in real time and be able to analyze and get rid of all of the false alarms and things like that. But I want to start off our conversation today—we’ve heard a lot about the supply chain in the news, a lot of challenges over the last couple of years. So, I want to sort of just set the stage: what the state of the supply chain is today, and what are the current trends or challenges that we are still facing?

John Dwinell: Sure. So, we started out with all of this IoT, and capturing data and images, and along the way the capability to bring AI and AI vision into this IoT solution has really helped transform visibility in the supply chain. And today this is a tremendous issue, right? So, supply chain organizations are under so much pressure to get higher throughputs, better efficiency, and to be able to scale as, certainly, e-commerce has grown. And so the tools and the visibility have really been critical to understanding where the bottlenecks are, and how to improve that so they can really realize greater performance and precision, better quality. So, quality and visibility are really big pressure points in supply chain today.

Christina Cardoza: Yeah, absolutely. And that visibility relates to business benefits across the board—not only improving inventory management, but even reducing transportation costs from trucks coming into the warehouse and picking up and delivering some of these products, and increasing inside the factory itself, increasing the operational efficiency.

So I want to talk a little bit about these IoT opportunities that you guys saw when you started Siena Analytics, as well as some of the recent advancements in the technology that are really helping you guys gain these business benefits across the board and start addressing some of these supply chain challenges.

John Dwinell: Yeah, so IoT has really flipped the problem on its head in a lot of ways. So, traditionally there’s enterprise data saying—okay, for example, this is the size of a case and so this many cases is what’s going to fill a trailer. And IoT is looking at the cases and saying—well, actually this is the size of the case. It’s real data flowing up. And that real data in real time allows you to make the correct adjustments over time so that you can allocate resources correctly. And there’s a lot of benefit and sustainability there, but obviously getting those numbers exactly right allows you to plan your supply chain more efficiently.

Christina Cardoza: So in order to get the accuracy of, say, the case that you’re talking about—the size—this is the real case. Or to really analyze this fast and in real time and get the business, the information and the measurements that matter when it matters. Are you guys utilizing any type of artificial intelligence or machine learning to make this happen?

John Dwinell: Yeah, that’s a big, big factor here. The volumes are very high, and the speeds that the volumes flow are also very high. We today are looking at over 50 million cases every day. That’s just a tremendous amount of effort. And that’s where AI helps change the formula for this really completely, because we can literally look at all six sides of every case flowing into and out of a warehouse and see what kind of condition it’s in, how it’s packaged, how it’s labeled, what’s there and what’s not there, and how does that meet the standards. How does that meet the supplier requirements? And doing that at scale in real time has just not been possible in the past. And so AI and the platforms that we work on have really made this possible.

Christina Cardoza: So, I know that the logistics and tracking space—they’re no stranger to technology and leveraging technology to get things out the door. But sometimes when you have these Internet of Thing technologies, or these more advanced technologies like artificial intelligence, it makes things a little bit more complex, and it’s not always—everyone wants to use it, but they don’t always know how or what success looks like. So what would you say are some best practices to implementing some of these technologies or measuring success along the way?

John Dwinell: It’s true, there’s a certain intimidation factor with AI; it’s new technology. If I only go back a few years, it was a kind of dark art. You really—you needed a real specialist that there were very few of in the world. And there’s been a lot of advancements there.

We have a very friendly, no-code environment that takes away the mystique of the training. We’ve simplified that so we can capture the images, label that data, train new models using the platform, and engage the customer’s domain experts to help with that themselves, and really see these models come together, which is very exciting. And train them to recognize that what’s really critical is small variations from one customer to another—exactly what they need to see. So, the AI model is very adaptable to that. But you need the platform; you need the tools to make that approachable.

Christina Cardoza: Yeah, I love sort of having that no-code capability because I know a big part of just being able to utilize AI is that it used to be limited to experts, data scientists, developers—that not every organization has the in-house skills to do that. So, when you allow this to be used by business users or even domain users, they’re the ones that really understand the problem. So they’re the ones that can really make those actionable decisions or really make those changes and see if things are working well.

But I want to talk about another little issue that people have had with AI, and obviously it’s not little by any means, but just while we’re talking about AI in the past—how it’s been perceived, especially when you’re using all of this tracking and logistics data and you have personal information about customers—where they live or what they’re ordering, things like that—there are security and privacy concerns to that application. So, how can businesses in this space ensure the privacy and the security of their customers, of their data, of their company, especially as cyberthreats continue to grow?

John Dwinell: Yeah, I think security is really important, especially in Internet of Things. You’re capturing data in real time right there at the edge, but it needs to be brought to the enterprise, sometimes to the cloud, and those connections from edge to cloud or edge to enterprise—they need to be secure. So we work very closely with the information-security teams. We leverage the technology and the platform Red Hat and Intel to be sure that we have a very secure environment.

And security—it’s critical issue. These buildings need to still work efficiently, so there can’t be cyberthreats that are going to threaten that. So the platforms are all very approved and, to your point, they really need to be secure, and that’s a constant battle these days.

Christina Cardoza: Yeah, absolutely. And I love how you’re working with other partners in the industry, and that it’s something that you’re continuously on top of or continuously tracking. Because security and cyberthreats change every day and you can never be 100% secure, but it’s important to make sure that you’re secure as possible. You’re standing up to all the trends, updates, patches, things like that.

I want to get into a little bit of how Siena Analytics helps. What are the products that you guys have that are actually making all of this happen, like the no-code capabilities you were talking about. And also if you have any customer use cases or examples that you can share with us.

John Dwinell: Yeah, so one thing I want to make sure I point out: we talk a lot about the tools, and an exciting thing for Siena was we’re now part of the Peak Technologies family. And Peak has really broad experience in the supply chain, and so they really understand the customer’s challenges in supply chain. And we touched on that earlier—that connecting the domain knowledge with the technology is really important. And so for us it’s not just the tools, but the breadth of experience that Peak has that we can bring that to the customer base and help solve their problems.

And just, for example, some of the most common challenges in supply chain are the vendor compliance. So, that incoming quality of product. And this is where having a real deep understanding of the supply chain is important, and also having the visibility to identify at scale what packages are compliant and why, and what packages are not compliant and what’s wrong, and be able to provide that feedback to the suppliers so they can make improvements. And that sort of collaboration is critical to improvements in the supply chain today.

Christina Cardoza: So, one thing I’m curious about is depending on what type of business you are or what industry you’re in, the supply and the demand of things fluctuate. So, especially around the holidays, things—the demand for things, the demand for tracking or for delivery—just gets increasingly bigger. And so I’m wondering how these tools and how Siena Analytics helped deal with these changes and is able to scale or be flexible as a business or organization needs it.

John Dwinell: Yeah, this is where I—again, IoT is extremely helpful, because the supply chains, I mean, it’s remarkable. We’ve seen this over the past several years with Covid, how resilient they have been. There’ve been a lot of challenges, but in a really extremely difficult, unforeseen environment.

But as the scales—really the accuracy and precision of the information is critical to be able to make those adjustments at a reasonable cost. So IoT is feeding back very precise information about the good and the bad as product comes in so that planning can be more accurate. And that’s critical to being able to quickly adjust to changes in volumes in the supply chain and still be able to have the capacity and the throughput to move those through.

Christina Cardoza: That’s great. And so one thing I want to go back on real quick is that you mentioned you’re working with Intel and Red Hat for some of the security things, and I should mention the IoT Chat and insight.tech as a whole, we are sponsored by Intel. But I’m curious because this has been an ongoing trend that I’m seeing, is that no company can really do this alone. You really have to work with partners in the industry to make this happen and leverage some of the expertise of others to really make your expertise grow and shine. So I’m wondering, what’s the value of working with partners like Intel and Red Hat, or other partners that you’ve worked with to make this all possible?

John Dwinell: Yeah. Our partners are extremely important to us, and we really have a broad range of partners who’ve helped us with this journey. I think IoT, as exciting as it is, it’s still evolving. So getting the right solutions, the right technology pulled together, we work very closely with Intel, we work very closely with Red Hat. We work closely with other partners like Lenovo on the hardware. And Splunk is an important partner for us in terms of analytics. Many different partners play a key role in having the right solution. And we’ve been able to watch the technology as it evolves, but be a part of these conversations and help guide the technology that’s needed. And I can’t thank our partners enough. They’re really critical to making this all work.

Christina Cardoza: Yeah, absolutely. And since this is all still evolving, do you have any predictions on where this is going, or what will come next to the supply chain, or how Siena Analytics plans to be part of this ongoing future?

John Dwinell: Yeah, I’ve been in this for a long time, and I see this as the very beginning, oddly enough. AI in supply chain, really intelligent supply chain, is just beginning, and there are tremendous opportunities for growth. Edge-to-cloud is something else that’s bursting onto the scene, but I think it still has tremendous opportunity to continue to grow.

That real-time information is essential—any sophisticated supply chain organization needs that real-time visibility. And I think that will continue to grow. I think we see a lot happening in standards and collaboration. Companies work very closely with a vast array of suppliers, so those standards are really critical to making the whole supply chain work together and work efficiently.

Christina Cardoza: Yeah, I’ve heard that a lot. We’re just at the beginning of some of these advancements and evolutions that are happening in all these spaces, and that’s really exciting because I never saw this technology being used in this way, especially in these areas making such big improvements. So I can’t wait to see where else all of this goes. But unfortunately we are nearing the end of our time here together today. But, before we go, just want to see if there’s any final key thoughts or takeaways you want to leave our listeners with.

John Dwinell: I say, be open to the technology. It’s moving quickly. It’s really very exciting, but it can bring a lot of efficiencies. Find partners who understand supply chain and understand the technology—that’s really critical. Someone who can work closely with you on this journey and help bring in the best solution so you can have the most intelligent supply chain possible.

Christina Cardoza: Great. Well, with that, John, I just want to thank you again for joining us today.

John Dwinell: Thank you, Christina. I really appreciate you having me on your podcast.

Christina Cardoza: Absolutely. It’s been such an insightful conversation, and I invite all our listeners to check in on the Siena Analytics website, see what else they’re doing, see how else their solutions continue to evolve—as well as insight.tech as we continue to cover this space. And so thank you to our listeners for tuning in. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Smarter Seas: How Cargo Ships Save Big with Edge Computing

Cargo ship operators are no different from the rest of us vehicle owners. They want better fuel economy, lower insurance costs, and to limit wear and tear on their vessels.

But, unlike many of us, they don’t have the luxury of staying home when the weather is bad or seas are rough. They are in the business of delivering valuable freight, and the freight must be delivered on time—often even at the peril of the cargo, ship, and crew.

To balance tight schedules and even tighter margins with the need to protect their vessels, operators now use fleet-management software and edge computers onboard their ships. This fleet-management technology assists with navigation to help captains reduce fuel consumption and greenhouse gas emissions, avoid wave-slamming impacts, and improve vessel lifespan.

The only challenge that remains is finding electronic systems that can communicate with the cloud from the middle of nowhere, and withstand the harsh environment of the high seas.

The Makings of a Ruggedized Embedded PC

You can’t find an embedded PC capable of being deployed on a cargo ship on the shelf at your local electronics retailer. Design criteria for a maritime embedded PC start with the ability to withstand extreme temperatures; high shock and vibration tolerance; and, of course, resistance to corrosion, wind, and water. Though these protections serve no electronic function themselves, how you address them has major implications for the rest of the system design.

In addition, salty sea air can cause shipboard electronics to corrode and fail, which means they must be encapsulated in airtight packaging. But because these systems must be waterproof, they can’t be built with traditional fans or active heat sinks. So designers of maritime embedded box PCs must contend with restricted airflow from early in the development lifecycle.

This design requirement, as well as the likelihood the system will encounter extreme heat and cold, eliminates all but the most power-efficient processors, which generate the least heat to dissipate.

That’s why one cargo ship operator enlisted the engineers at DFI. DFI offers fanless, extreme ruggedized edge computing with the ECX700-AL, powered by Intel Atom® processors to meet all maritime-environment design needs.

See how one operator was able to improve route planning with a 25% reduction in fuel consumption and #CO2 emissions, thanks to these capabilities of the ECX700-AL. @DFI_Embedded via @insightdottech

The ECX700-AL, used in the cargo ship operator’s fleet-management application, can leverage quad-core Atom® processors that consume between 9.5W and 12W, though a dual-core version of the fanless PC is available that draws just 6W. This helps prevent the platform from overheating, even in operating environments of up to 70°C.

The Atom SoCs are wrapped in the ECX700-AL’s IP67- and IP69K-rated enclosure, which seals the system against dust, immersion in water, and high-pressure water jets. The system is also equipped with waterproof connectors and a smart vent that expels any water that breaches the extruded metal housing (Figure 1).

Image of DFI’s ECX700-AL fanless, extreme ruggedized edge computing device.
Figure 1. The ECX700-AL is equipped with IP69K-rated waterproof connectors and a smart vent that expels any water that penetrates the system’s external housing. (Source: DFI Inc.)

The whole package has been shock and vibration tested to MIL-STD-810G, demonstrating that it can withstand the harsh wave impacts often experienced at sea.

An Extreme Ruggedized Edge Computer in Action

To deliver even more functions, the cargo ship operator integrates the ECX700-AL with its AI software, Intelligent Marine System. It uses a sensor unit placed outside the ship to track inventory in the vessel’s shipping containers, monitor weather conditions, track engine conditions, and manage the amount and quality of air delivered into the ship’s engines to ensure optimal speed and fuel economy. All this information is sent back to the captain so they can make informed decisions.

The system can also plug into a ship’s control network, sending and receiving data over a Controller Area Network (CAN). The platform contains plenty of other I/O and connectivity options as well, including a CAN bus for connecting sensor units; an Ethernet output and HDMI port for interfacing with multifunction displays; dual Wi-Fi connectors for accessing local wireless networks that also host devices like inventory tags; and a SIM slot and two 4G/LTE cellular antenna connectors for relaying onboard data to and from the cloud.
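For a rough sense of how an onboard application might ingest readings from a sensor unit over CAN, here is a minimal Python sketch using the open-source python-can library. The interface name, the sensor’s frame ID, and the collection logic are all illustrative assumptions, not part of DFI’s or the operator’s actual software.

import can

ENGINE_SENSOR_ID = 0x100  # hypothetical CAN ID for an engine-sensor node

def read_engine_frames(num_frames=5):
    """Collect raw payloads broadcast by the engine-sensor node."""
    payloads = []
    # "can0" assumes a SocketCAN interface on an embedded Linux system.
    with can.interface.Bus(channel="can0", bustype="socketcan") as bus:
        while len(payloads) < num_frames:
            msg = bus.recv(timeout=1.0)  # wait up to 1 s for a frame
            if msg is None:
                continue  # bus was quiet; keep waiting
            if msg.arbitration_id == ENGINE_SENSOR_ID:
                payloads.append(bytes(msg.data))
    return payloads

if __name__ == "__main__":
    for frame in read_engine_frames():
        print(frame.hex())

In practice the raw payload bytes would be decoded against the sensor vendor’s message definitions before being forwarded to the fleet-management software or the cloud.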

In the cargo ship operator example, the operator was able to improve route planning, cutting fuel consumption and CO2 emissions by 25%, thanks to these capabilities of the ECX700-AL. With better navigational insights backed by the intelligence of the cloud, the company was also able to reduce wave-slamming impacts by 70%. And, finally, these benefits resulted in a corresponding reduction in insurance premiums, which dropped 20%.

Life on the Smarter Seas with Maritime Technologies

Independent studies suggest that anywhere from 65% to 80% of accidents at sea are the direct result of human error—a statistic that could most certainly be lowered with increased levels of automation. With a beachhead established on cargo ships thanks to edge computing platforms like the ECX700-AL, opportunities to further optimize these journeys with pioneering embedded technology aren’t far behind.

With integrated Intel® HD Graphics as well as video encode and decode blocks, current and next-generation Intel Atom processors pack more than enough performance for the job. And it’s a good thing they do, because with such rigid design requirements for maritime systems, there aren’t a lot of options.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Autonomous Vehicle Technology Drives AI Delivery Robots

Imagine getting robotic deliveries of hot food, groceries, clothing, and even your next iPhone right to your doorstep. As a consumer, it’s the height of convenience. For retailers, it solves the last-mile challenge.

One robotic delivery solution is already a reality at eight locations across Europe and the Middle East. Shoppers in Istanbul, Madrid, Dubai, and London can have their purchases delivered by an autonomous, stylish robot resembling a cooler on wheels.

Developed by DELIVERS.AI, an autonomous mobility platform company based in the UK, Autonomous Delivery Robots are eco-friendly, zero-emissions vehicles. The technology that propels the robot opens a world of possibilities beyond making shopping convenient or solving retailers’ challenges in finding and paying enough delivery people.

The idea for the robot surfaced during Covid lockdowns. “We observed that online orders boomed in terms of food, grocery, e-commerce and parcel deliveries because while we were at the home, we were ordering through the online channels,” says DELIVERS.AI Founder and CEO Ali Yarali. “That’s why we came up with the idea to create an autonomous mobility platform.”

The DELIVERS.AI sidewalk technology is the company’s first product and service. But in the future, the platform may include autonomous vans, trucks, and drones as well.

DELIVERS.AI technology is device-agnostic. This means the artificial intelligence, 3D mapping, and smart cameras that guide the stylish robots through sidewalks and pedestrian crossings—at safe pedestrian speeds—can be used in other vehicles, including scooters for people with mobility issues.

The Future of Sustainable Delivery Robots is Today

Today, DELIVERS.AI robots have been seen by more than 10 million people, in places such as Zaragoza, Spain; Brussels; and Gdansk, Poland. To get a robotic delivery, customers simply choose it on the ordering app.

For some shoppers, it’s a curiosity. People have been known to pose with the robot and post pictures on social media, a form of advertising for retailers. “During Covid lockdowns, shoppers liked the robot as a hygienic, contactless option,” Yarali says.

It first went into service at the Istanbul Technical University in Turkey, home base of the DELIVERS.AI technical team. There, it conveyed high-value items such as smartphones and laptops around campus. Then it started delivering restaurant takeout as well (Video 1).

Video 1. DELIVERS.AI Autonomous Delivery Robot in action. (Source: DELIVERS.AI)

Using a campus as a testing ground brought the advantage of a controlled environment where designers and developers could observe the vehicle in action to make improvements. The team had to satisfy a number of requirements. For one thing, the robot had to be environmentally friendly. It runs on electric batteries, generating no direct carbon emissions. And the vehicle’s outer shell is made of recyclable plastics.

To maximize cargo space, designers and engineers kept the vehicle electronics as compact as possible. The robot is wide enough to hold a large pizza box, says Yarali, and it can carry 40 kilograms (88 pounds) in its 90-liter (24-US-gallon) payload compartment.

DELIVERS.AI developed its own 3D mapping technology so it wouldn’t have to depend on third-party suppliers for the robot’s specific needs. “Robot navigation and path planning are not new problems. However, solving these problems in a dynamic urban environment, with densely populated public areas and where autonomous robots must navigate through sidewalks, is quite challenging,” says Yarali. “But we solved the problem, and we are improving day by day.”

Built on a chassis of aluminum compound, the vehicle navigates different terrains in dry, wet, and snowy conditions with its 12-inch wheels. When developing vehicle versions for clients, the engineering team takes into account factors such as sidewalk width and height—and potential obstacles. Sensors and 360-degree intelligent cameras help it navigate around crowds. The vehicle also can ask people to let it through, greet customers, and ask for assistance if it falls.

Weather was another consideration. In hot climates such as Dubai and Zaragoza, the robot has a special cooling system to help preserve its contents and prevent overheating of sensors and electronics. A tele-assistance team can remotely take over the robot if it needs help, for example when crossing a street. Plus, field technicians are available to change the robot’s batteries or provide on-site assistance.

#Retailers are looking for new ways to solve their last-mile delivery problem. #Robotic deliveries provide an efficient, #sustainable, affordable approach. @AiDelivers via @insightdottech

Retail Last Mile Delivery and Beyond

Whether they are a supermarket offering home delivery, a click-and-collect dark store, or a local shop trying to compete with large online outlets, retailers are looking for new ways to solve their last-mile delivery problem. Robotic deliveries provide an efficient, sustainable, affordable approach.

“No big capital outlay is required,” says Yarali. “Clients pay for use of the robot with monthly fees through a mobility-as-a-service model or per delivery. These kinds of models make their financials much better and more attractive.”

As the technology matures, Yarali envisions plenty of applications for the autonomous delivery platform. It could be used for delivery vehicles of all types, including drones. It could power e-scooters, both for personal deliveries and to move people with disabilities. Currently, DELIVERS.AI is working with a U.S. auto manufacturer on autonomous minivans.

Intel technology is fundamental to the vehicle’s operation. The robot uses Intel® processors, three Intel® RealSense D455 depth cameras, and five fisheye cameras. “We are really happy to be a member of the Intel community. They enable us to connect with potential collaborators and provide the support we need,” says Yarali.
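To illustrate how depth data from a RealSense camera might be consumed for a simple obstacle check, here is a minimal Python sketch using the pyrealsense2 library. The stream settings, the center-pixel sampling, and the 1.0-meter threshold are illustrative assumptions, not DELIVERS.AI’s production navigation code.

import pyrealsense2 as rs

# Configure a depth stream (resolution, format, and frame rate are assumptions).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if depth:
        # Distance in meters at the center pixel of the depth image.
        center_distance = depth.get_distance(320, 240)
        if center_distance and center_distance < 1.0:  # 1.0 m threshold is illustrative
            print(f"Obstacle ahead at {center_distance:.2f} m, slow down")
        else:
            print(f"Path clear (center depth: {center_distance:.2f} m)")
finally:
    pipeline.stop()

A real robot would fuse depth from all three cameras with the fisheye views and the 3D map rather than sampling a single pixel, but the same pipeline-and-frame pattern applies.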

Intel support will remain key as DELIVERS.AI finds new paths for its vehicle technology. Along the way, the company is providing value, he says.

Despite general fears that robots may replace humans, Yarali says that is not his intent. “We are not replacing human delivery jobs. We are creating new jobs—skilled jobs like tele operators, tele assistant people, and on-field personnel.” If anything, he says, DELIVERS.AI is adding value to the economy by helping create a whole new category of jobs as life becomes more digital.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Powering the Future: Grid Modernization Efforts in Action

It’s not breaking news to say that there’s an extraordinary demand for electricity in today’s world, as well as the need for that electricity to be reliable, affordable, and—increasingly—sustainable. What many of us don’t spend much time thinking about, though, is how much work goes on at the back end to make this possible. And, beyond that, how energy requirements can even be a driving force for innovation in the power grid.

Our panel of experts from Intel, Dell Technologies, ABB, and VMware gets into the nuts and bolts of grid modernization. The panel includes: Prithpal Khajuria, Director of Energy and Sustainability at Intel; Anthony Sivesind, Edge Solution Architect at cloud computing company VMware; Jani Valtari, Technology Center Manager at the industrial digitization leader ABB; and Russell Boyer, Global Energy Field Director at technology company Dell Technologies. And they’re all deeply committed to servicing not only the power requirements of today but also the requirements of tomorrow.

What is the current state of the power grid and its recent evolutions?

Prithpal Khajuria: The grid architecture has not changed in a hundred years, but in the last decade we have started shifting towards renewables, where the most important thing is the penetration of renewables at the edge of the grid—in homes, businesses, parking lots. There we have started deploying large-scale renewable energy, mostly solar, and that has started pushing energy back to the grid.

The grid was designed as a one-way highway of electrons moving from utilities to homes and businesses. But this addition of renewables at the edge of the grid has started a two-way flow of electrons. A system was designed to operate one way, but now we have to adapt it to the new scenario. That requires us to rethink the architecture of the grid—how can we add more intelligence technology into it to get better visibility and faster decision-making capabilities going forward?

Anthony Sivesind: I’ll add onto what Prithpal just said. I agree—what once was a one-way power flow is now seeing a great shift; what once wasn’t a problem for utilities now is. And along with the power flow we’re seeing an increase in the penetration of point loads, an increase in the density of loads. Two examples of where we’re seeing that are data centers and electric vehicles.

That is a challenge for utilities, along with an increase in extreme weather events and physical and cyber attacks, all while maintaining this aging grid infrastructure. What VMware wants to do is to help implement a flexible platform so those utilities can improve their capabilities.

Russell Boyer: What the utilities have to do is figure out how to take these various challenges—like weather events and cyberattacks—and add more intelligence, add more operational capabilities to turn that data into insight, and ultimately to improve the reliability and the resiliency of the grid.

Jani Valtari: It’s a tricky challenge. We need to increase the amount of renewable energy; we need to decarbonize the energy sector. At the same time, a bigger part of society is going to require electrical energy. So we need to be at the same time very flexible, very adaptable to renewable generation, but also more secure than before. And the way to do that is to add more digital technologies. And to do that in an affordable way, we need standardized platforms—scalable solutions that can be widely deployed to many different locations across the globe.

Where are the biggest opportunities for grid modernization?

Prithpal Khajuria: What we have been doing historically is building a model-driven grid, and building it from the top down. But now we need to go bottom up, by building intelligent, data-driven systems at the edge of the grid—which in this case is the substation. So how do we build the intelligent edge and then use it to collect more data, normalize that data, and then extract more intelligence for greater visibility and faster decision-making?

We can address these challenges, and those of meeting ESG goals, by maximizing the use of renewable energy. And the only way we can maximize the utilization of renewable energy is by having greater visibility and insights. That’s what Intel sees—building a data-driven grid going forward.

“Now we need to go bottom up, by building intelligent, #data-driven systems at the #edge of the grid” – Prithpal Khajuria, @intel via @insightdottech 

How do you see emerging technologies being used to meet the needs of today?

Russell Boyer: Dell Technologies has been investing in edge and IoT for several years now, in order to harden our overall compute infrastructure and be able to offer more capabilities out at the edge. So in order to support all of this automation and real-time operational decision-making, we need more capabilities, more compute, out at the edge in the substation. And that’s just to be able to meet the requirements of today.

If you look at sustainability targets, we’re going to have to have a landing place for the AI models of the future. Today we’ve got aging infrastructure in the substation, and we really need to modernize that, and modernize it at scale, so that we can not only meet the current requirements but also those of the future.

In one example, as we start having more virtual power plants, where there’s a significant amount of generation on the distribution side, we’re going to need to improve those operational technologies to better manage that and to achieve those ESG targets that Prithpal mentioned, making sure we favor those sustainable sources of energy.

Jani Valtari: The traditional way of handling protection control in a substation has been to use devices that you install once, and then you let them run for 10, 15 years and don’t need to touch them. Now we actually need to change the environment on a very frequent basis.

We also need to make our designs more data driven, not just so that we can collect data and get some insights but so we can react fast based on data, even in the millisecond scale. You can run things on a virtual platform and really quickly adapt whenever there’s a need to make a change in the network.

How is Intel tackling grid modernization?

Prithpal Khajuria: Intel is looking at grid modernization from multiple angles. One angle is talking to the end customers—in this case, the utilities—first. What are the challenges they are facing? How can technology help them? One of the biggest challenges we see, which Jani touched on, is the penetration of these fixed-function devices; they were designed to do one thing and only one thing. So Intel put together a team to build the next-generation infrastructure to standardize the hardware, and to disconnect the software from the hardware.

Intel provides the core technology, the ingredients, which is our silicon and the associated technologies around it. Then Dell comes with its technologies; its capabilities layer on the top. Then VMware comes with its software-defined infrastructure on the top of that, and then ABB comes with the power-centric technologies on the top of that. That is what the Intel vision is—bringing the whole ecosystem together to build this scalable infrastructure that can accelerate the adoption of technologies in the utility sector to drive the goals that each utility or each country in the world has for maximizing renewables and minimizing fossil fuels.

What is the value of partnerships and coalitions for grid modernization?

Jani Valtari: We’ve been looking towards a software-oriented approach already for two decades—trying to really shift things from hardware-centric to software-centric, and going from model-based towards data-based, from fixed systems to very volatile and fast-changing but still super-reliable systems.

Recently we released the world’s first virtualized protection and control system. But we cannot do this whole thing alone, so it’s been very good to have solid collaboration. For example, we need super-reliable hardware to run the algorithms, so there’s hardware development with Intel and Dell. Also, we are not experts on the virtualization environments, and the collaboration with Anthony and VMware has also been important for us.

Russell Boyer: We’ve got to create a coalition of the willing in order to innovate. Intel has done a great job of bringing together a coalition of various software and hardware vendors, together with clients, to really put together a standard—we’ve got to influence the standards.

The other thing is we’ve got to have the collaboration with all different types of partners. As we move forward, we want to make sure that we have a whole portfolio of options to be able to support these modern platforms at the edge.

Anthony Sivesind: And not just with the partners here either but also with the utilities—I want to tip my hat to Intel for engaging all the utilities. Intel has spurred the industry with a couple of coalitions in that realm: E4S in Europe, the vPAC Alliance in America. And that’s a great chance to build those standard specifications that Russell mentioned. 

Tell us more about the importance of those industry standards.

Jani Valtari: In order to go in the direction where a solution is scalable and can be widely used in different places, we need to do everything based on global standards. In the power sector the key standard is IEC 61850. It has standardized items related to hardware; it has standardized items related to software, related to communication, related to many different protocols and aspects. When we put that as our center point, we are in a good position to create solutions that can be very widely used.

Can you expand on the grid modernization ecosystem?

Prithpal Khajuria: The Intel strategy is to make the customer—the utility—part of the journey from day one. Because at the end of the day, the customer has the problems, and they want to buy the solutions for those problems. So we get them engaged, and then we bring in a best-of-breed ecosystem with their capabilities in each area. ABB—more than a hundred years of experience in the power industry. Look at VMware—invented virtualization technology. Dell—a leading provider of hardware solutions and software components.

And Anthony touched on the fact that we have created two industry alliances focused purely on the power industry: the E4S Alliance, focused on digitalization of secondary substations, where the customers and utilities engage with each other. And the vPAC Alliance, which is focused on virtualization of automation and control in the substations.

So that has been the vision of Intel: Bring everybody together, accelerate the adoption of the technology, and deliver the benefits to the utilities and their customers.

Any final thoughts or key takeaways when it comes to grid modernization?

Jani Valtari: One key message is that technology is ready for very rapid grid modernization, and at ABB we’ll be really happy to engage with our customers on the best way to take them there.

Anthony Sivesind: I’ll echo that: We’re ready now. We have the technology, and VMware is also ready to help utilities in any way that it can to train them and bring their teams together.

Russell Boyer: If we’re going to achieve these ESG targets, we really have to accelerate the deployment of new technology. And Dell is committed to developing the latest technology to make that happen.

Prithpal Khajuria: My message is to the utilities: Let’s put a migration plan together. We can walk you through the journey of a pilot or proof of concept, to a field pilot, to a deployment. That migration plan needs to be stitched together, and Intel and its ecosystem partners are here to help.

Related Content

To learn more about efforts to modernize the grid, read Grid Modernization Powers the Way to a Decarbonized Economy and listen to The Driving Forces Behind Grid Modernization. For the latest innovations from these companies, follow them on LinkedIn at: ABB, Dell Technologies, Intel Internet of Things, and VMware.

 

This article was edited by Erin Noble, copy editor.

Machine Vision Automates Workplace Safety in Manufacturing

Workplace safety is top of mind for every manufacturer. But achieving it can be a difficult and costly process. Reducing risk in a factory means constant monitoring to identify environmental safety issues and ensure proper workplace precautions are followed. And this kind of vigilance can be labor-intensive.

“On average, a 10,000-square-meter factory has to have a minimum of 10 safety personnel, with two HSE (Health, Safety, and Environment) officers required for video supervision and monitoring,” says Stephen Li, CEO at Aotu, an AI company providing machine vision-based health and safety solutions.

That represents a significant investment for manufacturers, especially as industrial facilities need round-the-clock supervision. In addition, manual monitoring comes with its own limitations.

“Plant safety personnel work hard—but they’re stretched thin. They can’t detect most health and safety problems immediately, and they’re never going to cover 100% of the scenarios,” says Li. “Plus, there are delays in responding to issues in a timely fashion, since a human being has to actually make a phone call or go to the site of a safety violation in person to observe or correct it.”

It’s a challenging situation for manufacturers, who are committed to worker safety but are also under pressure to tighten budgets and optimize processes. But new machine vision health and safety solutions may provide an answer that keeps factory workers safe and satisfies the demand for greater efficiency.

Machine Vision Automates Safety Monitoring in Bottling Plant

A deployment of Aotu’s machine vision solution at a bottling plant in China is a case in point.

The plant is operated by a major beverage company. The sheer size of the facility means many different areas to monitor, including rooftops, ceilings, boilers, waste areas, warehousing facilities, and more. Plus, the bustling site is filled with factory workers performing a wide variety of tasks, making supervision of employee behavior a complex undertaking.

In collaboration with Intel, Aotu developed a machine vision-based health and safety solution designed to analyze video feeds from the bottling plant and automatically alert safety personnel when an issue is detected.

“AI can monitor workplace environments in real time, identifying potential hazards and ensuring compliance with safety protocols. This proactive approach to safety can reduce accidents and improve factory workers’ well-being,” says Zhuo Wu, Software Architect at Intel.

The system’s AI algorithms are configured to monitor for environmental safety issues. The deployments cover nearly 1,000 key supervision points within the factories. At the same time, AI also analyzes video feeds for behavior-based safety violations: failure to wear proper protective gear, unsafe climbing and walking, unauthorized access to high-risk areas, violations of maximum occupancy limits, and so on.

If the system detects a problem, it captures a 30-second recording of the safety issue, classifies it as either a major or minor emergency, and sends an alert to a human supervisor for verification and response. If the problem is serious enough, a safety official can remotely trigger an on-site alarm and warning message to alert workers to imminent danger. For less severe issues, safety personnel have the option to follow up later for resolution and worker training.
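To make that detect-classify-alert flow concrete, here is a highly simplified Python sketch. Every name in it (the event fields, the severity rules, and the notification and alarm hooks) is hypothetical and meant only to illustrate the pattern described above, not Aotu’s actual implementation.

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    MINOR = "minor"
    MAJOR = "major"

@dataclass
class SafetyEvent:
    camera_id: str
    violation: str   # e.g., "missing_hard_hat", "restricted_area_entry"
    clip_path: str   # path to the ~30-second recording of the incident

# Violations treated as major emergencies in this sketch (an assumption).
MAJOR_VIOLATIONS = {"restricted_area_entry", "occupancy_limit_exceeded"}

def classify(event: SafetyEvent) -> Severity:
    """Label a detected event as a major or minor emergency."""
    return Severity.MAJOR if event.violation in MAJOR_VIOLATIONS else Severity.MINOR

def notify_supervisor(event: SafetyEvent, severity: Severity) -> None:
    """Send the clip and classification to a human supervisor for verification."""
    print(f"[{severity.value}] {event.violation} at {event.camera_id}: {event.clip_path}")

def trigger_onsite_alarm(camera_id: str) -> None:
    """Invoked by a safety official if the verified issue warrants an immediate warning."""
    print(f"Alarm raised near camera {camera_id}")

if __name__ == "__main__":
    event = SafetyEvent("cam-017", "restricted_area_entry", "/clips/cam-017-0001.mp4")
    notify_supervisor(event, classify(event))
    # A supervisor reviewing a major event could then escalate:
    trigger_onsite_alarm(event.camera_id)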

After implementing the solution, the bottling plant saw an increase in both the efficiency and the efficacy of its safety program. “The use of AI reduced the workload of the plant HSE staff, and it ensured that safety issues were no longer going to be ignored,” says Li. “In addition, safety awareness among front-line workers improved significantly.”

#MachineVision health and safety solutions are gaining traction among large manufacturers—and their adaptability, cost-effectiveness, and ease of deployment should make them attractive to SIs and smaller industrial businesses. Aotu via @insightdottech

Flexible Platform for Video Analytics

For a machine vision solution to be broadly useful to the manufacturing sector, it must be adaptable. A bottling plant, after all, is quite different from an auto parts factory, a high-tech fabrication site, or a chemical facility.

To create a robust yet flexible machine vision platform for industrial health and safety, Aotu decided to partner with Intel. Together, the companies were able to leverage the capabilities of a number of Intel® hardware and software tools:

  • 11th Generation Intel® Core processors offer optimization and acceleration for deep learning, AI, and machine vision scenarios.
  • Intel® Iris® Xe GPUs are particularly well suited to computer vision tasks such as smart video processing.
  • Intel® Xeon® Scalable processors enable configurations that handle heavier workloads and are also suitable for use in harsher industrial settings due to their ruggedized design and wide operating temperature range.
  • The Intel® OpenVINO toolkit provides pre-trained AI inferencing models and reference models for common industrial safety scenarios—as well as a foundation for the rapid development of custom AI algorithms (a minimal inference sketch follows this list).
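As a rough illustration of that last point, the sketch below loads a pre-trained OpenVINO IR model and runs a single inference on the CPU. The model file name, input shape, and device choice are assumptions made for the example; Aotu’s production pipeline is more involved.

import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("person-detection.xml")          # hypothetical IR file
compiled = core.compile_model(model, device_name="CPU")  # could also target a GPU

# Dummy data standing in for a preprocessed video frame (NCHW layout assumed).
frame = np.random.rand(1, 3, 416, 416).astype(np.float32)

results = compiled([frame])
output = results[compiled.output(0)]
print("Raw detection output shape:", output.shape)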

The use of OpenVINO was particularly important when working with AI models for workplace safety in manufacturing scenarios. Acquiring a diverse data set that covers a variety of safety situations often requires extensive effort—especially when it involves real-world scenarios—and training these models can be computationally intensive and time-consuming. Aotu has a set of tools designed to streamline the process of data collection and labeling, and with OpenVINO integration it can run optimized pre-trained models, greatly speeding up the data set generation process.

“OpenVINO provides a set of tools and optimizations to enhance the performance of AI models. We use it to reduce the model size and improve inference speed without significant loss in accuracy,” says Li.
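One common way to shrink a model along those lines (an illustrative example, not necessarily the exact steps Aotu takes) is to compress an OpenVINO model’s weights to FP16 when saving it, which roughly halves the file size on disk with minimal accuracy impact for most vision models.

import openvino as ov

core = ov.Core()
model = core.read_model("safety-model-fp32.xml")  # hypothetical FP32 IR file
# compress_to_fp16 stores weights in half precision; file names are assumptions.
ov.save_model(model, "safety-model-fp16.xml", compress_to_fp16=True)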

Thanks to Intel’s hardware and software capabilities, the company can offer no-code and low-code AI customization and deployment. This enables end users to execute inference tasks across different devices efficiently, maximizing computing power while achieving low latency and high throughput for their solutions.

Towards Safer and More Efficient Industry

Machine vision health and safety solutions are gaining traction among large manufacturers—and their adaptability, cost-effectiveness, and ease of deployment should make them attractive to systems integrators and smaller industrial businesses as well. Especially as more industrial environments implement automated solutions such as collaborative robots, industrial AI can help ensure that AI-driven robots work safely alongside humans, reducing sickness and injuries.

But beyond the health and safety benefits, the inherent flexibility of these solutions combined with the power of OpenVINO will open other use cases as well. For instance, the platform can be extended to include defect detection, production line automation, predictive maintenance, and supply chain management.

OpenVINO’s “ability to quickly process and analyze visual data makes it an invaluable tool for enhancing quality control, reducing downtime, and increasing efficiency,” Wu says.

In the future, look for computer vision to further the digital transformation of manufacturing in new and innovative ways, making Industry 4.0 safer, more efficient, and more profitable for all.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

The article was originally published on March 24, 2023.