High-Performance Compute + Azure IoT Edge Advance the IIoT

Is it possible to have too much of a good thing? For manufacturers using AI to gain insight from operational technology (OT) data, the answer is, increasingly, yes. OT assets now generate so much information it can’t be analyzed efficiently by cloud-based AI.

“The growth of Industrial IoT (IIoT) means that manufacturers are collecting a huge amount of raw data from the factory floor. But you can’t simply dump all of that to the cloud for processing,” says Penny Chen, Product Manager at Advantech, a manufacturer of edge computing solutions. “To send everything to the cloud is too expensive due to data transfer and storage costs. It’s also resource intensive, which can cause network performance and latency issues.”

This is a problem—but one with a clear solution: Perform a portion of the AI processing workload at the edge, pretreating and filtering OT data before sending it on to the cloud to extract further business insight.

The good news for manufacturers and OT systems integrators (SIs) is that edge AI appliances purpose-built for this task are now available. These flexible, ready-to-deploy solutions bring core AI functionality from the cloud right to the edge, offering efficiency, cost savings, and ease of implementation in multiple manufacturing scenarios.

Modular Edge AI with Azure IoT

Bringing cloud AI analytics to the edge might seem like an obvious answer, but it’s not without challenges. The key to overcoming those challenges is a modular architecture running on proven industrial hardware.

With the Advantech Intelligent Platform with Azure IoT Edge solution, for example, end users decide which Azure IoT modules are most applicable to their use case, and then deploy them from the cloud to an edge AI appliance. Advantech’s gateway software, EdgeLink, handles the important work of collecting and standardizing data streams from the various proprietary communication protocols used by industrial equipment—a major challenge when performing data processing tasks in industrial environments.
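
EdgeLink’s internals are proprietary, but the normalization pattern it performs can be sketched. In the Python sketch below, raw reads from a vendor-specific protocol are converted into one standardized telemetry record; the register map, scaling factors, and tag names are invented for the example.

```python
# Illustrative only: EdgeLink's internals are proprietary. This shows the
# general pattern of reading vendor-specific registers and normalizing them
# into one telemetry schema. Addresses, scales, and tag names are hypothetical.
import json
import time

# Hypothetical register map for one machine: address -> (tag, scale, unit)
REGISTER_MAP = {
    40001: ("spindle_temp_c", 0.1, "degC"),
    40002: ("vibration_mm_s", 0.01, "mm/s"),
    40003: ("motor_current_a", 0.1, "A"),
}

def read_register(address: int) -> int:
    """Stand-in for a protocol driver call (e.g., a Modbus read).
    A real gateway would call the vendor's fieldbus library here."""
    return 421  # canned raw value so the sketch runs end to end

def normalize_snapshot(machine_id: str) -> str:
    """Convert raw register values into one standardized JSON record."""
    record = {"machine_id": machine_id, "ts": time.time(), "tags": {}}
    for address, (tag, scale, unit) in REGISTER_MAP.items():
        raw = read_register(address)
        record["tags"][tag] = {"value": round(raw * scale, 3), "unit": unit}
    return json.dumps(record)

print(normalize_snapshot("press_07"))
```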

The local Azure IoT runtimes then perform whatever AI inferencing is required at the edge, before sending pretreated information to the cloud for additional processing.
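
For a flavor of what such edge pretreatment can look like, here is a minimal sketch of a filtering loop inside an Azure IoT Edge module, assuming the azure-iot-device Python SDK. The input/output route names, tag, and threshold are placeholders, not Advantech’s actual logic.

```python
# A minimal sketch of edge pretreatment in an Azure IoT Edge module, using
# the azure-iot-device Python SDK. The route names ("sensors", "upstream"),
# tag, and threshold are hypothetical.
import json
from azure.iot.device import IoTHubModuleClient, Message

VIBRATION_ALERT_MM_S = 4.5  # hypothetical level worth escalating to the cloud

def run():
    client = IoTHubModuleClient.create_from_edge_environment()
    try:
        while True:
            msg = client.receive_message_on_input("sensors")  # blocking read
            record = json.loads(msg.data)
            vibration = record["tags"]["vibration_mm_s"]["value"]
            # Forward only readings that merit cloud-side analysis; routine
            # data stays at the edge, cutting bandwidth and storage costs.
            if vibration >= VIBRATION_ALERT_MM_S:
                client.send_message_to_output(Message(json.dumps(record)), "upstream")
    finally:
        client.shutdown()

if __name__ == "__main__":
    run()
```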

There are a number of benefits to this approach:

  • Performing AI processing at the edge reduces the total amount of data that needs to be sent to the cloud, cutting costs, reducing latency, and conserving network bandwidth.
  • Data pretreatment at the edge means manufacturers can transform OT data into meaningful information, filter out what is unimportant, and select only the most relevant data for additional analysis.
  • Edge computing offers near real-time insight into what is happening on the factory floor, offering safety and operational efficiency benefits.

Chen stresses the importance of modularity in industrial edge AI, and notes that the selection of Azure as an IoT platform was specifically driven by this concern: “With Azure IoT Edge, you have lots of different cloud intelligence modules that you can choose to deploy to the edge. This lets the end user focus on the business insight that they need, and nothing more.”

Chen says that the choice of Intel hardware was also driven by concerns of flexibility: “Our main concern is to optimize for edge AI performance and functionality, and Intel processors excel at this. And Intel’s wide range of processor options also means that we can meet a customer specification for just about any use case or industry.”

AI at the Edge in Multiple Scenarios

A flexible, modular design means that edge AI platforms can be used in a diverse array of industrial settings.

Advantech has already deployed its solution at a global tire manufacturing enterprise as well as with an OT SI serving the transportation industry in Europe. But of course, edge AI is useful in just about any scenario in which equipment monitoring, process optimization, and resource management are a concern—and that describes everything from factory environments and urban construction to energy production and logistics (Video 1).

Video 1. A demo showing how edge AI can be used to optimize a beverage manufacturing process. (Source: Advantech)

For SIs serving these sectors, the availability of ready-to-deploy edge AI platforms, built on proven, well-documented technology, offers the opportunity to sell innovative edge AI-enabled solutions to customers without being hamstrung by technical barriers.

Solving Lingering IIoT Pain Points

The ability to deploy cloud AI processing logic to the edge is a huge step forward for the digital transformation of industry. But some issues remain: the complexity of data integration when working with OT assets; the significant time and effort required to handle the associated programming tasks; the difficulty of ensuring uptime at sparsely staffed or remote locations.

In response, vendors of edge AI-enabled solutions are attempting to simplify IIoT deployment and management for their SIs and end users.

Advantech, for instance, is developing WISE-Edge365, a no-code SaaS platform that will allow end users to provision devices and monitor data in real time—all from a preconfigured, industry-specific dashboard designed to help with data visualization.

“The goal is to offer an integrated, user-friendly platform that gives users complete connectivity and device management over everything, edge and cloud,” says Chen.

The upshot is that as more manufacturing companies continue on the path of digital transformation, they will find themselves in an increasingly mature edge AI-enabled product marketplace, one in which the benefits of edge AI are delivered as seamlessly as IT services are today.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

High-Impact Spaces Say “Hello!” to the Hybrid Workforce

The way we work is forever changed. At the beginning of the pandemic, most of us started off working from home and got comfortable using videoconferencing software. As quarantines lifted, workers began splitting their time between home and company offices, and the use of these platforms continued. Today, it’s the norm for meetings to comprise a mix of in-person and remote attendees via Zoom or Microsoft Teams.

But as the way we work evolves, the spaces we work in must evolve, too. Hybrid workforces are driving the creation of “high-impact spaces” in office buildings and other public spaces designed to support in-person and remote users simultaneously (Figure 1). Purpose-built to maximize productivity and engagement of distributed groups, high-impact spaces deploy customizable blends of video processing, voice-activated camera switching, echo cancellation, edge computing, connectivity, and other technologies that transform traditional meeting rooms into unifying hybrid work environments.

Illustration of a conference room depicting a blend of hybrid work technologies: A/V equipment, lighting, and edge computing.
Figure 1. High-impact spaces leverage a blend of advanced A/V equipment, lighting, and edge computing in an environment that unifies hybrid workforces. (Source: Q-SYS)

For a high-impact space to deliver seamless meeting experiences for businesses, schools, and other organizations, all these disparate technologies must work in unison. In addition, it must have automation built in to identify who is talking and where they are, and to adjust cameras, lighting, audio equipment, and displays based on that location.
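
As a rough illustration of that room logic, the sketch below picks the microphone zone with the strongest speech level and returns the matching camera preset. The zone names, threshold, and preset numbers are invented for the example; a real room would use the A/V platform’s own APIs and echo-cancelled microphone signals.

```python
# A minimal sketch of voice-activated camera switching. Zone layout, levels,
# and presets are hypothetical, not any vendor's implementation.
import math

ZONE_TO_PRESET = {"north_table": 1, "south_table": 2, "lectern": 3}
SPEECH_THRESHOLD_DB = -35.0  # ignore zones with no active talker

def rms_dbfs(samples: list[float]) -> float:
    """RMS level of one mic zone's audio frame, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))

def select_camera(zone_frames: dict[str, list[float]]) -> int | None:
    """Return the camera preset for the loudest speaking zone, if any.
    Assumes at least one zone is present in zone_frames."""
    levels = {zone: rms_dbfs(frames) for zone, frames in zone_frames.items()}
    zone, level = max(levels.items(), key=lambda kv: kv[1])
    return ZONE_TO_PRESET[zone] if level >= SPEECH_THRESHOLD_DB else None
```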

All of this can seem like a burden to the IT department. But with the right hardware and software infrastructure, retrofitting a normal space into a high-impact one can be a highly automated experience that doesn’t require specialized skills and creates new opportunities for today’s workforce.

Lowering Barriers to High-Impact Workspaces

The requirement that multiple local and remote systems operate as one with unified communications platforms is what separates high-impact spaces from your typical AV project. Therefore, integrating all the components needed to bring a space to life is the most important step in any hybrid office retrofit.

Aside from the AV equipment you’d expect, this can even include smart building systems that help automate “everything about a room and the experience,” says Patrick Heyn, Vice President of Marketing for Q-SYS Americas, an integrated audio, video and control platform provider.

“The tenets of a high-impact space are professional-grade multi-zone audio, video distribution, camera switching, and control. There’s an incredible amount of room automation,” Heyn explains. “The system is taking into account the environmental elements in the room. The room is reacting to people in the room.

“You’re needing to fuse multiple systems together—not only audio, video, and control, but also bringing third-party technologies under a single ecosystem,” he continues. “If you think about putting together a board room, you’re usually looking at a control processor, an audio processor, and a video matrix system. There are at least three or four different processors you need to connect to each other to even get out of the gate.”

Openness in Hybrid Work Environments

Historically, this is where challenges arise for IT departments. Because many AV solutions on the market today are built around specialized DSPs or proprietary ASICs, they often must be programmed with specialized or proprietary languages. Multiply this across all the systems and peripherals needed to create a high-impact space, and the integration and maintenance effort can quickly outweigh any advantages.

Avoiding this outcome means reducing barriers to entry, which means replacing closed, esoteric technologies with open, accessible ones. Q-SYS took this approach by designing its integrated audio, video, and control platform around Intel® Xeon® processor technology, which delivers ample performance for multichannel signal processing, compatibility with a range of IEEE communications protocols, and a broad ecosystem of support.

The company’s off-the-shelf hardware runs on the Q-SYS OS and delivers a flexible software package for managing the components of a high-impact space. The package comprises:

  • Audio, video, and control engines
  • A real-time network packet processor
  • A user-controlled interface server that accepts Lua and JavaScript commands via APIs

The result is a standards-based IT architecture that can be managed from the cloud and simplifies the integration of technologies for high-impact spaces, both from Q-SYS and third parties.
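
To give a flavor of what scripted control against such an architecture can look like, here is a sketch modeled on Q-SYS’s published QRC remote-control protocol (JSON-RPC 2.0 over TCP). Treat the address, port, message framing, and control name as assumptions to verify against the platform documentation for your core.

```python
# A sketch of scripted control over a standards-based AV core, modeled on
# Q-SYS's published QRC protocol (JSON-RPC 2.0 over TCP). The address, port,
# NUL-byte framing, and control name are assumptions to verify against the
# vendor documentation.
import json
import socket

CORE_ADDR = ("192.0.2.10", 1710)  # example address; QRC commonly uses TCP 1710

def set_control(name: str, value: float) -> None:
    """Send one named-control update to the core."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "Control.Set",
        "params": {"Name": name, "Value": value},
    }
    with socket.create_connection(CORE_ADDR, timeout=2.0) as sock:
        sock.sendall(json.dumps(request).encode() + b"\x00")  # NUL-terminated frame

# e.g., duck the program audio when a remote participant starts speaking
set_control("ProgramGain", -12.0)
```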

“The Q-SYS Platform provides the architecture and processing in addition to a portfolio of native devices like cameras, loudspeakers, amplifiers, and touchscreen controllers. We’re handling all the echo cancellation, and we’re handling all the bridging between the AV system and the computer,” Heyn says. “Those audio, video, and control pieces are fully integrated. There’s a singular Intel® processor to drive all these things, which means that an end user or programmer doesn’t have to work to get those pieces to work together.

“There’s no learning curve to get those pieces to talk to each other because they’re already designed to talk to each other. That integration piece is huge,” he continues. “Then when you’re bringing those third-party pieces in, we have this growing library of plug-ins and technology partners that make integration with Q-SYS easy.”

In other words, widespread compatibility with its Intel-based hardware makes the Q-SYS Platform a blank canvas capable of incorporating HVAC, lighting, door locks, myriad sensors, and virtually any other system in a room or building that can communicate over APIs.

“This option lets us transform the platform into whatever the user needs based on the software that’s running,” adds Heyn.

A Doorway to Higher Impact

The high-impact space market is expected to reach billions of dollars as these spaces extend beyond offices to classrooms and other public venues. As hybrid workforces become the norm, these transitions are already taking place.

As organizations prepare for a cyber-physical future, many questions must be answered. “Can we consolidate our space since fewer people are in the office?” and “Should we arrange our space to accommodate groups instead of individuals?” and “Will we be able to get the most from our remote employees?” are some of the basic challenges that operations officers, facility managers, and IT staff face today, according to Heyn.

With open, standards-based AV infrastructure, the answer to all those questions is the same: Yes, as long as it’s high impact.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Achieving Factory Automation: With Siemens

Ask manufacturers if they’re familiar with Industry 4.0 and it’s likely you’ll get a “yes.” But ask if they’re taking full advantage of its benefits, and many of those yesses change to “no” or “I’m not sure.”

As the world becomes increasingly digital, it’s critical that factories keep pace with changes by bringing AI to the shop floor and adopting software-defined factory automation.

In this podcast, we examine key trends transforming industry operations—including the increasing use of digital twins and the expanding role of edge devices and AI capabilities. We also look at why it’s vital that factories close the communication gap between OT and IT teams, implement technology standards, and ensure new manufacturing technologies are as accessible and user-friendly as possible.

Our Guest: Siemens

Our guest this episode is Rainer Brehm, CEO of Factory Automation at Siemens, an industrial manufacturing solution provider. Rainer has been an employee of Siemens since 1999 in various roles and organizations within the company. Today, he is focused on the future of automation and how collaboration can help shape and develop new and reliable products and solutions that are key to its customers’ priorities.

Podcast Topics

Rainer answers our questions about:

  • (2:38) The ongoing evolution of the factory floor
  • (5:23) Challenges associated with industrial digital transformation
  • (9:03) New complications and opportunities that come with edge and AI
  • (14:29) Examples of how others in the industry handle these changes
  • (20:10) The value of collaboration and partners for factory automation
  • (22:37) How to leverage the latest technology updates

Related Content

To learn more about factory automation, watch 5G Is Here: What Does It Mean for the Factory? For the latest innovations from Siemens, follow them on Twitter at @Siemens and LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech, and today we’re talking about industrial trends and transformations with Rainer Brehm. But before we get started, let’s get to know our guest a little bit more. Rainer, thanks for joining the podcast.

Rainer Brehm: Oh, first of all, thanks, Christina, that I’m part of the podcast. Yeah, and I’m really excited on the topic.

Christina Cardoza: Yeah, me too. So, let’s get to know you first though. You’re from Siemens. What can you tell us about yourself and the company?

Rainer Brehm: Well, first of all, I started at Siemens in about ‘99, and I was starting in the business I’m responsible now for, it’s the business around factory automation. So I started there in ‘99. Programmed my first PLC. It was a little bit odd because I studied computer science and it was a different language; it was more a language for an electrician, which is very useful, but it was not the IT language I was used to.

I was working there in Princeton at that time, where we have our corporate research department, really going beyond what exists today. And it’s a very interesting business, because, with Siemens, we are by far market leaders, so imagine every third machine or every third line globally is controlled by a Siemens logic controller, which means this is somehow the hidden champion behind the factory door, optimizing not only factories, optimizing a lot of processes. This could be vertical farming, that could be the metro line in New York, that could be the baggage claim in the airport. So it’s really a quite broad usage of the PLC.

And it’s also one important topic which really helps going forward on aspects of sustainability, because we strongly believe there will be no sustainable future without automation, electrification, and digitalization.

Christina Cardoza: Yeah, absolutely. And I’m looking forward to dig into those topics a little bit further. Wow, since ‘99 at the company. So I’m sure you’ve seen quite a bit of evolution, not just the language that you were talking about, but, like you said, adding all of these new advanced capabilities and new ways to do operations within the factory.

So, to start off the conversation, I’m curious if we can go over how you’ve seen this space evolve, and how it is going to continue to evolve next year—what sort of industry trends can we expect in 2023 and beyond?

Rainer Brehm: You know, the topic of Industry 4.0, I think, is probably well known in that community. And it was starting more than 10 years ago, I think even 11 years ago it started. And they were first ideas, but, due to corona [virus], due to supply chain constraints, that really accelerated significantly. And we see that it’s now really kicking in and it’s getting to be a reality.

So the trends we are seeing that are combining the digital and the real world—kind of digital, which is the simulation model, the digital twins, but then the real operation combining those, it gets more and more reality. So you simulate basically everything up front, and then you implement it. What also becomes now more reality is that you have a feedback loop. So basically when you have a simulation model and you implement it, but then you get the real-time data out of the operations and feed it back to the digital twin, then you can further optimize it.

The leveraging of data is significantly important because with that, now you can feed AI. Because AI isn’t yet really a big thing on the shop floor, but it will become, as all data is going to be more and more available. What we also see is, we call it a software-defined control, software-defined automation: where currently everything is very much bundled and tied with hardware, it’s going to be more decoupled, it’s going to be more virtualized. I think these are trends which we see.

And last but not least, I think, which is very important, especially when we look at the shop floor, the users of those more complex technologies, they are still the persons operating machines. These are no IT experts—the people maintaining the machines in remote locations somewhere in the world—they need to be capable to operate and to maintain those lines, those machines, those infrastructure plants, and therefore the topic of human-centric automation. So, how can we make it as easy as possible? That’s going to be a very important topic for the future.

Christina Cardoza: Yeah, absolutely. Certainly a lot of changes happening in the factory space right now, especially, like you mentioned, in the last couple of years. And you said in your intro that the future of sustainability really isn’t going to be reached without this factory automation. Also, I think the success of these digital transformations that are ongoing in the Industry 4.0 landscape right now.

So I think we know, like you said, all of these benefits and opportunities to these changes, but it can be difficult actually implementing them and deploying them. What are some of the challenges you’re seeing on the factory floor when it comes to trying to reach these trends and goals you just talked about?

Rainer Brehm: First of all, I think a lot of technologies are there. The topic, why doesn’t—or why didn’t—they scale and start scaling is that OT and IT people they simply speak different languages. I experience that within our organization. I’m more the OT guy, yeah? Even I studied computer science, but we have also very big software business, where we have the PLM software. When I talked about connectivity, we have problems with connectivity, I thought the connectivity to the real world, to the equipment, to the sensors, to their drives and so on. The IT person, when he talked about connectivity, he was thinking about connectivity to databases, to cloud, to data lakes. So it was even that word, “connectivity,” was completely differently interpreted.

And what we experience in our company, in Siemens, when I talk with colleagues, we experience that in our customers as well. So there is still a gap between the IT department—even the factory IT department—and the OT persons who are defining how you’re going to automate something, how you set up the equipment, how you set up the lines, how you maintain it to optimize it.

So there is a big topic on the languages. How do you bring the languages together? This could be terms, but this could be also how you, for example now, program the OT landscape, which I said was very much on the mindset of an electrician—not so much of an IT person. I think that is one main topic, how we can do that.

And, for example, we have now introduced a new programming environment called Automation AX Extension. It’s called Extension because it makes the OT world more accessible to the IT people, number one. Number two, the landscape is very, very heterogeneous. So even though the people don’t speak the same language, a lot of the machines doesn’t speak the same language because they’re also from a different vendor. They don’t have standards. So standardization is still missing, that you really can’t scale, you need somehow a standard.

And that also applies even to new machines, to greenfield. But it applies even more to a brownfield, because a factory normally runs, I mean, some factories run a minimum of 10 years, most are 20 years, 30 years. If you go to the energy sector, it might—or at chemical sector—it might run 40 years. So, you have a lot of brownfield, and that brownfield doesn’t speak the language which you might need to scale up. So I think these are the topics: how you standardize on your greenfield, on brownfield, in order to scale it up.

Christina Cardoza: Yeah, this idea of the IT/OT convergence is something that we’ve seen on insight.tech becoming more prevalent over the last year. And I’m excited, as we go into 2023, I think it’s just going to become even more important. And I’m excited to see how companies like Siemens are going to try to bridge those two worlds together. Because, like you said, there are things that need to happen now that just weren’t possible, or you couldn’t even think about when you started in the company. So now that we’re here 20-something years later, there’s a lot to think about. And especially, like you mentioned, the AI capabilities. There’s so many new devices and connectivity and just advanced features and technologies that you can utilize on the factory floor, and how do you now match that up with 20 years of technology or legacy infrastructure?

So I’m curious if we could talk a little bit more about that emergence of these edge devices and AI capabilities—how that’s complicating things, but also benefiting and adding new opportunities in the industrial space.

Rainer Brehm: Exactly. So if you look on an edge device from an IT perspective, that’s something a little bit different than maybe you look from an OT perspective. First of all, I already kind of elaborated a little bit on which people are operating it. I mean that was one of the main topics. So, how can you make it easy? And we brought a lot of cybersecurity aspects in. So, you need to have key management because to onboard a device, for example, believe it or not, that’s already a big, big hurdle. In IT, somebody managing keys is normal. On the OT side, probably most people when they buy our PLCs, for example, they probably disable that functionality because they are too complicated. So how you make this, which is normal on a IT side, accessible?

If you look further, there are some necessities, if you take edge computing. When you talk about edge computing on a shop floor, I think it’s very important that you understand that edge computing has some more requirements. So, for example, it should be in a lot of cases real-time capable. And if I talk about real time, maybe we talk about a jitter of microseconds. Because if you imagine a very fast process, in a microsecond a lot of things could happen already. And, if you’re not reacting fast enough then you might question a machine, or you might get to different results. So the topic of real time is very important.

Secondly, if you then want to deploy AI workload, for example, on a shop floor, and you want to react very fast, it’s important that this AI workload has an inference close to the machine. Simply because of the speed of light you shouldn’t put it far away. This is one aspect. The other aspect is also you want the AI to interact frequently with your real process. So basically you—so you’re going to interfere with the process, so you want to have that kind of close allocation, close to the machine or to the line. On the other side, you also want to take data out of the process and feed it back into the AI. So you also have—and these are a huge, huge amount of data which is produced.

I can give you one example. In our factory in Hamburg, we produce about 10 terabytes of data. So you don’t want to send the 10 terabytes of data into a cloud. You rather want to have them executed directly there where the source of this data is. So that is different, maybe, to a classical IT landscape. Furthermore, we have an industrial edge platform, which is Docker-based. But we need to add, not only real-time capabilities, we need to add also the topic of safety. Because, you know, it’s a little bit like autonomous driving. Safety is a very important aspect. And you could imagine when you want to do autonomous driving in the car industry, you don’t want that the cloud is now defining whether you stop or not if a child is running on the street. You want that being executed as fast as possible directly in the car. So the same is on a machine. If somebody crosses, or a press is going down and somebody has his finger there, it should stop immediately. So you need to have that kind of fast reaction as one of the assets.

And another topic is, why not thinking ahead? Leveraging AI not only for optimizing processes, but also thinking about, couldn’t we use AI for a more autonomous factory? So also there we think, how can we use AI, that a machine, a robot could decide itself what to do? And then it means AI is not only optimizing processes, optimizing, enhancing engineering, but it’s really steering the robot, the machine, and the line. And that application for AI is really, really exciting, because it opens up really new fields for automation.

Because when I started in ‘99, what you automated basically was you automated very repetitive tasks. And mass production was perfect, because mass production, a lot of repetitive tasks. Or you automated something which is predictable. You couldn’t write a program “if-then-else” if you don’t know what is the “if,” and the “then,” and the “else.” So you basically can only automate what you know. If you’re now leveraging AI in automation, you suddenly could automate something which is maybe a “Lot Size One” and which might be unpredictable, so you automate the unknown, which is not possible today. So therefore AI in automation could open completely new application fields.

Christina Cardoza: I definitely agree. And I think we’re only scratching the surface of what’s available or possible out there. There’s going to be new ideas that companies like yourself and the people you work with are going to come up with together. And so you mentioned a couple of different solutions or efforts going on at the company, and I would love to learn a little bit more about how Siemens is working to make this all possible—if you can go over some of the solutions that your guys are using or providing the customers to make it a little easier. Or any use cases or customer examples you could provide us with.

Rainer Brehm: Several ones. I mean, let’s first maybe start with when we apply—because we have our own factories. I mean, so we apply something, what we apply to our customers, we apply it to ourselves here. So, one example of a use case IT/OT leveraging AI is in our plant in Hamburg, where we produce every second product that is going out of the factory, even more in the meanwhile. So it’s a very high throughput, and we have PCB lines, which in the past we—there’s a complex process how you put the components on the circuit board: how are you soldering it, and so on and so on. And at the end we normally did an X-ray in inspection of the PCB. You can’t do it with vision systems, because somehow you need to have soldering points below the chips, so you do X-ray.

In the past, the X-ray machine was a bottleneck. So, leveraging AI, we basically now predict whether this PCB, this individual PCB, has a high quality or not. So everything with a very, very high probability that there is no quality issue, we don’t send to the X-ray machine anymore. We send it then directly, bypassing the X-ray machine, going to the final assembly. With that, we save the X-ray machine, for example. So we’re using data out of the process.

Another topic—optimizing processes—we see currently in the battery industry. You know, this is a big, big investment also in the US with the Inflation [Reduction] Act, a lot of battery manufacturing is produced. Currently it’s hard to scale them up, scale them up on the right quality level of batteries. So, how much material comes in? How many batteries come out? Still that is not a process which is mature enough and optimized. We see—and we’re working with customers—that we need to get the data of the complete processes from the beginning—mixing the slurry—at the end, kind of doing the aging information of the batteries, getting data from the different process steps, looking at those data and optimizing the process end-to-end, which is not done today, but we are working with customers on that.

Another example could be in infrastructure. So, we are using—we are doing tunnel automation. So, if you drive through a tunnel in, I don’t know, in the Alps, or in the Rocky Mountains, or somewhere in the world, there’s a high probability that those tunnels are automated and controlled by our PLCs. What we do, we’re now using AI more and more in order to detect an emergency situation in the tunnel—if there’s a traffic jam, if there’s fire, whatever. So you need to react fast—how do you evacuate the tunnel? How do you switch on or switch off vents, lights, and so on. So we are using, even in infrastructure now, an AI workload aiming to optimize it.

And maybe, last but not least, going back to the factory again, to automate the unknown. We have an interesting application where we’re doing real-time flexible grasping. So, a robot is not programmed, but an AI tells the robot where to grasp an object. So, you can see that on a fulfillment center in the logistic area. So we take something out of a box—we can do that without training the robot on the thing which needs to be picked up. We train the robot on the skill to pick up. So, basically the robot can pick up everything—if the gripper allows it—that is and that needs to be necessary. But, with the skill of grasping, we can automate something, we can grasp something unknown, unpredictable.

And my last use case, which is not reality, but where I invest currently money, because I believe that’s really something interesting, is: can you in the future automate repair? Because if you talk about circular economy, the one topic is, how do you recycle things? And a more interesting thing is, can you repair in the future something? And we know that currently there’s a lack of people capable to repair. And if you take, for example, a car battery in the future, it consists of cells, so can you maybe in the future take a car, take a cell from a Ford in the United States, you go to a workshop, it takes out the batteries. There’s a defect, and a system can automatically detect where is the problem and autonomously repair the battery cell. If you do that, you automate the unknown, because every battery is a unique thing, it has a different lifetime. And can you automate that leveraging AI? So, some of the use cases where I’m really excited, that’s IT/OT convergence leveraging AI, leveraging new technology, really will make a difference in the future.

Christina Cardoza: Yeah, absolutely. A lot of exciting use cases and things to look forward to. I love one of the first ones you talked about, which was actually applying these things to your own factory, because it shows that you guys not only are solving the pain points, but you felt the pain points also, and you have the experience working within your own factory to remove some of these, so that’s great.

And listening to you talk about some of these, they sound like huge undertakings, and I should mention that insight.tech and the IoT Chat, we are produced by Intel®. But I think a lot of these things require collaboration and partnership throughout the ecosystem to make some of these a reality, or to make some of these possible. So I’d love to learn a little bit more about the partnership you have with Intel, and how that’s been valuable to your solution and the company.

Rainer Brehm: First of all, we work with Intel probably, I don’t know, since four decades, way before I started with Siemens. But I know very much that we started in 2012 with the Technology Accelerator Program, the TAP program, where we said, “Hey, if you have an OT workload, the topic, especially on low latency, is a very, very important one.” So we worked very closely with Intel to enable the processes of having a low-latency functionality, especially for those workloads where you need to act in microseconds. So that was very, very fruitful and helped us to use the Intel chips in our controllers, number one. And I think it also—that helped Intel in order to have the processes capable for having more OT, or more real-time, real real-time workload. I mean, that is one important topic.

On the other side, we’re working with Intel currently, I mean it’s really the supply chain crisis, and, also thanks to Intel, I think we were capable to fulfill not all demand of our customers, but thanks to Intel I think we were quite capable to produce as much as possible. And also we’re capable to react fast on changes. And, basically with the digital, our digital product, similar to all the product, we’re also capable, if Intel said, “Hey, that product is not available, but I have a slightly different version of the product available.” We were quite capable of redesigning our product quickly in order to then build in the different product. And also there, thanks to Intel, we had a very, very close collaboration of finding out what fits and what doesn’t fit.

Christina Cardoza: I love that longstanding relationship, that you guys have seen these evolutions throughout the last couple of decades and worked together. And, of course, Intel, every year they’re just releasing new capabilities, new features that are helping you guys solve some of these real-world challenges in use cases you’d mentioned earlier.

So I’m wondering, especially the recent advancements that Intel is making, how the new updates or features being added to Intel® Xeon® processors, for example, how those play a role in Siemens and helping you guys reach some of the goals and trends and transformations we’ve been talking about?

Rainer Brehm: Absolutely. I mean, first of all, on the embedded side, we are now releveraging this kind of low latency. On the other side, as you said, now the new Xeon family, the 4th generation, what we see is—and I mentioned that number—we are producing every day 10 terabytes of data. And now we need to—and that data isn’t really used, it’s used partly, as I said, maybe on the X-ray of our PCB lines—but I think we can do much, much more leveraging of this data. But this data—no controller which is controlling the process was made, ever, for handling this data, computing this data, storing this data. So, but you see a lot of customers which say, “Well, I don’t want to move all the data in the cloud because it doesn’t make sense. I want to use it on premise. Yeah, I want to use it in the factory.” So, for that we see the trend of micro-industrialized data centers, which are not in a room, but maybe even close on a cabinet, close to the line, to the machine, which can compute an immense amount of data.

So that was a reason why we expanded our portfolio, which was currently more on the PLC side, on the industrial PLC side, now really two kinds of data center–like equipment for that high workload on AI, on digital twin, on simulation. And we see that immense—for machine-vision application is also another workload which consumes a lot of compute power. And for that we came to the conclusion, we will bring out a new portfolio leveraging the 4th Gen Xeon® Scalable platform. Looking forward to introducing that in the market pretty soon, in the middle of 2023. So, very excited having that new portfolio element, addressing exactly that need we see on the shop floor.

Christina Cardoza: Exciting stuff. I’m excited for when that release comes out to see what else you and your customers come up with, and how these use cases are going to expand and just advance over the next couple of years. It’s been a great conversation, and unfortunately we are running out of time, but before we go, I’m just wondering if you have any final key thoughts or takeaways you want to leave our listeners with today.

Rainer Brehm: First of all, I strongly believe that for a sustainable future you need to electrify, automate, and digitalize. And, therefore, what we do together with Intel really is a significant contribution for our future. So, number one. Number two, I believe the area of automation will expand more and more while we automate workload, which is unpredictable and maybe a “Lot Size One,” very individualized. And, thirdly, we need to make this technology accessible, available for, I wouldn’t say unskilled people, but make it as user-friendly as possible that OT persons can handle this complex technology. And these are for me the main three topics, and I’m very happy for further collaboration and working on this vision together with Intel.

Christina Cardoza: Absolutely. And we’ll be on a lookout for that new portfolio or solution you just mentioned that leverages some of these Intel® Xeon® processors, some of the new releases coming out, because I think that’s just going to be so huge for the industry and solving these pain points and trends.

But it’s been a great and insightful conversation, Rainer. Thank you again for joining us, and thank you to our listeners for tuning in. If you liked this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And, until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

OT SIs Unlock New Opportunities with Process Automation

Operational Technology (OT) systems integrators must weave a complex web of partnerships among hardware, software, and cloud service providers to deliver complete, end-to-end solutions to their customers. As a result, deploying AI-powered predictive maintenance systems—a must-have for manufacturers—can be out of reach for many OT SIs.

By working with a solutions aggregator such as Arrow Electronics, OT SIs can overcome these challenges and find new opportunities. Arrow collaborates with best-of-breed technology providers to deliver full-stack solutions. It does the costly and time-intensive groundwork of finding, sourcing, testing, and integrating all the system elements.

One example is the Senseye PdM AI-Powered Predictive Maintenance Solution (Video 1). Arrow worked with Senseye to build an edge-to-cloud solution, which includes preconfigured hardware, sensors, and analytics software, as well as expert setup and support.

https://www.youtube.com/watch?v=uWz5yP0_CQk

Video 1. The Senseye PdM solution, delivered by Arrow, provides AI-powered predictive maintenance at scale for manufacturing and industrial organizations. (Source: Senseye)

Predictive Analytics Reduces Operational Downtime

Arrow chose Senseye not only for its leading AI technology but also because the company excels at folding predictive maintenance into end customers’ existing operational processes.

“Successful projects are as much about integrating the solution into the customer’s processes as they are about the technology,” says Andy Smith, Technology Director for Arrow’s Intelligent Solutions Business in EMEA. “Senseye works with the business and operational teams to gain buy-in, then moves through the entire workflow with the goal of demonstrating ROI in a measurable and sustainable way. It’s a very solid eight-step process that can be delivered not only by Senseye but also by systems integrators. They love this because they’re helping customers through the digital transformation process—and adding a lot of value.”

Alcoa Corporation, a global leader in aluminum products, has partnered with Senseye. Alcoa’s business objective was to adopt equipment maintenance best practices, including moving from planned to predictive maintenance.

Initially, Alcoa implemented the Senseye solution at a remote aluminum smelter plant in East Iceland as the first site in a global deployment. Designed as a zero-waste-to-landfill project, this site is among the most environmentally sustainable facilities of its kind.

Connected to existing maintenance systems, the PdM solution analyzes machine condition indicators, providing automatic alerts and diagnostics in advance of functional failures. For example, the system can warn off-site personnel of loose components in saw motors weeks before they could cause problems. The plant reduced unplanned downtime by 20%, improving operating efficiencies and lowering maintenance costs.

Manufacturing Process Automation with Edge AI

On the factory floor, the AI-powered PdM industrial edge computer taps into existing embedded process controllers with PLC data sources, gathering data for real-time analytics. Manufacturers can also add new sensors to equipment that lacks PLC components, allowing managers to glean details on machine temperature, vibration, current load, and more.

With help from the Intel® OpenVINO™ toolkit, system software uses AI inferencing to prefilter and normalize all sensor data, converting it from a proprietary format into a cloud-ready protocol. The Intel® processor-powered machine, designed to operate in harsh industrial environments, backhauls the data to the Senseye platform. Combining data from various sources into analytics reports and customizable dashboards, the cloud-based platform gives manufacturers the ability to scale the solution easily across multiple sites.
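
The solution’s actual ingestion path is proprietary, but the normalize-and-backhaul pattern can be sketched. Below, a hypothetical binary sensor frame is decoded and published as cloud-ready JSON over MQTT using the paho-mqtt library; the frame layout, broker address, and topic are invented for the example.

```python
# Illustrative only: the frame layout, broker, and topic are hypothetical.
# Shows the pattern of converting a vendor-specific binary sensor frame into
# a cloud-ready JSON message over MQTT via the paho-mqtt library.
import json
import struct
import time

import paho.mqtt.client as mqtt

def decode_frame(frame: bytes) -> dict:
    """Unpack a hypothetical 12-byte frame: machine id, temperature, vibration."""
    machine_id, temp_c, vib_mm_s = struct.unpack("<Iff", frame)
    return {"machine_id": machine_id, "temp_c": temp_c,
            "vibration_mm_s": vib_mm_s, "ts": time.time()}

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x needs a CallbackAPIVersion arg
client.connect("edge-gateway.example.com", 1883)

def publish_reading(frame: bytes) -> None:
    """Forward one decoded reading to the analytics backhaul topic."""
    client.publish("plant/line1/condition", json.dumps(decode_frame(frame)), qos=1)
```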

Not every AI algorithm is alike. Most predictive maintenance solutions start by scanning sensor data for anomalies, flagging unusual readings as a sign that the machine needs attention. “The problem with this approach is that almost every machine has a different signature from the next, even if they use the same type of motor and drive,” says Smith.

Senseye takes a more nuanced approach called fingerprinting, which recognizes that every machine is different. After a short period of time running on any device, the AI system builds up a picture of its individual profile—known as a “fingerprint”—that becomes the model for its future health. When the machine starts to drift from the fingerprinted model, the system sends out an alert that maintenance is needed.
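
Senseye’s algorithms are far more sophisticated than this, but the core fingerprinting idea can be sketched: learn a per-machine baseline during a calibration window, then alert when readings drift outside a tolerance band. The window size and sigma threshold below are illustrative only.

```python
# A minimal sketch of the "fingerprint" idea, not Senseye's actual algorithm:
# learn each machine's own baseline, then flag drift away from it.
import statistics

class Fingerprint:
    def __init__(self, calibration_window: int = 500, tolerance_sigma: float = 3.0):
        self.window = calibration_window     # readings used to learn the baseline
        self.tolerance = tolerance_sigma     # allowed drift, in standard deviations
        self.samples: list[float] = []
        self.mean: float | None = None
        self.stdev: float | None = None

    def update(self, value: float) -> bool:
        """Feed one reading; returns True once the value drifts off-fingerprint."""
        if self.mean is None:
            self.samples.append(value)
            if len(self.samples) == self.window:  # calibration complete
                self.mean = statistics.fmean(self.samples)
                self.stdev = statistics.stdev(self.samples)
            return False
        return abs(value - self.mean) > self.tolerance * self.stdev

vibration = Fingerprint()
# e.g., alerts = [ts for ts, v in readings if vibration.update(v)]
```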

“Senseye has spent years researching and refining this approach,” Smith says. “It’s a point of great difference for them and adds a lot of value to the solution.”

OT SIs Unlock New Opportunities

Working with an ecosystem of OT SIs, Arrow ensures that these partners benefit from faster time to market and new business opportunities—while their manufacturing customers improve operations and save money.

And as adjacent technologies such as computer vision and private 5G networks emerge and impact the manufacturing space, Arrow will continue to deliver new solutions around anomaly detection, worker safety, and cybersecurity.

“We’ve seen a clear need in the market for someone who can play this role in orchestrating solutions through OT SIs,” says Smith. “We’ve got lots of work to do as manufacturing undergoes digital transformation, and we’ll continue to innovate.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

2023 IoT Influencers to Watch

2023 will be yet another transformative year for the Internet of Things. And as the source for the latest and greatest IoT business solutions and trends, insight.tech rings in the new year with the top technologies, strategies, markets, and designs you can expect.

Stay up to date with all the industry trends, news, and event coverage from edge computing to AI development to sustainability, and make sure you follow the hottest influencers and top thought leaders in the IoT space.

Dr. Marcell Vollmer – 5G

Twitter

LinkedIn

After advising C-level executives on digital transformation and innovation—with a focus on procurement, supply chain, and global operations—you’ll find this tech-industry-focused keynote speaker tracking the latest trends over a strong coffee.

Helen Yu – AI

Twitter

LinkedIn

Named a “Top 50 Women in Tech,” this Fortune 500 advisor and Wall Street Journal bestselling author can be found at the intersection of tech and humanity. Yu also hosts Spice Talk, an AI-focused YouTube series.

Jean-Baptiste Lefevre – Edge Computing

Twitter

LinkedIn

This tech influencer and growth-hacker is currently head of digital and social media, CDO, at Cho You. He’s your go-to for AI, robotics, machine learning, 5G, and more!

Paula Piccard – Cybersecurity

Twitter

This top woman in tech is your primary source for staying on top of cybersecurity in AI. From cybersecurity threats to types of hackers to breaking down deep web vs. dark web, she’s the one-stop shop for cyber safety.

Linda Grasso – Sustainability

Twitter

LinkedIn

This digital creator and tech influencer is the founder and CEO of the popular digital transformation blog DeltalogiX.

Ronald van Loon – Vision

Twitter

LinkedIn

A world-renowned, top-10 influencer in AI, this CEO helps data-driven companies generate value. Van Loon is also one of our favorite follows for industry event coverage.

Antonio Grasso – IoT

Twitter

LinkedIn

Entrepreneur, technologist, and B2B digital creator, this founder and CEO of the digital business transformation consulting firm Digital Business Innovation Srl is also passionate about sustainability.

Jim Harris – IIoT

Twitter

LinkedIn

From #1 international bestselling author to keynote speaker, this disruptive-innovation thought leader and tech influencer is a must-follow for all things digital transformation and much more.

Kirk D. Borne – Data Science

Twitter

LinkedIn

Looking for the ultimate cheat sheet on what you need to know about data science? Borne, founder of Data Leadership Group, is your answer. The PhD in astrophysics is just a bonus.

Harold Sinnott – Cloud

Twitter

LinkedIn

Talk about a mover and shaker in the technology-influencing space: this author, coach, and speaker has his finger on the pulse of the latest in AI technologies and IoT. Sinnott is another one of our favorites for the most relevant industry event coverage.

Kevin L. Jackson – Supply Chain

Twitter

LinkedIn

A two-time USA Today and Wall Street Journal bestselling author and host of the popular podcast Digital Transformers, Jackson is your go-to for all things supply chain. You can also catch him instructing in the new Cloud Champion Certification Program.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

The Medium Is the Message for Brick-and-Mortar Retail

During the pandemic, work took place on Zoom, school went remote, and shopping definitely happened online. In the aftermath, brick-and-mortar stores were left rethinking how to get—and keep—consumers shopping in the physical world. We all appreciate convenience; we’d all prefer to skip products and messaging that hold no interest for us. So where can you find this optimal retail experience? It might not actually be online.

Jay Hutton, Co-Founder and CEO of VSBLTY, a retail-technology solution provider, believes that computer vision could be the answer to the online vs. on-the-ground dilemma. He’ll talk about the future of brick-and-mortar shopping in a digital world, retail’s place in the omnichannel experience, and the benefits of digital signage to the consumer—a.k.a. you and me.

How have physical stores had to compete with online shopping over the past few years?

Physical stores are not dead, nor are they dying. Consumer behavior has changed, for sure, resulting in a certain amount of commerce being fulfilled online. But that doesn’t mean the store is dying; it’s evolving. The pandemic has caused retail to really look at the consumer experience, and to modify it in a way that—I don’t want to say it’s more like online—but it’s more like online. It delivers immediate response and immediate engagement in a way that brands value and consumers value.

There’s this merging of online with offline that requires the store to reinvent itself by embracing digital more, having more consumer engagement, and being more consumer-centric. I think this is a challenge to a lot of traditional retailers, but I’m delighted to report that they’re really stepping up to that challenge.

What do we mean by “Store as a Medium?”

The store has always been a medium for messaging. In the past that took the form of poster boards or stickers on the floor in front of the Tide detergent. And that was meaningful in the way it redirected brand spend: Brands spend money to drive impressions at the point of sale—at the moment of truth when you are most likely to be influenced by a message.

What’s different in the past two or three years is how all that’s becoming digital. We’re talking about stores embracing digital surfaces: It could be a digital cooler; it could be an endcap that’s got a digital screen embedded in it; it could be shelf strips that are interactive and drive attention, gaze, and engagement at the point of sale.

These are all ways in which stores are investing in and embracing turning the store into an advertising medium. We know that the internet is an advertising medium. We know that a billboard on the side of the highway is an advertising medium. We’re now at a point where the store itself is an advertising medium, or channel. When the big brands, like Unilever, Coca-Cola, and PepsiCo, are making decisions on which channel to invest in, now the store is a legitimate option because it’s where the consumer makes decisions. It’s where the brand can deliver its narrative and the consumer can be impacted, which is really valuable. This is exactly what Store as a Medium is: intimate engagement with the consumer.

How does Store as a Medium fit into the retail omnichannel experience?

As so often happens, we were waiting for the technology to catch up to the demands of the marketplace. But now we’ve got computer vision and the ability to draw inferences; we’re looking at audiences and deriving meaningful data. How many men, how many women, how many 25-year-olds, how many 35-year-olds? (And this is not privacy data—not data that would make any of us feel creepy—but data that is relevant to a brand.) We all knew that once we cracked that code it would realistically open up the store as a valuable medium, as one of the channels in “omnichannel.” It wasn’t before, and now it is.

Now we’ve got this opportunity to drive really meaningful insights—it’s the data dividend. Not only are brands interested in delivering advertising at the point of sale, they’re interested in lift; they want to sell more stuff. And they’re interested in this unbelievably complex and robust data set that they’ve never had before, one that allows them to segment, to laser-focus, and to understand their customer engagement much more acutely than ever before.

What benefits might customers get from this situation?

If consent is not secured from the customer, there’s still a lot of very focused marketing that can be delivered to that person as a member of a group—a gender group or an age group, for example.

But when there is consent, now maybe there can be a loyalty app aligned with what’s going on on the digital display. If that consenting customer gets personalized advertising, gets choices on brands that they already have a preference for, now it can be more meaningful to them as a consumer. That’s what’s in it for the customer. Now it’s not just a general broadcast—shotgun advertising; now it’s laser focused: “Jay likes Coca-Cola more than he likes Pepsi, so I’m going to drive digital coupons.” Or “I’m going to drive a campaign promotion to him specifically because of his brand affiliation and because of his brand interests.”

What other kinds of retail use cases might there be for these digital-signage solutions?

If there’s one brand category that can afford the investment in digital infrastructure, it’s health and beauty. The margins are out of this world. It has a problem right now getting enough skilled labor to perform the educational role at the point of sale, so health and beauty can invest in the digital infrastructure and the ROI is almost immediate. The adoption that’s happening there is outpacing everything else, because of that ROI. This doesn’t necessarily mean a conflict with a grocery deployment or a big-box deployment; health and beauty can be co-resident, and they can do it together.

How is VSBLTY actually making this happen?

That is perhaps the most complex part of the business model. Generally, retail business runs on a 3%-4% gross margin. So what is the probability that most retailers are interested in a multimillion-dollar capital infrastructure investment for digital overlay? Almost zero—unless you’re Target or Walmart or one of the really big players.

The hypothesis was that if a group of us—called the Store as a Medium Consortium—could get together and solve that problem on behalf of retailers, therefore creating a media infrastructure—capitalizing it, deploying it, managing it, even doing brand-demand creation for the media network—it simplifies the retailer’s value proposition. We said, “You don’t have to do anything. We’ll open up the doors.” VSBLTY relieves the store of the responsibility of investing in the infrastructure.

Our largest deployment is in Latin America, along with Intel and Anheuser-Busch. Together, we’re building a network that will reach 50,000 stores by the end of year four. If we reach that objective—and I firmly believe we will—it will be the largest deployment of a retail-media network on the planet. And if we can do it there—in a 10-square-meter convenience store on the side of a dirt road in Guadalajara—it gives us a leg up on doing it in places with a less challenging environment.

Boston Consulting Group says this will be a $100 billion market by 2025—it’s under $5 billion today. Even if that statement is hyperbolic, we know it’s exploding. This is no longer a whiteboard exercise; it’s “We’re doing this now.”

How has your work with Intel made Store as a Medium possible?

Intel has enormous global reach. If we’re having a particularly difficult time reaching the C-suite of a retailer, Intel can get there because they have a team dedicated to ensuring thought leadership. Of course, at the end of the day, Intel wants to move silicon—and it has proven leadership in delivering powerful, high-capacity processors at the edge. But you would be surprised at the level of expertise there—subject-matter expertise, vertical expertise—and we lean on Intel all the time.

There’s also the legitimacy they give to us. We’re a side-by-side partner, and proud to be the 2022 Intel Channel Partner of the Year. Intel also has a track record of putting its money where its mouth is: When it comes time to really drive that thought leadership, Intel will always be there with us, assisting us wherever we need it. We’re enormously gratified to be in that position.

What types of technology investments does Store as a Medium require?

Everyone has a fantasy that existing infrastructure can be leveraged, thereby lowering the total capital expenditure. But generally speaking, that’s not the case. The Wi-Fi in a Target or a Kohl’s or a Walmart usually sucks. But if you’re driving new content you need internet access, and we would have to deploy on top of the in-store Wi-Fi to get the bandwidth we’d need. Cameras and networks obviously also exist in retail for loss-prevention purposes. But those are generally up in a ceiling looking down on heads, not directly at faces.

So for the most part this is new build. But it’s new build, I should hasten to add, for which we’re removing the capital-expense responsibility from the retailers. So if they deliver us a large enough number of stores, we’ll go and assemble the capital necessary to make it happen.

What can we expect from Store as a Medium going forward?

It’s no longer conjecture; we’re now looking at large-scale deployments. If you ever doubt the veracity of this category, just look to Amazon and Walmart. And if you’re in retail and you’re not afraid of what Amazon and Walmart are doing, then you’re just not paying attention. The challenge now is speed—the speed with which adoption can be secured, deployment can be secured, and revenue can start to happen. It’s a land grab at the moment.

Anything further you’d like to leave us with?

Strap in, because your retail experience is about to change. There’s going to be more for you on your customer journey. If you decide to opt into a loyalty program, it will become profoundly more personalized. And that experience will extend to your home, if you wish it to.

The whole customer journey, that whole engagement modality, begins at brick and mortar; it cannot begin in an online experience. So the entire experience will change, but brick and mortar is not going anywhere.

Related Content

To learn more about ongoing retail transformations, listen to the podcast Reinventing Smart Stores as a Medium: With VSBLTY and read Retail Digital Signage Gets an Upgrade with Computer Vision. For the latest innovations from VSBLTY, follow it on Twitter at @vsbltyco and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Building Infrastructure for the Transportation of Tomorrow

Related Content

To learn more about smart city management, read Building Smart Spaces for Communities and listen to the podcast Smart Spaces for Smart Communities.

Transcript

Corporate Participants

Christina Cardoza
insight.tech – Associate Editorial Director

Ken Mills
EPIC iO – Chief Executive Officer

Maurizio Caporali
SECO – Chief Product Officer

Sameer Sharma
Intel – General Manager, Cities and Transportation

Presentation

(On screen: insight.tech logo intro slide introducing the webinar topic and panelists)

Christina Cardoza: Hello and welcome to the webinar on Building Infrastructure for the Transportation of Tomorrow. I’m your moderator Christina Cardoza, Associate Editorial Director of insight.tech, and here to talk more about this topic, we have Ken Mills from EPIC iO, Maurizio Caporali, from SECO, and Sameer Sharma from Intel.

So, before we jump into the conversation, I just want to get to know our guests a little bit more. Ken, I’ll start with you. What can you tell us about EPIC iO and your role there?

Ken Mills: I’m very excited to be here, thanks for having us. I am the CEO of EPIC iO, which is a combination of connectivity, AI, and IoT solutions that we bring together to help make the world a safer, smarter, more connected place.

Christina Cardoza: Great, and Ken, I’m interested, because last time we talked, you guys were IntelliSite, and you’ve recently had a brand change to EPIC iO. So, I just want to discuss a little bit what was behind the change, before we talk more about the infrastructure for the future of transportation.

Ken Mills: Yes, great question. So, we were very fortunate to have combined multiple companies into EPIC iO, through additional investments around connectivity, private LTE, and private 5G, as well as our computer vision investments. So, we created a rebrand to bring all of those entities together, all of the missions together, all around creating a safer, smarter, more connected world. We wanted to transform internally as well as externally, so we rallied under a new brand, EPIC iO, and we’re very excited about it because we use the hashtag #EPIC for a lot of different things, which is a great hashtag.

Christina Cardoza: And Maurizio from SECO, welcome to the show. What can you tell us more about yourself and the company?

Maurizio Caporali: Hi, everyone. I am Chief Product Officer at SECO. I am involved in service and application design on top of embedded electronics and hardware, in particular for service development based on AI and IoT, to enable new applications and new services in the field of industrial devices.

Christina Cardoza: Great and last, but not least, Sameer from Intel, please tell us more about yourself.

Sameer Sharma: Hi, folks. I’m Sameer Sharma. I’m the Global General Manager for Smart Cities and Intelligent Transportation at Intel. What that means is my team has the responsibility to pull together all the investments Intel is making in technology, everything from AI and computer vision, to network connectivity for things like private 5G as well as public 5G, all the way to the cloud part of an end-to-end solution, and make sure that our partners like EPIC iO, as well as SECO, have the best of what Intel has to offer in their end-to-end solution formation.

Christina Cardoza: Great to have you guys all joining us today. Let’s just take a quick look at the agenda.

(On screen: Webinar agenda with image of city roadways)

Today we’re going to be talking about what we mean by smarter, safer, and more connected roads, how sustainability plays a part in this, the infrastructure, tools, technologies necessary to support Smart City traffic management, and then what we can expect in the future. So, let’s get started.

(On screen: Safer, Smarter and Connected Roads slide with illustration of data points over roadways)

Making cities smarter and more sustainable, it’s been a big trend over the last few years, and there’s multiple different ways communities have been going about this. For this webinar, I want to focus on the traffic management aspect of this, and Ken, I would love to have you kick off our conversation today, if you could give us an overview of the state of traffic management, how it is evolving, and how it should continue to evolve.

Ken Mills: Traffic management is in an interesting state today because if you think about COVID, and its far-reaching impacts on every part of our lives, traffic is no exception to that rule. We know that traffic patterns today have changed drastically, with a lot of people working at home, different shifts, different dynamics, some people full-time, some people hybrid, some people virtual across all of their different workplaces. It completely changed how people move about cities, how they go from place to place, and how traffic plays a role. So, all of the data that exists from pre-COVID is really no longer valid because traffic patterns have completely changed. So, we’re in this new world of what does post-COVID traffic look like, and how do we optimize around it? You combine that with the growth of autonomous vehicles, you combine that with changing infrastructure needs across the globe and people moving back to urban environments, and all of these things have compounded the need for communities to really rethink and reevaluate how they are deploying their smart traffic solutions.

Christina Cardoza: Absolutely, and I love how you bring up the community changes in relation to COVID, how things have changed, and why that’s having an impact on city and traffic management patterns. But there are also, I think, some government and regulation pressures coming that are also dictating a little bit of these changes. So, Sameer, I’m wondering if you can expand on some ongoing global efforts of government regulations to improve Smart City traffic management?

Sameer Sharma: Absolutely, Christina. I think Ken laid out the current state of affairs very well. There is both pressure to fix what’s broken and a lot of encouragement from governments around the world to improve the state of our traffic infrastructure. So, let’s focus on the US for a moment.

In 2021, we had more than 42,000 road fatalities. The expectation was that with less traffic post-COVID, this number should go down, but it went in the opposite direction, and that’s disheartening and very concerning for a couple of reasons. First of all, we’ve been pursuing this idea of Vision Zero, the idea that even one traffic-related fatality is one too many. So, we need to get down to zero deaths, absolutely no accidents that result in somebody losing their life. Yet, despite all those efforts, the numbers are actually heading in the opposite direction. So, I think that’s where the governments are stepping in and saying we need to do a better job figuring out why that is happening, getting to the heart of the cause, and fixing it. But there is a positive side to the government involvement as well.

If you look globally, there is a lot of investment happening in infrastructure. In the US, we hear about the Infrastructure Investment and Jobs Act, which is about $1.2 trillion of investment; about half of that is transportation. Now, it goes into the different modes of transportation, traffic management, port operations, better airports, bridges, and in general overall infrastructure. Still, that’s a massive amount of funding, and this is by no means unique to the US. China has been investing in its infrastructure for decades now. Their forward-looking spending is about $2.5 trillion. In India, there is a program called Gati Shakti, which loosely translates into the power of speed. It’s a play on words; what it means is that they want to put a lot of infrastructure in place to help people and goods move quickly, and they want to do this exercise quite quickly so that the infrastructure upgrade happens at a very fast pace. In Europe, we have something called the Common European Fund, which is about $900 billion worth of investment.

To give you a couple of data points on how this is spurring a completely different level of infrastructure revitalization: I talked about the investment in India, and the National Highway Authority of India is overseeing construction of roughly 30 kilometers of highway every day. This is new highway being laid out every day. We have not seen this level of investment for decades. I think in the US, the last time we saw something like this was likely in the 1950s when the Federal Highway Act was passed, which spurred the creation of the entire interstate highway system in the US.

So, now bringing it back to what it means: I think it’s both a responsibility and an opportunity for the ecosystem to step up and start thinking about how the physical infrastructure of the past has to be looked at as physical plus digital infrastructure going forward. If we don’t do that (and I’ll share some more examples in the conversation as we go along), I think we’ll be making a huge mistake in how this massive amount of public funding is deployed.

So, with that, I’ll turn it back to you, and then we will cover more of these details as we go along.

Christina Cardoza: Yes, absolutely, and we’ll get into how the infrastructure is changing, and the tools and technologies making it possible. But in addition to the safety aspect, and making sure that roads are clear and safe and that we’re collecting all that data, there’s also a big sustainability aspect in all of this.

(On screen: Driving Toward Sustainability slide with image of electric vehicle being charged)

And I think the rise of electric vehicles, that’s one of the biggest things happening in Smart City management, especially as a way to hit some of those sustainability goals. So, Maurizio, I’m wondering if you can talk about the role of the smart electric vehicle adoption and how this factors into this conversation today?

Maurizio Caporali: Yes, electric vehicles are actually a very important point for building sustainability, for Smart Cities, and for the communities that live in these cities. From this point of view, the infrastructure part mentioned by Sameer is very important. The important aspect is that electric vehicles have changed the way we think about services and infrastructure. Electric vehicles are very connected, very powerful vehicles with many sensors, with the potential to manage data and information. And this is very important for the city infrastructure, and also for defining new kinds of services.

To help with this, there are different kinds of technology inside the city, defined by the distribution of sensors across the city and by the infrastructure itself: the possibility to change the way of charging the vehicle, or to have other kinds of information on services related to parking and traffic management. For EV charging, it’s very important to define solutions that are very flexible, and that also respect the participation of citizens in specific services and applications.

For this, it’s very important to define applications on top of EV charging, so that the solution is not only the equivalent of a fuel pump that enables the car, but also gives more information, adds different kinds of services, and even changes some business models regarding mobility. Mobility will change a lot thanks to these solutions, to the intelligence at the Edge, and to the potential of electric vehicles, and not only the vehicles but also the charging stations, which can take in a lot of information regarding parking, current charging status, and traffic conditions. This is a very interesting point where it’s possible to define new services and new applications.

Christina Cardoza: Let’s talk a little bit more about the infrastructure that you mentioned. Electric vehicles haven’t hit mainstream adoption just yet. So, what does a city need to do to prepare? What does it take to set up that electric vehicle charging infrastructure to help make this more mainstream and help increase adoption?

Maurizio Caporali: Yes, as probably everybody knows, charging an electric vehicle is a complex task, in the sense that you need a long time to charge a car completely. There are many technologies that can help optimize this, like fast-charging systems. In this case, intelligence inside the fast charger is very important. This is an aspect we can address thanks to our technology and to Intel technology: optimizing the conversion of energy, and also optimizing the solutions, applications, and services related to charging and parking. That means having information on the enabled infrastructure, on where it’s possible to charge the car in the right way, and receiving that information directly in the car via open standards and full connectivity. That can be enabled by different kinds of wireless connectivity between cars and infrastructure, in some ways machine-to-machine infrastructure. It can also go in the direction of human-to-machine interaction, thanks to different kinds of applications that permit citizens to have all the information about the street, the road, and the status of traffic, as well as the possibility to charge the car in an optimized way and to have an overview of the city’s status.

Christina Cardoza: So, it sounds like there are a lot of sensors and a lot of connectivity that need to go into this to make it possible and beneficial for Smart City management as well. So I want to take a look at the infrastructure as a whole, and how things like electric vehicle charging and traffic management data patterns all connect together.

(On screen: Building the Infrastructure slide with image of cars driving on highways)

Ken, I’m wondering if you can talk about the type of investments that need to be made in the traffic and road infrastructure to include electric vehicles and all the data that you mentioned, and to actually reach the benefits of the smart traffic management that we’re talking about today.

Ken Mills: Yes, the challenge exists not just at the intersection, but all the way down the highway. Think about electric vehicles and the charging infrastructure. What happens if your vehicle runs out of charge on the highway? Typically, states will send vehicles out there with gas, give you a little bit of gas, and help you get to a gas station to fill up. How do you do that with electric vehicles? So, cities and states and communities are thinking about how to take electrical charging to vehicles in emergencies, or in situations where maybe someone just ran out of charge or didn’t plan their trip correctly. So, infrastructure extends beyond the intersection, all the way out to the full interstate ecosystem and beyond.

So, one of the things that we’re seeing from communities as a whole is a move from point-in-time datasets to real-time datasets. If you think about a lot of the traffic studies that have been done historically, they go from intersection to intersection, maybe roll out the famous cable that people run over, and understand what kind of utilization rate is going on at that point in time. It might be done on a quarterly basis, it might be done on an annual basis, but the world is changing so fast, and how we use our roads is changing so fast, that we need to move away from these point-in-time solutions to more real-time datasets. Computer vision, powered by Intel at the Edge and leveraging the OpenVINO stack, is one of the best ways we currently do this. At the intersection, we can give communities real-time analysis of the utilization rate: by what type of vehicle, pedestrian utilization, wheelchair utilization, how long it takes people to get across intersections, how many people are actually using those scooters you see everywhere downtown, how many people are in multi-person vehicles, how many big trucks are coming through the intersection, how to do real-time route planning and real-time response to traffic. And then combining that with sensors is also becoming super important.

So, understanding what the impacts on air quality are at individual intersections, based on the type of traffic coming through the intersection, goes back to that sustainability conversation we had earlier, as well as really interacting with people. Distracted drivers are a huge issue, with people looking at their phones, looking at their screens, not paying attention to where they should be, but there are also distracted pedestrians. People are looking at their phones as they walk through the crosswalk, and they’re not paying attention that there might be a vehicle coming toward them at a high rate of speed. So, we use computer vision and traffic technology to alert both the driver, when you get to machine-to-machine, as well as the person crossing the road, that there might be a situation where they could get hurt. Unfortunately, we had a pedestrian fatality not too far down the road from Intel just a few days ago, where you had a distracted-driver situation, someone was hit in an intersection, and a fatality occurred. The more we can provide technology to communicate to both ends of that spectrum, the better chance we have of reducing that pedestrian risk and those pedestrian fatalities, and getting to that Vision Zero that Sameer talked about.
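To make the counting step concrete, here is a minimal sketch of tallying edge detections by road-user type, in the spirit of the utilization analysis Ken describes. The labels, confidence values, and threshold are illustrative assumptions, not the output of any specific Intel or EPIC iO product:

    from collections import Counter

    # Hypothetical detections from an edge vision model: (label, confidence)
    detections = [("car", 0.91), ("pedestrian", 0.84), ("truck", 0.77),
                  ("car", 0.68), ("scooter", 0.62)]

    CONF_THRESHOLD = 0.6  # discard low-confidence detections

    # Tally intersection utilization by road-user type
    utilization = Counter(label for label, conf in detections
                          if conf >= CONF_THRESHOLD)
    print(utilization)  # e.g., Counter({'car': 2, 'pedestrian': 1, ...})

In practice, counts like these would be aggregated over time windows and streamed to a traffic-management dashboard rather than printed.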

Christina Cardoza: Yes, I hate hearing stories like that, but it’s good to see efforts like this trying to prevent those situations from happening. We mentioned in the beginning how there was a need to change the data patterns that we’re looking at, and we talked about computer vision and how that technology is going into this. But I’m wondering about the underlying infrastructure and technology: Is there anything new that needs to be added, or how do we utilize the existing infrastructure that we do have? So, Sameer, can you talk a little bit more about the necessary technology components to really be successful in this?

Sameer Sharma: I think Ken articulated very well how computer vision has become a fundamental and critical technology in implementing these use cases, by being our sensor to understand what’s happening around us. But if you take a step back and look at traffic intersections globally, we’ve got just under a million of them, and these intersections tend to be both the choke points and the number one place where fatalities happen. It’s very intuitive: a traffic intersection is where cars are heading in different directions and pedestrians and vehicles are, hopefully, interacting in a smooth manner, but that’s where the possibility of an incident is the highest.

In addition to what Ken described, I think there is an existing opportunity to connect the sensing part and the control part at the traffic intersection. Let me expand on that. Today you literally have two different systems. You’ve got roadside units and roadside equipment, which may be sensing what’s going on using cameras. The primary use cases today are basic, fundamental analytics: Is somebody jumping the red light? Is traffic congestion starting to happen? And then you have a second system, which is controlling the red-light and green-light timings at the intersection. Now, on average for a US intersection, these timings are adjusted, based on the studies Ken described, once every seven years. Ideally, they should be adjusted once every six months, and preferably they should be adjusted in real time. We have everything we need, the control system and the sensing system, right there at the intersection; we just need to glue them together. And that’s where the combination of compute capabilities like computer vision, plus connectivity, whether it’s LTE today or public 5G tomorrow, allows us to connect all these capabilities to each other, and also to the cloud. Batch analytics in the cloud and real-time analytics at the Edge become critical.

The second thing: your question was about the forward-looking view. Beyond today’s computer vision, I think we’re seeing increased adoption of capabilities like LiDAR and RADAR, creating what we call sensor fusion at the Edge. That’s giving us even more fault-tolerant data on what’s actually going on at the intersection. So, I think there’s a ton of such innovation already available today, and a ton more coming our way.

The final thing I want to add is that C-V2X, I think, will be a very critical technology. It’s going to take some time to happen, but C-V2X (cellular vehicle-to-everything) means that vehicle-to-vehicle and vehicle-to-infrastructure communication, used to understand what’s going on and to intelligently interpret and adjust the overall traffic flow, is going to become the default way of operating in the future.

Before I wrap up, there’s something I want to share to extend what Maurizio talked about on the EV charging side. I’m part of what is called the MIT Mobility Initiative, and Professor Sadoway challenged us in one of our sessions to rethink the EV charging infrastructure. His statement was quite provocative, and I’m happy to share it with the audience here. He said: Do you think when you fill gasoline into your car, that pump is directly connected to the refinery? That made us rethink how EV charging needs to be a bit more distributed than the current model, where everything taps into the electric grid at the same time. A more distributed system doesn’t put all that load on the grid when people come home at 5:00 p.m. and all start charging, so managing supply matching with a distributed model will be another interesting thing to look at as we look toward the future.
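As a concrete illustration of the glue between sensing and signal control that Sameer describes, here is a minimal sketch of adjusting a green phase from observed queue counts. The data structure and timing constants are invented for illustration; real signal controllers follow standards such as NTCIP and are far more involved:

    from dataclasses import dataclass

    @dataclass
    class ApproachStats:
        vehicles_waiting: int       # e.g., from edge computer vision counts
        pedestrians_waiting: int

    def green_time_seconds(stats: ApproachStats,
                           base: float = 15.0,
                           per_vehicle: float = 1.5,
                           max_green: float = 60.0) -> float:
        # Stretch the green phase with the observed queue, capped at a maximum
        return min(base + stats.vehicles_waiting * per_vehicle, max_green)

    # 12 waiting vehicles stretch the base 15-second green to 33 seconds
    print(green_time_seconds(ApproachStats(vehicles_waiting=12, pedestrians_waiting=3)))

The point of the sketch is the feedback loop itself: detections feed the timing decision continuously, instead of timings being revisited once every seven years.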

Christina Cardoza: Absolutely, and I can definitely see the benefits and why these changes need to happen now. But Maurizio, I’m wondering, as we’re making these changes and as new technology becomes available, I think there’s always a concern about whether these changes will support us into the future. Is building out the electric vehicle infrastructure going to be relevant in a couple of years? So, I’m wondering if you can expand on how we can ensure the changes we make today really benefit the transportation landscape of tomorrow.

Maurizio Caporali: Yes, more generally, the important aspect here is the flexibility and the power of Edge devices. This relates to EV charging, but also to specific Edge devices for traffic management or status monitoring in the Smart City, for different kinds of applications. What has happened recently from an electronics point of view is an important change driven by the power of the chips, their lower consumption, and the possibility to perform real-time analysis from different kinds of sources. This is very important. For EV charging, you have a set of input devices (cameras, ambient sensors, microphones) and outputs (display, audio, et cetera). This technology can help manage all the sensors and all the information in the physical space, and deliver new kinds of services. It’s very similar to what happened with the smartphone, because we have CPU, GPU, and integrated AI chips, and the possibility to analyze a big quantity of data to define new applications. In the same way, for EV charging, the future could be applications that change over time: the possibility to define new applications and new services that can be deployed in a specific part of the city. For example, if there is an event at the stadium, there is the possibility to update the EV chargers nearby with different kinds of services and applications that appear on the display, or directly in the application that manages the device.

This, for me, is a very important key point for the future of these systems, because they are something that changes over time. A device doesn’t have one specific functionality that stays the same forever; the functionality can be changed based on the data acquired in the physical space. This is another important aspect, because we can change the way we approach the EV charger, or the specific city infrastructure, based on the data collected over time, and that can change the way we interact with the end user. This could be one of the key points for the future.

Christina Cardoza: So, I love hearing about all the technology changes that go into this, but one of my favorite things in talking about it is learning how it’s actually being put into practice.

(On screen: Paving the Way to Smart City Traffic Management with image of a road being paved)

So, I would love to hear, Maurizio and Ken, if you have any case studies or customer examples you can share of how they’re implementing your technology, what the benefits have been, and how you continue to work with them. So, Ken, I’ll start with you on that one.

Ken Mills: I’ll give you two good examples. One, Sameer talked about that vehicle-to-infrastructure communication opportunity. We were very fortunate to work with the City of Sacramento and Verizon on one of the first ultra-wideband 5G examples of this in the real world. What the City of Sacramento did to really help advance Vision Zero was build, as far as I’m aware, one of the first early solutions where the traffic signal, through computer vision cameras, would identify vehicles coming into an intersection, identify pedestrians crossing the intersection, and, if it saw a situation that could be at risk, would actually communicate to the vehicle directly through ultra-wideband communication, getting very quick, near real-time communication to the vehicle: hey, you need to stop because you’re about to intersect with a person and potentially cause a pedestrian fatality. So, that was a great example of taking connectivity, sensor data, computer vision, and vehicle-to-infrastructure communication to deliver a very meaningful outcome. Now, that was only across the city’s vehicles; think about how it would impact the entire community if it went across all vehicles, which is where I think the future is going.

Another example brings in that sustainability aspect. We worked with another city, the City of East Point, which was looking at its infrastructure as a whole from a safety, traffic-utilization, and pedestrian-utilization standpoint, but it also wanted to know the air quality implications at its intersections. Air quality data by itself at the intersections was interesting, but it didn’t really tell them why an air quality situation was getting worse or better. Tying that to computer vision at the intersections, you can then get real context as to what is actually happening at the intersection, what’s actually driving that negative change in air quality. Is it a large congestion of semi-trucks? Is it a larger congestion of traffic than you expected, back to those seven-year studies that Sameer talked about? How do you get real-time data and maybe divert trucks to a different path based on the impact they’re having on air quality in the community around those intersections?

So, these are things that cities are starting to think about, and starting to deploy, that can have really meaningful impacts for their constituents, and really change how people move about and interact with their intersections and infrastructure, and get even more benefit from them.

Christina Cardoza: Yes, I love that example, because it’s not – this technology is not just solving one thing. It’s not just solving the pedestrian safety or traffic management issues. It’s solving air quality. These things are all connected together and it’s finding, like you guys have all mentioned, the right tools and flexibility to be able to collect all this data, make sense of it, and then make actionable insights and decisions. Maurizio, did you have any customer examples or use cases you could share?

Maurizio Caporali: Yes, sure. One of the most important examples for our customers is, first, the possibility to manage the status of devices remotely, giving the service team the ability to analyze and access information about all the devices. There are thousands of devices distributed geographically, and this is a very important aspect because it gives the service team all the information and all the data remotely, along with specific AI algorithms and classification models that can help the team make decisions quickly and simply. And this is very much appreciated by customers because of the cost reduction at different levels: the possibility to go into the analysis, update the machine remotely, and solve the problem directly, or, on the other hand, to deliver the solution to a specific device problem very rapidly.

Another important aspect is the possibility to update specific application information. For example, for EV charging prices, it’s possible to manage the pricing for different geographical zones, or to change it for different periods of the year, and to do this immediately. Not only by the service team, but also in an automatic way, thanks to the possibility to connect all the different kinds of data sources that come from the system and the fleet of devices to determine how to change the pricing. This is also a very impactful service.

Christina Cardoza: Hearing some of these examples, one thing that comes to mind is that it’s not even one company doing this alone. It really takes partnership and collaboration with others to be successful in Smart City management. Smart Cities are sometimes huge communities with a lot going on, so it makes sense that there would be a lot of players in this, and I know Intel does a really great job of involving the ecosystem and working with partners. So, Sameer, I’m wondering if you can touch on the importance of working with others to make all of this possible.

Sameer Sharma: I would qualify this as not just important but fundamentally critical, this idea that there needs to be a thriving ecosystem to make it all happen. And I’ll look at it from a couple of angles. The first is partnership with ISVs, ODMs, OEMs, telcos, SIs, and so on and so forth, because this stuff is too important and too big for any one company, or even a small group of companies, to implement everything alone. And certainly, for me, the definition of our team’s success is simply our partners’ success. So, when Ken and Maurizio come here and talk about the Intel platform, leveraging OpenVINO, the computer vision capabilities, the connectivity, to me that’s the definition of my success: to have them say that the work Intel is doing is helping them, is making life easier for them, is helping them deploy their solutions quicker, faster, at scale.

But there’s another angle to partnership, which is public-private partnerships. I mean, we touched on it a little bit, the government angle. I think it’s very, very important that both on the government side, as well as the private sector side, people reach out and build more bridges to understand how the government can facilitate, can be an enabler, whether it’s establishing standards, making infrastructure investments, or in general, simplifying the regulatory landscape to help the deployments happen faster. I’ll give you a simple example.

We partnered with the US DOT here in Virginia, and then also partnered with the local transportation agencies in Turin, Italy, for the world’s first multi-telco, multi-OEM trial of a smart intersection. The idea was that irrespective of where your connectivity is coming from (because in most countries there are multiple telcos providing connectivity), you will be able to enable that safety use case Ken touched on a little bit. So, I think public-private partnerships, as well as a thriving ecosystem, are both absolutely critical. And if I take a step back, the work we’re doing is about technology, and it’s about things like revenue and margins, because that’s how companies run, but there is something more fundamental and more appealing in it. This work is going to touch virtually every citizen on this planet in terms of improving their lives, and to me, that’s the very definition of a partnership and the impact partnership can have.

Christina Cardoza: Great, and Maurizio and Ken, you mentioned a little bit throughout the conversation how you’ve been working with Intel, and some partnerships that you’ve had. So, I’m just wondering if there’s anything you wanted to add about the value of your relationship with Intel or the partnerships you are working on throughout the ecosystem. Maurizio, I’ll let you start with that one.

Maurizio Caporali: Yes, the main aspect regarding Intel is that Intel is an ecosystem of technology solutions. This is very important for us because it is extremely flexible from an electronics point of view, and also from a software point of view. Our R&D works together with Intel on the electronics, and our software R&D works on the OpenVINO framework, on defining the examples and models that Ken and Intel give us. On the other hand, we have the possibility to redesign a solution working together with Intel, and we also get very good support regarding this.

Another important aspect is the flexibility of Intel’s solutions, because we can go to different kinds of projects that can be applied to our customers’ needs, with different levels of acceleration for AI inference, and with the possibility to adopt specific AI acceleration based on the Intel ARBOR that is very optimized and very good for our applications. On top of this, there is the possibility to collaborate on a specific service and define a specific vision for new products and new applications.

Christina Cardoza: Great, and since the power of partnerships is fundamental, like Sameer mentioned, Ken, is there anything you wanted to expand on about your work with Intel or other partners in the ecosystem?

Ken Mills: Yes, I’ll echo the flexibility. I mean, in this supply chain challenge that we’ve had (“challenge” is a nice way to put it), it’s been great to be able to leverage the large ecosystem of partners that Intel has, to ensure that we can get the Edge compute infrastructure necessary for our customers so that they can deploy their solutions as quickly as possible, on their timeline, versus being dependent on the extended, unreasonable supply chain timelines that have hit a large part of the technology industry. That flexibility has allowed us to adapt and not have a situation where we couldn’t meet customer demand. So, that’s been super important to us from a business perspective.

On the technology side, being able to leverage the CPU with OpenVINO, then extend that to Intel GPUs, and seamlessly move the workload between CPU and GPU, really optimizing around when we need a CPU, when we need a GPU, and when we need both, has given us a technology advantage and flexibility that allows us to maximize the Edge, take advantage of a MEC, take advantage of the data center, maybe private cloud or public cloud infrastructure, without having to refactor our inferencing platform to move between different inferencing stacks. Being able to have a consistent set of APIs, a consistent technology integration with the Intel stack, has really made it easier for us to deploy solutions where and when customers need them. So, flexibility, choice, and adaptability have been the big benefits of our partnership with Intel.
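For readers curious what that looks like in code, here is a minimal sketch of the pattern Ken describes, using OpenVINO’s AUTO device selection so the same application code can run on CPU, GPU, or both. The model file name is a placeholder, and error handling is omitted:

    import openvino as ov

    core = ov.Core()
    model = core.read_model("detector.xml")  # placeholder model file

    # "AUTO" lets the OpenVINO runtime place the workload on CPU, GPU,
    # or both, so the application code stays the same across targets.
    compiled = core.compile_model(model, "AUTO")

    # compiled(...) can now be called with input tensors for inference.

Swapping "AUTO" for "CPU" or "GPU" pins the workload to one device, which is the consistent-API point Ken is making: the deployment target changes, the application code does not.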

(On screen: The Future of Transportation slide with illustration of data points over highways)

Christina Cardoza: Well, this has been a great conversation, guys. Unfortunately, we are nearing the end of our time, but before we go, I just want to throw it back to each one of you to talk about any short-term or long-term changes we can expect, and where this is all going in the future, as well as any final key takeaways or thoughts you want to leave our attendees with today. Sameer, I’ll start with you.

Sameer Sharma: Well, I would say, look, I mean, in the last 40 minutes, you heard both from Maurizio and Ken how we are working together to not just predict the future, but to build it together, and I think that to me is the very definition of taking on responsibility on behalf of the society, the community. We’re not here just to create a profitable business. We are here to create solutions that impact everyone’s lives.

So, when I see this partnership, it makes me very, very bullish and hopeful for the future. We have challenges in front of us, and in the face of those challenges we can either throw our hands up in despair, or we can say these challenges are our challenges, we own them, and we own solving them. What I see happening in the ecosystem is that desire to come together, to work together. And my final hope is that the work we do will benefit multiple generations. I love it when I’m dropping my son off at school and he’s constantly asking me: Dad, you keep talking about these smart intersections, yet here we are waiting, staring at a red light, while the other lane is green; when is that all going to happen? I love that impatience, that desire to see the impact here and now. It should spur all of us to work together even faster, at scale, to make this a reality.

Christina Cardoza: Absolutely. Maurizio, are there any final thoughts, key takeaways, or predictions for the future you want to leave us with?

Maurizio Caporali: The important aspect for the future, I think, is the possibility to define specific service applications thanks to intelligence at the Edge, with lots of data that can be analyzed and transformed into services: services for citizens and for all the end users in different parts of the city, delivering different kinds of sustainability, for energy but also for their lives.

Christina Cardoza: And Ken, since you kicked off the conversation, I’ll let you wrap it up for us. Any final thoughts, predictions, or anything else you want to leave us with today?

Ken Mills: No, no pressure. I’m just really bullish, as Sameer talked about, on all this technology coming together in a meaningful way, at critical mass, across all communities globally, so that we can actually get to the point where we’re saying: We’ve solved the problem of pedestrian fatalities; we’ve gotten to Vision Zero; we’ve reduced the number of lives impacted by these needless intersection traffic incidents, which affect not only the person that was hit but also at least one other person, often multiple people, whose lives are changed in a very profound, negative way. If we can work together to solve that, reduce that, and truly get it to zero, that will be amazing, and I’ll be very excited about it. I do see that trend starting to go the other direction in a meaningful way in the next five years, and I’m very excited that we get to play a role in that.

Christina Cardoza: Well, with that, I just want to thank you all for joining the webinar and for the insightful conversation, and thank our audience for listening in. If you want to learn more about Smart City traffic management, please visit the insight.tech website, where we have a ton more podcasts and articles on this topic, and please visit the Intel, SECO, and EPIC iO websites to see more about what they’re doing in this space. Until next time, I’m Christina Cardoza with insight.tech.

(On screen: Thank you slide with insight.tech website)

(On screen: insight.tech logo and thank you animation)

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

IoT Trends and Technology Predictions to Watch in 2023

It’s January, a fresh calendar page, and time to start gearing up for whatever the new year will bring in IoT. But never fear; face 2023 armed with the expertise of people who spent the last part of 2022 thinking about what might happen around the corner—and even around the corner after that. Analyst firm CCS Insight has been issuing an annual report of technology predictions for the past 16 years, and all the IoT-related forecasts have been indexed, just for subscribers of insight.tech, and made available as a download here.

Martin Garner, COO and Lead Analyst of IoT at CCS Insight; and Bola Rotibi, its Chief of Enterprise Research, join us to review those IoT trends and technology predictions. They’ll talk about the themes to expect in 2023, of course; what might be next for 5G, machine learning, and the metaverse; and even how CCS Insight did on last year’s predictions. Because if the past few years have taught us anything, it’s that you can’t necessarily predict or plan for everything.

What’s driving the ideas or themes in your 2023 predictions?

Martin Garner: What we were hoping for was a period of stability after COVID, so that we could all recover socially, economically, etc., from the pandemic. That didn’t happen. Instead, we got the war in Ukraine; we got political instability in lots of places. We had the rise of energy prices and inflation, and we had supply shortages. It’s been a turbulent year.

For the insight.tech report we pulled out all of the predictions that are in some way relevant for IoT, and it’s quite a broad set, encompassing lots of fields and lots of technologies. That’s because IoT is a stack, from low-level sensors up through connectivity, edge software, cloud computing, and artificial intelligence. It also affects many different types of people—management, operations, engineers, developers, users, consumers, regulators, financiers. Also, both consumer and industrial sides are relevant.

Normally we try to take the long-term view of where everything’s going, but here I’ll actually focus more on the shorter term and coping with current economic conditions. IoT got a big boost during the pandemic because it was really part of how we coped with COVID. And that trend has kept going—at an accelerated rate, in fact. So I think that IoT should still face good market conditions.

“The only point of #IoT is to give you the #data, and the value is in what you do with the data; that is #DigitalTransformation” – Martin Garner, @CCSInsight via @insightdottech

Bola Rotibi: Despite the uncertainty for everyone, I think there’s a moment of opportunity; and when there’s opportunity, there’s gain. One of the things that we looked into was whether people are still going to invest. And post-pandemic I think that there will be a recalibration of enterprise strategies that will drive something like 15% growth in IT investment in 2023 and 2024. I think what will shift is people being a bit more nuanced and targeted in their spend. IoT will play a very big part in that as we go forward, and as people look to efficiency savings, connectivity, and hybrid work environments.

How did your previous predictions play out over 2022?

Martin Garner: They basically did play out as we thought. We knew from the pandemic that the cloud providers had become indispensable, and they’ve always done a lot with IoT. IoT, as we all know, is a team sport. And so the cloud providers needed help with system development, application development, supply and support, and systems integration—all those good things.

Cloud providers have also become very involved in the telecom sector with 5G. And now, although various aspects of the cloud providers are under stress and in some cases are letting people go, actually I think the IoT area seems to be going okay. And part of that is because of 5G. I think in the industrial world, in particular, private 5G networks have become real; they’re one of the hot areas at the moment.

The intelligence side is also very interesting—how we use machine learning, and how we make intelligence easier for people in the IoT world to use. I think there’s a bit less focus on IoT for the sake of IoT. The only point of IoT is to give you the data, and the value is in what you do with the data; that is digital transformation. So we sense that the term “IoT” is starting to fade a bit.

What IoT trends or technologies should enterprises focus on in 2023?

Bola Rotibi: One is the hybrid work environment I mentioned before. We’ve come out of the pandemic now, and people are going back into the office. At the same time, people are also still wanting to work remotely. What we’ve learned is that it’s possible.

But what is going to come out of that? What needs to come out of that? I think we’re going to start seeing a lot more remote support operations—allowing people to feel that they can work remotely or they can work in the office, but the experience will be similar. And that’s both the connectivity experience, as well as being able to collaborate with their colleagues just as if they were actually in the office.

One big side of it, I think, will be enterprise-collaboration tools adding immersive spaces to help replicate the in-office experience. And we’ll also start to see headsets change—connecting with a lot of the collaboration tools and the video-streaming tools—to help bring about that immersive experience.

The other thing is that people are recognizing that employee experience is really important, and important to driving customer experience. So the connection between employee experience and customer experience is another one of our predictions for next year: a demand for software that measures and tracks the link between them.

Martin Garner: I’ll add one very specific trend to watch out for: It’s very easy to think about IoT as just worrying about things, but IoT systems need to properly integrate with the way that people behave in the workplace and in society, as well.

One specific prediction that highlights this idea is that by 2026 there will be road testing of an external system to communicate autonomous vehicles’ intentions. This is all about the fact that there’s a real diversity of road users, and there are lots of different subtle signals about how they give way, how they acknowledge each other’s presence, and so on. Autonomous vehicles just don’t have those signals at the moment, though there are some early tests being done. Society needs this; and it needs to be national, not proprietary. So I think that sort of integration—of IoT with people and the way they do things—that’s going to be a trend to watch.

Where do you think 5G and even 6G adoption is going to go in 2023?

Martin Garner: 6G? Whoa, hang on a minute. It doesn’t quite exist yet, and it’s some years away, really. But 5G is one of the main interest areas in connectivity, especially private 5G networks. And the reason is that 5G is the first generation that’s been designed with industrial usage in mind; recent software releases are bringing low latency, location capabilities—all the things that make it an industrial system are being realized.

So I think that by 2025 private 5G-network systems will be repositioned as a platform. The reason is that you can use 5G for various different things—tracking worker safety, autonomous robots, workflow—but it’s a complicated network, and not many people have got all the skills needed to set it up and do it properly. So we expect to see private-network app stores where you can download packaged applications, connectivity options, preconfigured connectors to IoT platforms—and then just get on with what you need to do.

How will this idea of the metaverse play out going forward?

Bola Rotibi: I think the metaverse is going to create a lot of opportunities. Right now we are in its infancy, so a lot of our predictions around it are looking towards the end of the decade. And where it will end up might be different from where it is at this moment.

I do think there is quite an important relationship between the metaverse and digital twins. It’s this environment where you can actually have a digitized representation—the digitization of all data assets in order to give a representation.

By 2028 I think there will be a “blockchain of you” trend that lets developers build viable digital twins of people to support personalized services. Now that’s quite exciting. Because there we’ve got three different technologies: blockchain, digital twins, and the metaverse. So, what does that mean? It means that people could actually have a representation of their health data, of their personal likes and dislikes. And that blockchain means that they would have a certain level of ownership over it: It couldn’t be changed. And then they could trade that information with organizations that may want to use it to do testing against drugs or liabilities or other things. The possibilities are endless.

How will organizations and industries continue to adopt intelligent features?

Martin Garner: As soon as you get into IoT, there is so much data generated that the only way to make really good sense of it—and to get as much as possible out of it—is to use machine learning. Over the next few years we expect that, first of all, the tools will become much more user friendly. Intel®, with OpenVINO and things like that, has done quite a good job of making that easier. We’re also seeing more good examples of prepackaging, so that you can buy systems that have machine learning just built in.

The other bit that needs an awful lot of attention is harmonizing the data so it’s easy to use; there hasn’t been a strong imperative for that yet. We’ve heard stories of manufacturers having multiple generations of sensors that are all different in the way they present data. Maybe 20 years ago it made sense to do it that way, but now it makes no sense. And this needs to be within suppliers themselves—because they need it for their own internal analytics—but also across suppliers, for digital twins, supply chains, etc. But it’s going to be an awful lot of effort to get that right.
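As a hedged sketch of what that harmonization can look like in practice, the snippet below maps two hypothetical sensor payload formats onto one common schema. Every field name here is invented for illustration; real deployments would work from the actual generations of sensor output:

    def normalize(payload: dict) -> dict:
        # Newer sensor generation: Celsius with a Unix timestamp
        if "tempC" in payload:
            return {"temperature_c": payload["tempC"], "ts": payload["timestamp"]}
        # Legacy generation: Fahrenheit under different keys
        if "temperature_f" in payload:
            return {"temperature_c": (payload["temperature_f"] - 32) * 5 / 9,
                    "ts": payload["time"]}
        raise ValueError("unknown payload format")

    print(normalize({"tempC": 21.5, "timestamp": 1672531200}))

The hard part, as Martin notes, is not the mapping code itself but agreeing on the common schema, within suppliers and across them.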

Bola Rotibi: From a developer point of view, we’re starting to see AI and ML actually have everyday viability. Whereas before they were very much for the big things—the big calculations, the big modeling—we’re now starting to see accessible AI, accessible ML. The tools have come a long way.

In fact, you can have tools at multiple levels: You still have tools for the data scientists—those who understand the modeling concepts—but now low-code/no-code capabilities have been incorporated. And there’s a broader range of developers—not just professional developers, but those who have got domain experience and want to add a level of programmability to their applications. They’re being brought into the fold, too.

What’s also important is that we’re starting to see small data sets, and people are using their domain experience to make these small changes that don’t require vast compute resources. And I can’t stress the word “accessible” enough, because I think that is what is bringing in a broader church of people who are capable of building those AI and ML applications. These are people who are more task oriented, and I think that is really key, because it’s going to really spearhead adoption. We’ve got an exciting next few years for AI and ML.

Martin Garner: It’s very clear that more and more of the kind of people who need to use AI and ML are not data scientists. They are engineers or operations specialists or process managers or all sorts of people who run things in companies. They need to use it, and it has to be easy for them.

Any final thoughts or key takeaways as we begin this new year?

Bola Rotibi: One thing I would add is sustainability; I think IoT will play a big part in that. It will allow edge solutions to be part of the sustainability story; it will bring together AI and ML capabilities; it will create an incredible environment for developers—that broader church of developers—and give opportunities to those who are delivering connected solutions.

Martin Garner: I do have one as well. We haven’t talked a lot about cybersecurity, and it is still the single biggest concern of people implementing IoT. As systems are now scaling up to supply chain level, the idea that your whole supply chain might be hacked is honestly terrifying. But the war in Ukraine has meant that there’s been a very large collective response to cybersecurity issues around the world. The question is, how can we as industries maximize the benefit from that collective response? I don’t know the answer just yet, but we’re going to have a think about that. That’s maybe a prediction for next year.

Related Content

To learn more about IoT trends and technologies, read IoT-Related Predictions for 2023 and Beyond and listen to IoT Predictions for 2023 and Beyond: With CCS Insight. For the latest innovations from CCS Insight, follow them on Twitter at @ccsinsight and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Lower Your Costs of Video Streaming Over Cellular

As anyone who has encountered echoes, stuttering voices, and out-of-sync movements on a video call knows, live streaming isn’t always what it’s cracked up to be.

What you may not know is that cities are starting to experience similar problems as they connect cameras to everything from highways and railways to town centers and streetlights. For municipalities, the benefits of streaming video are compelling. They can literally see events as they unfold, responding more effectively to emergencies and efficiently organizing crews to deal with traffic jams and crowded public events.

But just as it does for consumers, video streaming can have significant limitations. Coverage is often uneven, and video is extremely data-intensive, creating a heavy load on city cellular networks. That can lead to lags and drops in service at critical times—not to mention sky-high bills.

New technology can overcome many of these obstacles, delivering reliable, high-quality 24/7 video streaming over cellular with near-zero latency, across a broader area. As a result, cities can obtain a more accurate picture of operations and events, while also securing their data and saving taxpayers money.

Near-Zero Latency Lowers Operational Costs

Smart cities have a twofold problem with video streaming. In densely populated urban spaces—especially if a big sports game or other live event is happening—cellular networks can quickly become clogged. In rural areas, cellular service may not be available, or the signal may not be strong enough for video transmission. That can leave cities with large swaths of the community they can’t access in case of an emergency or disaster, as well as blind spots along streets and highways.

“Live streaming video across cellular networks is not only difficult, it can be extremely expensive,” says Fredrik Wallberg, Chief Marketing Officer at Digital Barriers, a provider of video solutions for municipalities and businesses. “To pull reliable video, you need to push down the data usage and the bandwidth requirements.”

After years of working on video technologies for government organizations in the U.S. and Europe, Digital Barriers leveraged its expertise to do just that. The EdgeVis video router, powered by Intel® processors, compresses video data, easing congestion and improving reliability for near real-time transmission, even in remote rural areas.

“Cities can use it for streaming over 3G, 4G, or 5G from anywhere. Independent testing by telecom companies has found there is anywhere from 50% to 90% savings on cellular costs,” Wallberg says.
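
For a rough sense of scale, here is a back-of-envelope calculation (with assumed bitrates, not Digital Barriers’ measurements) of what that 50% to 90% range can mean for a single camera streaming around the clock:

```python
# Back-of-envelope cellular data usage for one 24/7 camera.
# The 2 Mbps baseline is an illustrative assumption, not a vendor figure.
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_gb(bitrate_mbps: float) -> float:
    return bitrate_mbps * SECONDS_PER_MONTH / 8 / 1000  # Mbit -> GB

raw = monthly_gb(2.0)                 # uncompressed-ish 2 Mbps stream
for savings in (0.50, 0.90):          # the 50%-90% range cited above
    print(f"{savings:.0%} savings: {raw:.0f} GB -> {raw * (1 - savings):.0f} GB/month")
```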

Managers can also use the router to regulate their departments’ video camera use, ensuring the city doesn’t exceed its data plan limits and incur overage fees.


Real-Time Video for Emergencies and Events

Obtaining affordable and reliable coverage could encourage cities to expand video streaming. One of the most vital use cases is emergency response. With accurate, up-to-the-minute information about floods, windstorms, earthquakes, and other hazardous weather events, managers can coordinate teams and deploy resources more effectively. And in the case of injuries—from traffic incidents to downed electric cables—real-time video could help first responders save lives.

“You can stream a whole scenario, zoom in or tilt cameras remotely, and send live video to hospital experts, who can give advice to emergency responders as they treat patients en route or at the scene. The hospital can prepare rooms and equipment in advance,” Wallberg says.

Cities can also use video to monitor large events, such as football games and concerts, where thousands of people may gather hours in advance, jamming streets and sidewalks. Transportation departments can monitor traffic along roads and highways, rerouting vehicles as needed to improve safety and efficiency.

“Whether it’s a golf tournament or a parade, you have a force multiplier of eyes making sure the event runs smoothly,” Wallberg says.

Improving Cybersecurity in Video Camera Systems

By boosting physical security with video monitoring, cities often run into another problem: cybersecurity. Many video camera systems lack adequate protections, and both cities and companies have been plagued by device hacks. To prevent misuse, the EdgeVis router contains strong firewalls and advanced end-to-end encryption that operates under the same standard that government agencies use to guard top-secret information.
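
For illustration, AES-256 is the class of encryption US agencies approve for protecting top-secret data. The sketch below shows generic authenticated encryption of a video chunk with AES-256-GCM using Python’s cryptography library; it is a textbook example, not Digital Barriers’ implementation.

```python
# Generic AES-256-GCM sketch -- not Digital Barriers' implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared by router and receiver
aesgcm = AESGCM(key)

video_chunk = b"\x00\x01\x02"               # stand-in for compressed video bytes
nonce = os.urandom(12)                      # must be unique per message
ciphertext = aesgcm.encrypt(nonce, video_chunk, None)

# The receiving end decrypts and authenticates in one step.
assert aesgcm.decrypt(nonce, ciphertext, None) == video_chunk
```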

Video monitoring also raises concerns about personal and data privacy. Digital Barriers designs its routers to comply with applicable privacy policies and regulations.

Gaining Insights from Edge Analytics Systems

One reason cities deploy video cameras is to better understand how their citizens use infrastructure and how municipal teams deliver services. The EdgeVis system can link any camera to any video management system (VMS) or analytics platform, a market that is rapidly expanding as cities use these tools to gain insights that can improve operations.

For example, if a video feed shows bottlenecks at one intersection, data analytics can predict which other corners are likely to be affected, allowing managers to adjust traffic lights or reroute vehicles as necessary. Cities can also use video data for other use cases, such as directing drivers to the nearest available parking space. Relieving traffic congestion improves not only quality of life but city air quality as well.

“Cities will continue to add more analytic capabilities at the edge,” Wallberg predicts. “The secret sauce that makes them work is delivering reliable live streaming video over cellular networks.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

QSRs: Want a Side of Vision AI with That?

When you think of AI, you don’t typically think of restaurants. But food service was one of the industries most disrupted by the pandemic and its aftermath. The rise of third-party delivery services has had an impact as well: A mismanaged meal delivery can damage the reputation of the restaurant along with that of the driver. And it turns out that AI has a lot to offer this new culinary landscape.

Our guest is Hauke Feddersen, VP of Operations at the smart software automation provider PreciTaste. He’ll talk about the challenges of QSR kitchens that are pressure cookers at the best of times, and how edge technology and vision AI can create efficiencies there that lead to fresher food reaching customers, faster. Because, as Feddersen reminds us, with QSRs “it’s not the food that is fast, it’s service that is fast.”

What’s driving the current demand for AI in the food service industry?

I’m a firm believer in the fact that there can only be a solution if there’s a problem. And this industry is facing problems. There is a strong, unmet demand for labor, and a lot of labor churn. And the churn brings with it the fact that a lot of established best practices, a lot of established know-how, tends to get lost.

The demand patterns have also shifted since the beginning of the pandemic, which makes it very difficult for operators in the kitchen because they have such limited data. Their only window into the reality out in the restaurant is the KDS, the kitchen display system. The KDS tells them what has been ordered in the past, but they don’t have a system that forecasts what will happen next.

So we are turning restaurants into data-driven operations. AI is incredibly good at solving an equation with almost unlimited variables—whether it’s traffic patterns, historical sales, the sales of the last hour, the sales of the last few days—to predict demand better than any human could. This helps the kitchen crew by taking the cognitive load off them, and making sure that the individual station basically just has to do what’s on the screen.
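
As a toy sketch of that idea (not PreciTaste’s actual model), the function below blends the typical demand for an hour of the day with the momentum of the most recent sales; all numbers are invented.

```python
# Toy demand-forecast sketch -- not PreciTaste's model.
from collections import defaultdict

def forecast(history, next_hour, last_hour_sales):
    """history: list of (hour_of_day, units_sold) tuples from past days."""
    by_hour = defaultdict(list)
    for hour, units in history:
        by_hour[hour].append(units)

    same_hour = by_hour.get(next_hour, [last_hour_sales])
    seasonal = sum(same_hour) / len(same_hour)        # typical demand at this hour

    prev = by_hour.get((next_hour - 1) % 24, [])
    prev_typical = sum(prev) / len(prev) if prev else last_hour_sales
    trend = last_hour_sales / prev_typical if prev_typical else 1.0

    return seasonal * trend                           # scale history by momentum

# Example: the noon hour usually sells ~82 units, and today runs ~20% hot.
history = [(11, 50), (11, 55), (12, 80), (12, 85)]
print(round(forecast(history, next_hour=12, last_hour_sales=63), 1))  # ~99.0
```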

And there is so much more delivery now. All of a sudden the customer that used to be standing in front of you is somewhere else entirely, waiting for a delivery. That has huge implications for order accuracy. “You forgot the Happy Meal for my kids!” “My apologies. And here’s an extra Happy Meal toy for you.” Everybody is happy. But you can’t do that if the customer is 10 miles away and the food has just been delivered. You have to get it right the first time.

In 2020 PreciTaste launched an auto-accuracy verification tool to go with its QSR brain platform. Cameras mounted to the ceiling watch what is happening in the restaurant and what is being added to each bag, so the system can tell whether the Happy Meal toy has been put in or is missing, and whether the correct bag is handed out the window to the correct customer or delivery driver.


How else are third-party delivery services affecting restaurant operations?

This situation changes how the customer is perceived by the restaurant. The delivery customer is not necessarily your customer, and you don’t know that person; they are anonymized by the platform. When you know your customer and have a direct interaction with them, you own the entire customer experience from order to delivery. Now, all of a sudden, restaurants are just one part in the middle of the transaction. But when things go wrong, the feedback to them is still prompt, and it’s expensive. Refunds for inaccurate orders on platforms like Uber Eats are very, very severe.

The kitchen is already a stressful environment, and it’s sometimes close to magic that teams are able to churn out the amount of meals they do in one hour, and have them all delivered and all reach the right customer. So the best thing that we can do to help that is to reduce that stress, reduce the cognitive load, make sure that the processes flow and that inventory is available at all times so that this very well-oiled machine doesn’t have to stop.

Talk about the investments in technology needed to implement these AI solutions.

This is a nickel-and-dime business, and there is not a lot of money to be wasted, so investments need to be targeted and solution oriented. The KPI improvements must be real for the customers.

We are strong believers in edge AI. Everything we do runs on small form factor computers like the Intel® NUC. And one of the main reasons is price. We intend to have the fully fledged solution installed for between $2,000 and $5,000 max, including all the cameras, all the edge devices, and all the networking kit that is required.

And if security cameras are already installed that are RTSP compliant—meaning IP cameras—then we absolutely love using existing video streams. Vision AI does not need perfect imagery; very few pixels are actually sufficient to run very sophisticated models. Our premise is: What the human eye can see, we can teach the computer to see. As soon as that data is digitized, we upload it into the brain part of our edge AI installation, and that then makes predictions based on what it has seen.

So the role of edge AI in this is very important for multiple reasons. First: cost. Cloud-AI platforms tend to be very, very expensive over time. Second: the seamless integration and the low-latency inference that we get from these devices, independent of the internet. Even if you cut the internet connection, our solution will continue to run.

And the third, very important aspect: managing PII, personally identifiable information. We mount the edge device that captures data from an ordinary security camera only a few feet away from the camera. The vision data, the PII part, is thrown away immediately, and the only thing left is: There are six people waiting to order; there are 12 cars in the drive-through and two of them have ordered already.
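
A minimal sketch of that privacy pattern, assuming an IP camera reachable over RTSP (the URL below is a placeholder) and using OpenCV’s stock pedestrian detector as a stand-in for PreciTaste’s models: frames are processed in memory at low resolution, only aggregate counts leave the device, and no image is ever stored.

```python
# Privacy-preserving edge loop sketch -- illustrative only.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Low resolution is fine: "very few pixels" suffice for counting.
    small = cv2.resize(frame, (320, 240))
    boxes, _weights = hog.detectMultiScale(small)
    publish = {"people_waiting": len(boxes)}  # only this leaves the device
    print(publish)
    # frame/small are discarded each loop; nothing is written to disk.
cap.release()
```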

Can you share any PreciTaste use cases?

My favorite one is Chipotle. Chipotle runs an amazing operation—it’s scratch kitchen at its finest. Raw ingredients being cut, being spiced, being marinated, being cooked in the restaurant itself; they start with raw avocados and raw tomatoes in the morning to make their delicious guacamole. Our solution is about inventory sensing at the front-of-house makeline, as well as the digital makeline in the back of house that is used for delivery orders. It is always sensing how much inventory is present, how fast the inventory is depleting, and then advising the crew on what to cook next, and when.

For example, the chicken process, just because it’s so artisanal, so scratch kitchen, takes them 25 minutes from the instruction to the crew member, “Please make chicken now,” to the chicken hitting the front-of-house makeline. So you have to know 25 minutes in advance when you need to restock. And of course the demand patterns vary throughout the day.

So, at lunchtime a full pan still means you have to cook more right away. An hour later, half a pan means you can leave it for another 10, 15 minutes—because it’s still great food—and please cook something else first; you’ll only need to cook more chicken 20 minutes from now. The AI is very, very good at predicting what will happen, and helping the crew get to the point where they never stock out, but they can serve the freshest food possible.
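
The timing logic Feddersen describes can be sketched in a few lines: project the pan level forward by the 25-minute cook time and trigger a restock before it crosses a safety threshold. The numbers below are illustrative; the real system learns depletion rates from its sensing.

```python
# Restock-trigger sketch based on the chicken example above.
LEAD_TIME_MIN = 25          # cook time from instruction to makeline
SAFETY_LEVEL = 0.1          # never let the pan drop below 10%

def should_cook_now(pan_level: float, depletion_per_min: float) -> bool:
    """pan_level: fraction of a full pan (1.0 = full)."""
    projected = pan_level - depletion_per_min * LEAD_TIME_MIN
    return projected < SAFETY_LEVEL

# Lunch rush: a full pan draining 4%/minute is empty in 25 min. Cook now.
print(should_cook_now(1.0, 0.04))    # True
# Mid-afternoon: half a pan draining 1%/minute can wait.
print(should_cook_now(0.5, 0.01))    # False
```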

At the beginning of each project we always have a passive phase, in which we capture how well the restaurant is performing without our help. And then, after we switch on our software suite, we compare that to the benchmark. That’s actually our biggest selling argument, just saying, “This was before, and this is now.” It’s always worked.

What is the value of working with Intel and its technology?

The Intel® NUC 12th Generation is a powerhouse in the tiniest imaginable form factor, and it is extremely reliable. We can mount them anywhere, even if the restaurant doesn’t have a server closet or a proper office. I just really like working with those devices. Same goes for the Intel® RealSense™ camera.

OpenVINO has also helped us. With OpenVINO we can port our models to run on the CPU or the integrated GPU. This unlocks an abundance of devices that we can potentially use, which has been especially important in the last two years during the supply chain crisis for digital components.
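
That portability is visible in the API itself: the same model file can be compiled for whichever devices the runtime reports as available. A small sketch follows, with a hypothetical model file name.

```python
# Sketch of OpenVINO device portability -- model name is hypothetical.
from openvino.runtime import Core

core = Core()
print(core.available_devices)            # e.g., ['CPU', 'GPU'] on a NUC

model = core.read_model("qsr_vision.xml")
for device in ("CPU", "GPU"):
    if device in core.available_devices:
        core.compile_model(model, device)
        print(f"compiled for {device}")
```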

Where else do you see AI going within the food service industry?

I think we’ll see a lot more implementation—in front of house that’s visible to the customer, as well as back of house to optimize processes. It’s about producing more with less: more food with a smaller crew, or with the same size crew that now does 20% more.

And we’ll see the technology being specialized, so that those third-party solutions can deliver multiple different food items from one kitchen—you can have Mexican food paired with Italian food, paired with sushi, paired with wings—all delivered by one driver. Then everybody around the table can enjoy the food they want, and they won’t be limited to either all getting the same thing or having to pay three delivery fees.

I also think that our industry will get a lot more precise in its predictions in order to eliminate the stigma that QSR food sits around all day, which some people still associate with these kinds of restaurants. It will be fresher food, and the food can be more interesting if it doesn’t have to be optimized for shelf life. Personally, as a consumer, I’m very excited about that part as well.

Are there any final thoughts you’d like to leave us with?

The thing I like best is that this is not science fiction. This is technology that is out there today, and it’s improving the lives of customers already, unbeknownst to them. What I would love to see in the future is a sign or a badge outside a restaurant or in your favorite delivery app that says, “Hey, this restaurant optimizes quality utilizing state-of-the-art vision-AI technology. So, don’t hesitate, don’t worry—you’ll get the best quality from this location every time, because it’s managed by a system that is entirely designed to do just that.”

Related Content

To learn more about AI in the food service industry, listen to The Recipe for AI in the Food Industry: With PreciTaste, and read When the Customer Experience Feels Deeply Human. For the latest innovations from PreciTaste, follow them on LinkedIn.

 

This article was edited by Erin Noble, copy editor.