Smart Stores Are on the Path to Net Zero

For businesses with large estates and many physical assets, managing energy use is a big deal. Letting assets go unrepaired, running heat when no one is in the building, and having lighting on a strict, unchangeable schedule not only hurts the bottom line—it also increases their environmental footprint.

Until recently, the best solutions for tracking and managing energy use produced reports that lagged by at least a day. To be fair, they looked pretty good compared with the next-best tool: paper forms.

But now, IIoT technology is evolving to let businesses with large properties easily connect to their assets to monitor—and even control—their energy expenditure. This helps them reduce waste and improve efficiency, with an eye toward eventually becoming exporters of energy.


A Roundabout Path to Sustainable Buildings

The journey to sustainability is not always a straight line. Take Hark Systems, a provider of energy analytics and IIoT solutions. The company originally built an IoT platform to monitor environmental conditions like temperature and humidity for pharmaceutical companies. “But then we were asked to do all sorts of crazy things,” says Jordan Appleson, Hark’s founder and CEO. “We were asked, ‘Can you monitor radiation in uranium mines? Can you monitor air quality?’”

Because the platform they’d built could work with almost any kind of asset or sensor, Hark was able to expand beyond pharma into other industries.

Soon, grocery stores started approaching the company about their energy challenges. Supermarkets are packed with energy-hungry assets like refrigerators, generators, bakery ovens, and heating systems. “They were spending £10 or £12 million a year in additional fees when energy prices changed,” says Appleson. “They wanted a way to monitor and react to that in real time.” But Hark discovered these retailers weren’t just worried about cost—they were also concerned about their environmental footprint.

Metallica Brings the Proof

The Hark Platform uses Intel®-powered edge gateways running Hark software, which connect to energy meters, building management systems, and other physical assets (Figure 1).

Figure 1. The Hark Gateway runs on the edge and physically connects to assets. (Source: Hark Systems)

“Everything that we do today has been deployed on Intel at the edge in one capacity or another,” says Appleson. “You’ve got all these devices that speak all these different protocols, and you’re always going to need edge computing to cost-effectively bridge that gap. That’s what Intel-powered gateways do for us.”

Hark’s solution uses a machine learning model to forecast energy usage or suggest ideal actions based on historic information. Other data, such as information from occupancy sensors and weather forecasts, can come into play as well. For example, rather than setting a schedule where the lights go on at 9 a.m. and off at 5 p.m., a retail store can automatically lower the lights when there are few people in the store and turn them back on as occupancy rises.
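
To make that concrete, here is a minimal sketch of the kind of occupancy-driven control loop described above. It is illustrative only: occupancy_count(), set_light_level(), and the threshold are hypothetical placeholders, not part of the Hark Platform API.

```python
import random
import time

def occupancy_count() -> int:
    # Placeholder: a real deployment would read a connected occupancy
    # sensor; here we simulate a reading so the sketch runs standalone.
    return random.randint(0, 50)

def set_light_level(percent: int) -> None:
    # Placeholder for a command sent to the lighting control system.
    print(f"lighting -> {percent}%")

LOW_OCCUPANCY = 5  # hypothetical threshold: people on the floor

while True:
    level = 40 if occupancy_count() < LOW_OCCUPANCY else 100
    set_light_level(level)
    time.sleep(60)  # re-evaluate every minute instead of a fixed on/off schedule
```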

Customers are often skeptical about how quickly all this can be set up. “I like to say, ‘Give us half an hour and a gateway, and we’ll get you up and running,’” Appleson says. He proved this to one customer by almost instantly controlling the lights in a massive building to flash on and off to the beat of a Metallica song.

Smart Stores, Smaller Footprint

Sainsbury’s is a big company with a big goal: It’s the second-largest supermarket chain in the UK and aims to reach Net Zero by 2040—meaning the total amount of energy it uses is offset by the amount of renewable energy it generates on-site.

In 2018, Sainsbury’s contacted Hark looking for a solution that would help it track, monitor, and control the energy its assets were using. An initial monitoring session revealed which assets were consuming the most energy—including a broken piece of equipment that was drawing much more power than it should have. Sainsbury’s signed on to have the company implement the Hark Platform on 20,000 assets in 40 asset groups, including lighting and refrigeration.

In the Sainsbury’s implementation, Hark Gateways retrieve more than 2 million readings per day in each store, and stream the data in real time to a cloud-based dashboard. The platform can detect anomalies and send out alerts of potential issues with mission-critical assets; in fact, it’s identified problems that have saved 4.5% of lighting costs so far.

The solution can control certain asset groups via the edge gateway. “When the store opening times change, our system automatically receives that information and deploys a new automated schedule to the edge based on preset profiles,” says Appleson. “And when we have energy price spikes in the winter, within 60 seconds of an automated notification coming into our system from the utility provider, our system will orchestrate a profile change to reduce the load in the building.”
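
A hedged sketch of that price-spike response might look like the following. The threshold, profile names, and deploy_profile() helper are assumptions for illustration, not Hark’s actual interfaces.

```python
PRICE_SPIKE_THRESHOLD = 0.30  # £/kWh; hypothetical trigger level

def deploy_profile(gateway_id: str, profile: str) -> None:
    # Placeholder: push a preset load profile down to an edge gateway.
    print(f"{gateway_id}: switching to profile '{profile}'")

def on_utility_notification(message: dict, gateways: list[str]) -> None:
    # Called as soon as the utility provider's notification arrives,
    # so the profile change can go out within seconds.
    profile = "reduced-load" if message["price"] > PRICE_SPIKE_THRESHOLD else "normal"
    for gateway in gateways:
        deploy_profile(gateway, profile)

# Example: a winter price spike triggers an immediate load reduction.
on_utility_notification({"price": 0.42}, ["store-001", "store-002"])
```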

Sainsbury’s also needed to increase visibility into its assets. Before implementing the Hark solution, Sainsbury’s asset groups and industrial depots were “completely disparate,” says Appleson. Now the retailer can monitor and control everything from a central location.

Smart Buildings Will Have Power to Spare

Appleson suggests that in the future, businesses like Sainsbury’s will be able to become microgrids. A truly sustainable building can generate its own power and sell it back to the grid, effectively getting carbon-free power.

Much of the technology needed for this to happen—such as solar panels, energy storage units, and platforms like Hark that connect and monitor these things—already exists. The electric network in the UK isn’t yet set up to track and bill for energy in this way—but when it is, Hark will be ready to lead the way to Net Zero for environmentally conscious businesses.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI at the Edge Spurs New Industrial Opportunities

The world is moving fast, and manufacturers must be able to keep up with the pace of change. Luckily, with technologies like AI, machine learning, computer vision, and edge computing, solution developers have the tools to help them do so.

And we are already seeing major results—both inside and outside the factory.

Product Defect Detection

For instance, smart manufacturers have started to deploy AI at the edge on the shop floor to reduce the risk of unplanned shutdowns and production issues. By automating the process with AI platforms like the Intel® OpenVINO™ toolkit, image analysis can be performed directly on smart factory equipment, and workers can be quickly notified of any issues. This reduces error-prone manual work and stops problems before they snowball.
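
As a rough illustration of what such an edge inspection step can look like, here is a minimal inference sketch using OpenVINO’s Python API. The model file, image path, and class mapping are assumptions for the example, not details of any specific vendor’s solution.

```python
import cv2
import numpy as np
from openvino.runtime import Core  # pip install openvino opencv-python

core = Core()
model = core.read_model("defect_classifier.xml")  # hypothetical IR model file
compiled = core.compile_model(model, "CPU")       # run on the edge device's CPU

frame = cv2.imread("line_camera_frame.png")       # frame from the line camera
blob = cv2.resize(frame, (224, 224))              # match the model's input size
blob = np.expand_dims(blob.transpose(2, 0, 1), 0).astype(np.float32)

scores = compiled([blob])[compiled.output(0)]     # classify at the edge
if scores[0].argmax() == 1:                       # class 1 = "defect" (assumed)
    print("Defect detected: notify the operator")
```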

And with tools like the Hitachi Industrial Edge Computer CE series Embedded AI model, which leverages OpenVINO, adding these advanced capabilities to the factory becomes simple. For example, the Hitachi solution can detect product defects and equipment issues from multiple production lines and devices simultaneously—speeding up the time it takes to alert operators and address the problem.

Supply Chain Management Gets Streamlined

OpenVINO is also being used outside the factory to tackle supply chain issues.

With just-in-time manufacturing, staying ahead of supply and demand was already on manufacturers’ minds well before 2020, but when the pandemic hit, it put a lot of pressure on their digital transformation timelines. It is now clear that traditional approaches can no longer keep up with new supply chain demands.

People are shopping in bulk more than ever, and home improvement projects have skyrocketed. As a result, consumers are finding many shelves empty and supplies out of stock.


While manufacturers cannot consistently control the availability of raw materials, one thing they can handle is how their goods are delivered to stores. Instead of sending out goods as soon as possible, they can meet demands and save money by waiting until transport vehicles are at 100% capacity.

To do this, they need a combination of computer vision and AI, which allows them to constantly monitor shipping containers, fill vehicles, and alert managers to operational status. With advanced AI algorithms and edge computing, manufacturers can get dock occupancy status and wrong-placement detection, and even deploy automated robots to handle, load, and unload freight.
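
To sketch the idea under stated assumptions, the snippet below estimates how full a trailer is from a vision model’s detections and holds the vehicle until capacity is reached. detect_parcels(), the Detection fields, and the capacity figure are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    volume_m3: float  # estimated from the bounding box plus depth data

TRAILER_CAPACITY_M3 = 90.0  # hypothetical trailer volume

def detect_parcels(frame) -> list[Detection]:
    # Placeholder: run a detection model (e.g., via OpenVINO) on a dock
    # camera frame and return one Detection per parcel found.
    return []

def fill_ratio(frame) -> float:
    loaded = sum(d.volume_m3 for d in detect_parcels(frame))
    return loaded / TRAILER_CAPACITY_M3

def should_dispatch(frame) -> bool:
    # Hold the vehicle at the dock until it is fully loaded.
    return fill_ratio(frame) >= 1.0
```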

To ensure these AI applications don’t compromise on the smart factory’s performance, power, and cost, some manufacturers have turned to Avnet Embedded, a leader in embedded compute and software solutions. With its MSC C6C-TLU module, based on the 11th Gen Intel® Core processors and paired with OpenVINO, applications can withstand rugged environments, meet performance demands, and process data in real time.

Automating Human Response

And, of course, manufacturers have to worry about the human factor when it comes to product and equipment inspection. For instance, factory workers typically have their own way of doing things, which can be problematic when you’re trying to keep production and quality consistent.

With computer vision, AI, and machine learning, manufacturers can now pair human behavior analysis with their assembly line machine metrics to understand the performance of each operator.

Vecow enables this capability with the Vecow Human Behavior Analysis solution, which includes the VHub AI Developer software platform. With the solution, developers can create AI models and applications with computer vision capabilities. Those apps can be connected to a factory’s existing cameras to collect and analyze data at the edge and detect inconsistencies. Vecow leverages Intel® Core i5 and i7 processors for computing power, and OpenVINO for AI model generation.
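
One way to picture that pairing: compare each operator’s observed cycle times from the vision system against the baseline machine cycle for the same station. This minimal sketch uses made-up field names and numbers; it is not Vecow’s actual data schema.

```python
from collections import defaultdict
from statistics import mean

# (operator_id, station, observed_cycle_time_s) from camera-based analysis
behavior_events = [("op-1", "A", 42.0), ("op-2", "A", 55.5), ("op-1", "A", 44.1)]

# Baseline machine cycle time per station, from the line's machine metrics
machine_cycle_s = {"A": 40.0}

overhead_by_operator = defaultdict(list)
for operator, station, cycle in behavior_events:
    # Overhead: how much the human-plus-machine cycle exceeds the machine alone
    overhead_by_operator[operator].append(cycle - machine_cycle_s[station])

for operator, overheads in sorted(overhead_by_operator.items()):
    print(f"{operator}: mean overhead {mean(overheads):.1f}s over machine cycle")
```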

Deploying AI to Robots

Finally, robots equipped with AI and computer vision are also handling quality control these days. For instance, when robotic arc welders are used in high-production applications such as the automotive industry, it can be difficult for a manual inspector to spot and catch potential defects.

But when you add solutions like the Edge Arc Welding Defect Detection from ADLINK, a global manufacturer of edge computing solutions, AI and computer vision can be added to the process. Powered by ADLINK Edge IoT software, OpenVINO, Intel Core processors, and Intel® Movidius Myriad X VPUs, robotic arc welders can capture, process, analyze, and act on data before issues become bigger problems.

These are just a few of the possibilities AI and high-performance edge computing can offer smart manufacturers. If you are struggling to deploy these AI capabilities and boost your industrial applications, check out the Intel® Edge AI Certification Program or take the 30-Day Dev Challenge.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

5G is Here: What Does it Mean for the Factory?

Related content

To learn more about 5G and the smart factory, read Moving the Needle to Industry 4.0 with 5G and the Edge, and listen to The 5G Factory of the Future with Capgemini.

Transcript

Corporate Participants

Christina Cardoza
Associate Editorial Director, insight.tech

Philippe Ravix
Global Digital Manufacturing Solution Architect, Capgemini

Sander Rotmensen
Director of Industrial Wireless Communication Products, Siemens

Martin Garner
Chief Operating Officer, CCS Insight

Presentation

Christina Cardoza: Hello and welcome to the webinar, “5G is Here: What Does it Mean for the Factory?” I’m your moderator, Christina Cardoza, Associate Editorial Director at insight.tech.

(On screen: intro slide introducing the webinar topic and panelists)

And here to talk more about this topic, we have a panel of expert guests from Siemens, Capgemini, and CCS Insight.

So before we jump into our conversation, let’s get to know our guests. I’ll start with you, Philippe. Tell us a little bit more about yourself and your role at Capgemini.

Philippe Ravix: Happy to be here. I’m Philippe Ravix. I’m based in France, and I’m part of Capgemini’s Digital Manufacturing offering at the group level as a solution architect, or CTO. I have worked in the manufacturing IT/OT landscape for more than 10 years, supporting clients in their digital transformation journeys. My role at Capgemini is to support the global business lines during the sales, pre-sales, or first phase of a project: to shape and define the architecture, and to bring our best partners into the loop to support the digital transformation of our clients. I also have another role with the Intel Alliance: I lead the Intel Alliance smart services stream, meaning IoT globally, both classical IoT and the industrial IoT landscape.

Christina Cardoza: Great to have you. Sander, I’ll turn to you next.

Sander Rotmensen: Okay, thank you. So my name is Sander Rotmensen. I’m with Siemens Digital Industries, where I’m responsible for our industrial wireless communications portfolio, which includes industrial wireless LAN, 2G, 3G, 4G, WiMAX, and, since last year, 5G devices as well. We’re currently working on creating our full 5G ecosystem, which consists of a private 5G network, and I’m looking forward to discussing what 5G is here for and what it means for the factory.

Christina Cardoza: Great, looking forward to hearing that as well. And last but not least, our good friend Martin. Thanks for joining the webinar.

Martin Garner: Yes, thank you, Christina. So I’m Martin Garner. I work for CCS Insight. We’re a medium-sized firm of industry analysts based in London and in the US, and I lead the work we do in industrial IoT, the research that we do, and I’m also the COO of the firm.

Christina Cardoza: Great. Thanks, everyone, for being here today. Let’s take a quick look at our agenda.

(On screen: slide outlining the webinar’s agenda)

Our guests are going to go through the promise of 5G; what it means for industry and manufacturing; key considerations when you’re looking to deploy 5G networks; how 5G is already being used in the factory and where it’s going; how private 5G compares to public 5G and when you would use one versus the other; and how 5G compares to other wireless technologies like Wi-Fi 6. We’ll also look toward the future of where 5G is going.

So let’s get started.

(On screen: The promise of 5G and illustration of buildings connected to a network)

There has been a lot of hype around 5G recently, and there’s this expectation that it’s going to transform the industrial space, leaving many manufacturers wondering how they can take advantage of this technology.

So Martin, I want to turn this first question to you. What has been driving this interest and rapid adoption towards 5G?

Martin Garner: Well, I think– Thank you, Christina. I think it’s clear that if we have good high-bandwidth wireless communications across a wide space, like a factory floor or something, if we have that, and we can trust it, and it has industrial features, then we can unlock quite a lot of flexibility for production systems on all kinds of factory sites. We can also do much more easily things like autonomous robots, autonomous vehicles. It would change the logistics on-site quite a lot, and 5G is really interesting because it’s the first wireless networking technology that’s designed to do that job.

Now, the good thing about private 5G is that you can do it yourself, and you don’t have to work with a telco, and you have full control over all the parameters and the security and the integration and so on. So I think those are the things that are driving it. I think it’s fair to say it’s early days, and maybe expectations are running a little ahead of reality at the moment.

Christina Cardoza: I love that you mentioned that 5G comes with more promises than previous generations of mobile networks. The hype around 5G has been great lately, and with every G it seems there has been this hype around it. So Philippe, can you talk a little bit about how 5G compares to previous generations of mobile networks, and why it holds so much more promise for the manufacturing industry than before?

Philippe Ravix: Thank you, Christina. So, compared to previous generations of cellular technology, 5G really is a promise for manufacturing. First, because the bandwidth, the speed, and the latency will drastically improve in terms of performance. And it’s the first network, with the promise of 5G, that can support the different use cases in manufacturing, so there is no comparison. But in manufacturing today, there is not much cellular connectivity supporting the industrial or manufacturing process. We are more often discussing Wi-Fi technology for general industrial connectivity; wired connectivity for machine and PLC connectivity; or classical IoT connectivity like BLE for IoT sensors. Meaning that cellular, even 4G or LTE, is not widely deployed in manufacturing. But with 5G we have, as Martin explained, an all-in-one, with slicing. You can merge the low-latency requirements or features; you can also use the [LT1], so the low volume; you can also have classical 4G features; and you can have high bandwidth. So you can have everything in one network. But now the point is: how and when will manufacturers adopt 5G and deploy it in the plant? That is not the case today. And also, what about the new Wi-Fi 6, which has more or less the same features and performance as 5G?

(On screen: The promise of 5G and illustration of buildings connected to a network)

Christina Cardoza: Great points. We’ll be getting to the private 5G networks, like Martin mentioned, a little bit later in the conversation, as well as what Wi-Fi 6 means in all of this. But let’s first look a little bit deeper into what 5G means for the factory.

(On screen: The factory of the future with illustration of robotic arms on the assembly line)

So Sander, I want to start with you on this topic. What is the significance of 5G adoption, would you say, in the manufacturing space? And can you talk a little bit more about some of the benefits Philippe was just talking about? What are the opportunities that this technology really presents?

Sander Rotmensen: Yes, I think I can, Christina. So first of all, there is a reason why we joined the 3GPP in 2016: to ensure that requirements from industrial applications and use cases were part of the standardization. It was not just Siemens; there were other companies as well, but we wanted to make sure our voice was heard. And if you are looking for a solution to a challenge in a factory anywhere in the world, you want to discuss it with people, and you want to make sure it’s part of the standardization as soon as possible. Some of those points you will see in the upcoming releases of 5G. So 5G comes in different releases. Currently our cell phones work on Release 15 most of the time; I think this year we will see some Release 16 coming ahead. But starting with Release 16, it gets interesting for industry, because that is where those promises Philippe and Martin mentioned are going to be part of the standard, and hopefully also of products.

So when we talk, for example, about features in Releases 16 and 17, we talk about URLLC (Ultra-Reliable Low-Latency Communication). That will allow us to communicate with round-trip delay times of less than 10 milliseconds, and with a reliability of 99.999%. If we combine those and really bring this to life in a factory setting, we are able to take some intelligence out of moving equipment, which makes it lighter and simpler, with central control so the machines can work together, for example. This is where we see that coming together in the future. When we look at process automation, we talk about massive machine-type communication, where we see possibilities with power-saving communication: having a wireless transmitter operated on a battery, making it completely wire-free in the future. I think these are just two examples of the opportunities. There are many more, and we’ll probably cover some of them later in this session.

Christina Cardoza: Now, you mentioned some standardization that needs to happen, as well as different releases or flavors of 5G coming out. So I’m curious, Sander, if you can lay out the landscape today a little bit. Where are we actually with being able to take advantage of all the benefits and features 5G has to offer for the manufacturing industry, and what’s still to come?

Sander Rotmensen: Oh, there’s so much to come, but yes, that’s a good point. If you look at the standardization, we are just at the forefront of getting Release 17 released, but if you look at what is available in the market currently, it’s just Release 15 technology. So typically, we are three years behind. Release 15, for example, was released in December 2018, and the first products hit the consumer market in 2020, whereas in industry we’re always a little bit behind, because we need to ensure the products work. Imagine a 5G device stopping work and your nuclear power plant just stops. That would be devastating, and therefore we need proven technology. We want to make sure it works, so we do system testing, not only product testing. Those things are very important to us, to our customers, and to the users of 5G in industrial settings, and this is what we are looking at right now.

So in July 2020, Release 16 was released; you can expect a product somewhere in spring or mid-2023. And what we also see currently is a steep uptake in private networking. Private 5G networks, you’ll see them coming now when we talk about standalone networks; they are becoming more visible, and people are starting to use them as well, but it’s still in a trial phase. There are not many companies really putting this out in the field. I think we still need to wait maybe around two years to see that really become a mainstream wireless technology.

Christina Cardoza: And Philippe, you’re working with a lot of customers to deploy these technologies. So I’m wondering what trends you’re seeing currently and how you’re seeing the manufacturing space adopt 5G today.

Philippe Ravix: Yes, and I fully agree. We consider that we are at the beginning of the journey, meaning that we don’t see any clients with a 5G deployment at scale. Consider the different types of manufacturing, like process, discrete, or assembly, or another segmentation: factories, where everything is in the plant, and distributed assets, which we see in renewables, chemical plants, ports, and so on.

For the first one, the factory, we are at the beginning of the journey, with small proofs of concept. Our clients want to evaluate the business value and the key 5G use cases they will be able to support in the future. They also want to understand the total investment they will have to make for a 5G deployment.

For the second segment, distributed assets, the market is further along. We have many pilots for customers in Europe and the US, for example for smart airports, smart ports, or mining companies. We are more in the pilot phase, with 5G deployed, but for testing, not full deployment. So this is where the market is today. There is also geo-segmentation: adoption in Asia is much further along than in Europe and North America. So we have segmentation by industry, but also by country or geography, which we should take into consideration. What we expect at Capgemini, as a system integrator, is a mature market not before 2024/2025.

(On screen: Key 5G considerations with illustration of person looking at a 5G network abstract)

Christina Cardoza: Great, and since we are still at the beginning of this, and this is a new technology for everybody, I want to turn back to you, Martin. Can you talk about some of the challenges that you’re seeing when it comes to adopting 5G?

Martin Garner: Following on from what Sander said: a couple of years after the standardization is when we’ll see products that we know work well enough to go into factory environments. At that point, for the factory, it’s still quite a new technology, and it’s a complicated technology. So customers don’t have the expertise and the experience they need to use it properly. They think it will do certain things, but they don’t really know yet.

And the other thing I think is really interesting here is that we need to think of it as much more than just connectivity. It’s not just a substitute for an Ethernet cable. What we’re really offering here is highly connected edge computing, and so that’s a whole stack of software, and it will tend to bring potentially quite a lot of change. So we need to make sure, first of all, that the workers are happy with the technology in the factory, and we know about the conspiracy theories around 5G and so on. We can expect a lot of change in working practices and processes, in optimization, and so on. And so, as Philippe said, lots of proofs of concept, lots of security vetting, and that might take 12 or 18 months, something like that. And then when you’re ready to go, you can’t just go next week. You have to wait until you have a planned shutdown, and that may be months away. So you do all of your work, and then you still have months more delay before you’re allowed to start setting it up in the factory, because the factory is working all the time. There are some big, long cycles built into the way people will adopt 5G here.

Christina Cardoza: Yes, that’s a great point that you can’t shut down the factory to start doing all this stuff. You need to really have a plan for this. And speaking of connectivity, 5G brings up the conversation or the debate, should we be wired, or should we be using wireless connectivity? And Sander, I know we’ve had this conversation before. How do you decide between wired and wireless connectivity, or can you talk a little bit about what the manufacturing floor looks like today?

Sander Rotmensen: So let me first start with my home office. Everything is wired, except for my headset, maybe, because I want to move around. That’s actually where we get to the factory, of course, because in the factory, we have lots of moving equipment, and there a cable is just a struggle, and with current battery lifetimes, AGVs can be operated for a long time, on one charge. We have now augmented workers where workers are being informed during the day what is happening. For example, they wear smartwatches. They could potentially have a smartphone with them, or maybe even a tablet for service purposes, and yes, we will see more and more wireless connectivity coming to the factory, but I will still say if you can use a cable because there is no constraint, use a cable, because you need to keep that air clean of things you don’t need there because spectrum is limited. It’s like the oil of 5G I would almost say. We need to ensure that we use it as little as possible, so we can have the mission-critical applications communicating over a wireless connection in a way they are able to meet the quality of service needs.

Christina Cardoza: Great and in the beginning, we talked about some of the things the industry needs to do to prepare for 5G and make sure manufacturing organizations are ready. But what exactly do these manufacturing operators need to do to prepare for 5G fully? How do you know if you’re ready for 5G, and are there any prerequisites that you need to ensure you have in place before you get there?

Philippe Ravix: Okay, good question. So, are they ready for 5G? Not really. Manufacturers are convinced that with 5G they will be able to build intelligent factories, smart factories, and truly take advantage of technologies such as automation, artificial intelligence, augmented reality, and so on. So they are convinced of it. Now, the first point is that we consider clients and manufacturers need to have initiated, or at least started, their digital manufacturing transformation journey. It’s not only Industry 4.0; it’s much broader. It’s what we call at Capgemini intelligent factories. And they need a clear roadmap, with use cases and the business value for each use case. This is key, because 5G is a technology, and they need to understand why, when, and what the business value is, because of the cost, because of everything.

The second point we see is agility in manufacturing. For the last 40 years or so, there has been a lot of optimization of the process, but most of the time the process is what we call a 1D process: the machines are fixed to the ground, and you can optimize machine one to machine two to machine three. With 5G and mobility, meaning even PLC and machine mobility, you can start to see a process as multi-dimensional, meaning 3D optimization. So rather than being rigid, the client should also engage in optimizing the process in more than one dimension: arranging the line not only in 1D, but in 2D, even with [Z], with 3D process optimization. So that is also a key driver.

So those are the drivers; now for the prerequisites. The first point is that even if everybody is convinced of 5G and its usage, meaning the value of low-latency, critical use cases with a high level of automation for robots and cobots, we need to be sure that the ecosystem is ready. That means the whole end-to-end chain: from the device to the infrastructure, to the network, to the frequency, to the skills as well, because it’s a new skill. It’s not Wi-Fi technology; it’s cellular technology at the plant level in the enterprise. So every device and all the supply chains should be ready for this. It’s really key for the manufacturer, because most of the time in manufacturing it’s critical: we need to be sure that we will have the same level of performance, reliability, and SLA as the existing protocols.

Christina Cardoza: So it sounds like there’s a lot that goes into becoming 5G ready, and that it may be a while before 5G becomes a reality or mainstream in this space. So Martin, I’m curious how else you’re seeing the industry prepare, and once we do get to a place, if we ever get to that place, where we are 5G ready, what happens next? How can you prepare and then plan and implement 5G?

Martin Garner: Well, we touched on some of this with the scheduling that’s needed. I mentioned, but didn’t dig into, the need for security vetting. I’ve heard people say, we designed this lovely IoT or 5G system, and then the security guys got hold of it, and they needed 12 months to assess it. So that’s a hurdle, or a step, we need to go through.

I think the other thing, and this is what we’re starting to get from the proofs of concept, is a really good understanding of where are the good use cases that we can focus on, and what’s going to drive the business case, and what does that start to look like? Because if you don’t understand the business case, you’re not going to do a big rollout yet. I think some of these are starting to be clear, and robots and AGVs we’ve already heard about. I think analyzing video streams and machine learning is another really good one, and there are lots of examples there. Another one I really like is if you’ve got a product that takes a lot of software, how do you put that software into the product on the production line, and 5G turns out to be a really good answer for that. So I think there are lots of angles on this, lots of areas we need to get right, before we can start thinking about a bigger implementation.

The other bit, actually, just linked to all of those, all of the use cases I’ve heard about have a very good sustainability angle through lower waste, lower energy use, and so on, and we shouldn’t lose sight of that. It’s a real benefit where you can get it.

Christina Cardoza: Now, I want to dig deeper, a little bit, into the security aspect of 5G like you mentioned. Security is always top of mind for businesses and customers. So Sander, what threats are you seeing when deploying 5G in the manufacturing space, and how secure would you say 5G is today?

Sander Rotmensen: I think not many, if you do it well. 5G is an evolution, not a new development. It comes from 3G and 4G, and the security has been upgraded to be even better: there is stronger encryption on the air interface, for example. But we also need to think about the benefit of having a private 5G network. If you have a private network, you’re fully separated from the world, and you can decide who gets access to your network. That’s an additional security layer on top of what 5G already brings. You can also think about how you lay out your network: do you really need plant-wide coverage, or do you only need it in the center of your facility, so you don’t even have to go out to the edges? You’re using a licensed-spectrum solution, and the licensed spectrum makes sure you are the only licensee, so you are typically the only one able to use it. There’s no one running in with a hotspot that automatically tries to connect to your Wi-Fi, because you have this licensed spectrum with 5G, which works with a SIM card. With all these things in mind, if you combine that with current security solutions, for example defense-in-depth, where you have different layers of security on top of each other, I think it’s very much doable to create a 5G system that is as secure as a wired system today.

Christina Cardoza: Now, one of the considerations I don’t think we’ve talked about yet is the idea of all of these other technologies advancing on the manufacturing floor, at the same time 5G is rising in adoption. Philippe, how do the emerging technologies like edge computing or AI play a role in all of this?

Philippe Ravix: To summarize, today there are two main platforms, with cloudification. There is the cloud, and one of the key strategies is to put all the IT systems there, even the MES; some customers have big projects to deploy and redevelop their MES in the cloud, so everything is in the cloud. And then we have the edge platform. The edge platform can be seen as a continuum of the cloud platform in terms of architecture, but also in terms of business. The edge is becoming a key platform for manufacturers, because it can handle a lot of use cases, like low-latency, critical use cases, but it also solves problems in terms of data regulation, data security, data privacy, and also TCO. If you don’t want to send everything to the cloud, you can optimize at this level.

So the edge platform, a kind of emerging technology, is now key. It’s not a dream; it’s a reality, and it’s something we deploy in manufacturing with continuity from the edge to the cloud. And with softwarization, meaning the software you can deploy at the machine level, we consider that the famous IT/OT divide no longer exists. Everything will become IT now, even at the low level, at the machine level. And that’s interesting, because if there is IT at the machine level, you can deploy analytics, and more and more complex analytics, at this level.

And with 5G, its speed, and the new technology, you can compute in real time not only statistical process control, so basic trends I would say, but complex analytics and AI at the edge level. So 5G is also a technology to leverage edge and AI at the shop floor level, and at the cloud level too, because you will have this continuity between edge and cloud. And if you want business continuity from edge to cloud, you need the right network in terms of latency.

Martin Garner: I thought Philippe summed it up very well indeed: it’s much more than just connectivity. What we’re talking about really is edge computing in the architecture, and I think that’s where some telcos go a bit wrong. They think of it as just a connectivity thing, whereas really what they’re selling is a proper software stack as part of a bigger architecture. I’m not sure the telcos fully understand that yet. And meanwhile, other things like Wi-Fi 6 have a certain amount of crossover with the features of 5G, so I think if you own a factory, you should really check out the various options for what you’re trying to do. The answer won’t always be 5G. It’s not immediately obvious, so there’s a lot of work these guys need to go through to check out the various options first.

(On screen: 5G Factory Use Cases with illustration of robotic arms assembling a car)

Christina Cardoza: Perfect. So we know where we are and where we want to go, and the benefits and the advantages that 5G can bring to a manufacturing industry if done properly. But I want to talk about what this actually looks like in practice. So Sander, do you have any real-world use cases you can provide to describe how 5G is being adopted and deployed today?

Sander Rotmensen: Yes. So as we said, it’s two years out before it really comes into the factory, but I think I can give you some hints of what we would expect to see first.

So if you look at the factories today and talk about wireless LAN networks, we have a lot of those, because we have been supplying industrial wireless LAN solutions for well over 15 years. So we do have a lot of experience with industrial wireless networking, and what you typically see with a Wi-Fi network is a focus on a single application. For example, there’s one Wi-Fi network for AGVs. There is one Wi-Fi network for the IT services in a factory. There is one Wi-Fi network for an overhead monorail. There is one Wi-Fi network maybe for mobile robots, and that’s all in one factory. The advantage I think 5G has over Wi-Fi, from that perspective, is that it can tie in multiple applications at the same time, really combining AGV systems with, for example, robots that cooperate in the same network due to the low latency, as you could imagine with a car body as you see in the slide here.

(On screen: 5G Factory Use Cases with illustration of robotic arms assembling a car)

You could see a car flow into the factory while, at the same time, doors are being mounted by two mobile robots on the side, because they’re time-synced thanks to next-generation wireless technologies such as 5G.

And maybe to touch on one of the examples Martin brought up earlier, products with large amounts of software: it’s one of my key examples for the future. With next-generation wireless technology, we will ensure that production of cars, for example, is sped up. Today, when a car is manufactured on a classical production line, the moment the car is done it’s put somewhere on a spot, and then someone charges it with some electricity or fills it up with gas, and at the same time a cable is plugged in to load software onto the car. Typically, that process today takes 20 to 25 minutes, but three to five years from now, I expect the amount of data loaded onto the car to double. Then we’re talking about 40 to 50 minutes where a car is just waiting to have software flashed onto it. Imagine you could utilize the already-existing wireless module inside the car (every modern car nowadays, especially here in Europe, needs to be connected for emergency purposes) and load the software onto the car during production in the future. So these are some very exciting things happening in industry, and I’m curious to see where 5G takes us.

Christina Cardoza: Great, and Philippe, is there anything you want to add here, any examples of use cases that you’re seeing 5G already being applied, or that you see it going to be applied to soon?

Philippe Ravix: I fully agree on the use cases. The key use cases will be robotics, collaborative robots, and AGVs. One of the interesting areas is also quality, which is key in manufacturing today, across different kinds of manufacturing, for example in an engineering factory. What we see is that there are different ways to manufacture. Either you say: I know exactly what key parameters I have to put in my PLC to manufacture at the right level of quality. That is predictive: you monitor the parameters, you monitor the quality, you follow the curve and the pattern, and you can derive some insight. Or it’s quite difficult to define these critical parameters because everything is moving, and instead you monitor quality in real time. Based on the quality, you correlate in real time with the data coming from the machines’ PLCs, you define patterns, and you are able to adjust the production and the critical parameters on the machine in real time, because you look at the quality.

So you look at the result, the quality, and then you adjust the process to be sure you are at the right level of quality. This is something very interesting that manufacturers are trying to do, and it’s a pure 5G use case, because you need a strong edge. You need to collect a lot of data in real time, coming both from the quality system and from the different PLCs, and you need AI analytics and machine learning at the edge level, to be able to adjust the configuration of the production line in real time. This is a key use case in life sciences and other industries; it’s very powerful, and it’s certainly a 5G use case.

Christina Cardoza: And Martin, you already brought up a couple of use cases, but I want to give you an opportunity to dig a little deeper into some of those examples if you’d like.

Martin Garner: Well, I’d just like to echo what Philippe and Sander have said: it’s the real-time nature, where you have to provide feedback, often within milliseconds. If you’re painting something, and you’re monitoring the thickness of the paint, the process needs to be absolutely real-time. Otherwise, it can go a bit mad, and you get into what’s called a race condition if you’re not careful, and there aren’t many networks that can really do that, especially wireless networks. But 5G is the first one. So I think it does open up a lot of options here. We just need to get used to using them and getting the use cases right and so on.

(On screen: Private vs. Public 5G with illustration of 5G inside a piece of hardware)

Christina Cardoza: Now, we’ve alluded to the idea of private 5G a little bit already. So let’s take a closer look at how private 5G compares to public 5G, and when you should use one over the other. Martin, I’ll turn this one back to you, since private 5G has lately become such a big topic. I think more spectrum has been allocated, which is making it a little more feasible. Can you talk to me about what private 5G means, its advantages over public 5G, and what organizations should consider?

Martin Garner: Yes, I think actually the 5G community has got the naming a little bit wrong here because private 5G can mean that you own all of the equipment and run it yourself, or it can mean that you have a dedicated slice on the public network. I think that bit’s a little ambiguous. Personally, I think if you run a factory, mostly people will prefer to own all of the equipment themselves, if they can do that, and to do that they need spectrum of course, and that’s not easy in all countries, but it is coming, as you mentioned.

I think if you go that way, the advantages are, first of all, that the data does not leave the site. This is really important because your factory data is the most competitive, sensitive, private data you have in your organization, and so people really don’t want it to leave the site if it doesn’t have to. You also have full control of the features, the security of the services that you light up on it, and if you talk nicely to your supplier, you can have it as a service, so it behaves like OpEx rather than CapEx. But the key point is that you don’t then have to deal with a telco. And I know quite a lot of people are resistant to dealing with telcos for some of these applications.

But the other point I want to bring out is that I don’t think it’s going to be either public or private. It’s quite easy to see that if you have forklift trucks or logistics, or AGVs, they might leave the site, or the logistics may go off your manufacturing site, and so they might need to roam onto the public network at some point. You might also– if you get a problem in your factory network, you might want automatic failover to the public network as a security measure. So I think it’s going to be both. Not public or private, but some of each. I just don’t think we’re starting there, and I think we’ve got the naming a bit wrong for the moment.

Christina Cardoza: Great insight, Martin. Now Sander, I’m wondering, from your perspective, how you see private 5G in the industrial space, and what you think the importance of this is going to be?

Sander Rotmensen: I think Martin already summarized it very well. Private 5G is the way forward. If we look at networking today, a wired network is also private: you keep it in your factory. A wireless LAN network is also private. So the next evolution is just bringing another technology in. With 5G being the first cellular technology where private networking is standardized, I think this is a huge benefit for industry, and this is why there will be a lot of traction. The flexibility of 5G as a technology, with the different options you can set it up with, is also tremendous, and it helps you tailor your network to the kind of network you need.

If you, for example, use a public network (and you have probably done a speed test once or twice on your phone, just to see how fast it goes), you see that you typically have a very fast downlink but a much-reduced uplink. That’s okay, because we don’t typically upload as much as we download from the internet: YouTube streaming, maybe watching a movie on Netflix while you’re on a train, things like that. Within a factory, the data is not up there in the clouds; the data is on the shop floor. So you want to bring the data up into the cloud, and you actually have a much higher uplink requirement than downlink.

Typically, this works with time slots. If you look at a public network today, you have three downlink slots versus one uplink slot. In a factory, you might want to have this the other way around, and you can only do that in a private network, because all the public networks need to be synchronized with each other so they don’t disturb each other. With a private network, you have more possibilities to set up the network to cater to your needs, or to the needs of your applications, actually. So I think it’s really beneficial for industry to have this freedom.

(On screen: Private vs. Public 5G with illustration of 5G inside a piece of hardware)

Christina Cardoza: I love what you both just said. It reiterates something I feel in this industry, and all industries: we try to pick one solution over the other, but there’s no real one-size-fits-all, so there are a lot of things you need to look at when going on this journey.

(On screen: 5G and Wi-Fi 6 with illustration of a cellular tower)

And one thing all of you have mentioned already is Wi-Fi 6. Now, Wi-Fi 6 is coming up at the same time as 5G, so how do these two technology standards work together, and should you use one over the other? Philippe, I’m going to turn this one to you.

Philippe Ravix: Thank you, Christina. So yes, both cellular and wireless LAN have introduced new technology generations, 5G and Wi-Fi 6. They have more or less the same level of performance, but I think it’s better not to create a competition, and instead to see how those two technologies can work together.

In terms of technology, they are, for sure, two different technologies. The first, 5G, is cellular; the other is wireless LAN. 5G is based on licensed spectrum; Wi-Fi uses unlicensed frequency bands. In terms of authentication, it’s not the same technology, and the same goes for security. And we can see different usage and use cases for each technology. From my side, and from Capgemini’s side, we consider that Wi-Fi 6 can be used indoors, inside the plant, to support the different use cases, whereas 5G can be better for covering a larger area: outdoor and indoor-outdoor use cases. So there is a mix, we know that. The standards bodies are trying to achieve a convergence in technology between Wi-Fi 6 and 5G, but that’s not for today. So today we look at how we can dedicate use cases in a project to Wi-Fi 6 and to 5G, waiting for the full interoperability and convergence of the two technologies.

Sander Rotmensen: And also, I think one of the key reasons Wi-Fi 6 currently has maybe even an advantage is the installed base of Wi-Fi networks. If you look at Wi-Fi, for example, all Wi-Fi 5, Wi-Fi 4, or even legacy devices like the one behind me here still work in new Wi-Fi networks. With 5G, if you install a 5G network now and your end device, your cell phone, doesn’t support 5G, it won’t work in a standalone 5G network. This is one of the things Wi-Fi has figured out a little bit better. And with the current installed base of over 15 years in industry, there’s far more end-device diversity. I think this is something where 5G really needs to learn and grow. It will happen, but I think it will take some years before it catches up from that perspective.

And in the future, I see both technologies going hand in hand. With 5G, we speak about different releases; with Wi-Fi, we speak about Wi-Fi 6, and Wi-Fi 7 is not just on the horizon, I think it’s already in sight. With 5G, we have the releases coming up, and then 6G is a bit further away, maybe. But with all these things, we need to learn from both technologies and try to pick the best of both, and I think in the future, factories will definitely be utilizing 5G alongside Wi-Fi 6, and maybe even other wireless technologies.

Christina Cardoza: Now, earlier in the conversation we talked about wired connectivity versus wireless connectivity. So I’m wondering how Wi-Fi, Wi-Fi 6, or 5G come into play in that conversation.

Sander Rotmensen: That is definitely exciting. So I think because these are both next-generation wireless technologies, we will have more opportunities. They are more reliable. They have lower latency. So things which were in the past not possible without a physical connection could now maybe be wireless. So I think there is a big role, and there will be more applications possible with wireless technologies in the future, and I’m curious to see what our customers come up with while using our products.

(On screen: The Future of 5G with illustration of robotic arm generating lots of data)

Christina Cardoza: A lot to look forward to, and I know we still have a long way to go before we get there, but I want to try to look into our crystal balls a little bit and see what we can expect. Martin, I’ll turn this conversation to you. How do you hope to see 5G really come into its own over the next couple of years?

Martin Garner: Okay, well, I think one of the key things that’s come out from each of us on this call is that it’s very early days for 5G for factories. There’s a lot to get used to before we’re comfortable and happy to go for a larger implementation.

So we’re already seeing lots of proofs of concept so people can find their way with it. I’m sure there will be some difficulties implementing it. Nothing ever goes completely smoothly, and there will also be some difficulties in making the business case. I think the suppliers can help a lot with that. But my expectation is that after we’ve had two to three years of practice with it, we’ll start to get the formula or the template right, and then we can replicate it across customers, across sectors, and so on. And I think really, the suppliers can help a lot with that.

Now, my one concern there is that the suppliers are typically 80% engineers and 20% other. Because once you have 5G, the different vectors of growth are, of course, more customers, but also doing more for an existing customer: scaling the system up, adding more use cases, and so on. That’s a really important area, and I think all suppliers will need a lot of account management people working systematically on that to really get the most out of it. Maybe 80% engineers and 20% other is just the wrong mix for what we need in about three years’ time. So I’m hoping we can address the account management side, and really work the system and get more out of it that way.

Christina Cardoza: And it sounds like these are all things that no one company can do alone. We’re talking about standardization for the industry. We’re talking about broader adoption, new technologies. So I know Intel has been a big proponent in this space with the rise of 5G. Philippe, I’m wondering how you’re working with them and other partners to really make this a reality for your customers and for the manufacturing industry?

Philippe Ravix: Yes, for sure. Projects today are so complex that it’s a question of ecosystem and partners: software vendors, technology partners, system integrators, even the client. I think today no single company is able to deliver such a project alone; there is so much complexity.

With Intel, we have had a strong partnership for many years on the technology side, and we have a specific stream in 5G. So we collaborate with Intel on the engineering side in 5G, delegating experts to work with the internal R&D teams. We have also co-developed with Intel an edge platform called ENSCONCE that we deploy in manufacturing transformation projects.

What is interesting is that ENSCONCE is based on Intel and open-source components: from Intel, it’s OpenNESS and OpenVINO, and FlexRAN for the telco part. And the interesting point is that private 5G is also, I would say, an edge network. The objective is to converge and fuse the edge platform with the 5G edge in one platform, meaning we simplify the edge infrastructure at the manufacturing level and are also able to leverage analytics and data in the same platform. So this platform we have built with Intel is also a way to simplify and address the convergence between telco and IT at the shop floor level. And that is where it’s very powerful, because it reflects one of the key elements we see in the market: the simplification and convergence of the technology.

Christina Cardoza: Great. Now, Martin, I’m wondering if you can add a little bit to that, how you see companies like Intel making this space more of a reality, and how manufacturing organizations can work with these partners to make their deployment efforts a little easier.

Martin Garner: Yes, sure. And we shouldn’t forget that Intel is doing some seminal work on the 5G products themselves, and on the use of commercial off-the-shelf hardware and software to power them, which is very different from the mobile network world we used to have. So that’s quite an important direction of travel.

But just to echo what Philippe said, really, it’s always a team game, because these are big, complicated systems, and no one partner can do everything themselves. And one of the things Intel has done really well, especially in the IoT world, but it’s broader than that, is to organize groups of partners around specific use cases to deliver systems, and I think that the need for that is just going to continue for at least the next 10 years. I think Intel is well placed to do that. There are others who are doing it too, of course, but Intel is one of the good ones.

Christina Cardoza: Thanks for that, Martin. We’re talking about all of these changes happening over the next couple of years, so I’m wondering how you can actually start approaching 5G today and ensure the work you’re doing on your journey is going to make sense in the future. Sander, can you provide a little more insight into that?

Sander Rotmensen: Yes. So what we are doing currently is developing our own product solutions for the private 5G area: a 5G core from Siemens, including a RAN system from Siemens. But we also want to give customers early access. So far we have a couple of proofs of concept running in our own factories, as well as some outside of Siemens, but we’re actually taking this to the next level at the Hannover Fair this year. The 5G smart venue was opened, and we equipped Hall 9 of the Hannover fairgrounds with a private 5G network and invited customers to come rent the hall and start testing their applications without the need to invest in a full 5G setup themselves. So they can see if there are any benefits for their application, and we do this especially to help out small and medium enterprises. Because imagine you’re an AGV builder and this is your product, and you want to sell it to a large car manufacturer, and they say, “Yes, I’m very interested in your product, but does it work with 5G?” and you cannot answer. That’s where these opportunities are very valuable: they can rent Hall 9, test their solution with our 5G network, and actually show their customer, “You see, it works with 5G. A Siemens 5G modem is running here on our prototype, and this will definitely also work in your facility.” So it’s really about giving access to the technology to all players in the market, because such innovative solutions need to be accessible to all. And what we need to ensure is that this technology is driven from Germany, of course, because we are one of the industrial leadership countries in the world.

Christina Cardoza: Unfortunately, we’re nearing the end of time for our conversation, but one last topic that I want to touch upon is we’ve been talking about 5G, and it’s very clear, we’re still at the beginning of this, but at the same time, conversations are starting to happen around 6G. So, Martin, I’m wondering, when do we need to start thinking about 6G?

Martin Garner: I would say 6G is not a reason to stop doing anything, and don’t use 6G as an excuse to not buy 5G or not investigate 5G. If we thought 5G was new and young, 6G is extremely new and young. So I would say get on with it, with the systems that we have today, and worry about 6G when it arrives.

Christina Cardoza: Great, it feels like we’ve only scratched the surface of this topic. There’s so much that goes into 5G and so many opportunities ahead to look forward to. So before we go, I just want to throw it back to each of you, if you have any final key thoughts or key takeaways you want to leave our attendees with today. Philippe, I’ll start with you.

Philippe Ravix: For the takeaway, what we say is that manufacturers are moving to a manufacturing-as-a-service strategy or concept. Until now, a company with plants all over the world has seen each plant as a local, siloed capability. In the future, a company with 200 plants will see all of those plants as pooled production capabilities, and it should be able, in real time and with the right agility and speed, to say: I will produce at this location and this location; I have an issue at the plant in that location because of sustainability, the cost of energy, the cost of raw materials, or the supply chain, so I need to produce at this other plant instead. It’s about having that kind of global flexibility, seeing all the plants and factories as a kind of virtual manufacturing plant, which is what we call manufacturing-as-a-service. This is how we consider the future.

Christina Cardoza: Thanks for that, and Sander, anything you’d like to add?

Sander Rotmensen: Yes, of course. I think from my side, it’s quite simple. Look at your factory, see if you have use cases, and see if 5G could be a benefit for you. Make sure you start thinking about it early on. Make sure you have a concept for where you want to enhance things, where you could do better than you do today with a new technology. And from my personal perspective, please start using it, because it’s going to be a tremendous technology with lots of opportunities coming up in the next decade.

Christina Cardoza: Again, last but not least, Martin, what do you want to leave our attendees with today?

Martin Garner: Thanks, Christina. Well, I would say we’ve heard how early it is in the 5G journey, and we also know that industrial buyers have slower buying cycles than many sectors, so I think the next three to four years may feel a bit frustrating for suppliers in the market. The message really is: don’t allow yourself to get frustrated. Don’t keep chopping and changing the marketing messages because the last one didn’t work. Just keep going and work the relationships that you have, and I think over three to five years it will start to come good. But don’t get fed up in the meantime.

(On screen: Thank You Slide pointing attendees to visit insight.tech to learn more)

Christina Cardoza: Perfect. Well, thank you all again for joining the webinar today and for the insightful conversation.

I also want to thank our audience for listening today. If you’d like to learn more about 5G and its role in the future, I invite you all to visit insight.tech where we have a wealth of articles and podcasts on the subject.

Until next time, I’m Christina Cardoza with insight.tech.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

Systems Integrators Consortium Offers Smart-Video Solutions

Security cameras traditionally have played a passive role, providing footage after the fact to piece together an incident. But new systems use sensors and intelligent video analytics for early detection—and even prevention.

Cameras with artificial intelligence (AI)-powered capabilities such as sound and motion detection and people counting can activate immediate responses to a variety of events. Cameras also transmit data to the cloud for review later through video content analytics to drive decisions that strengthen security strategies.

Just about every sector, from retail to transportation to public safety, has started using AI-enabled camera systems, creating new business opportunities for systems integrators and the customers they serve.

But as so often occurs, opportunity brings challenges. AI cameras include cloud, IoT, and edge elements that AV integrators have limited or no experience with.

“A lot of new entrants into the market are bringing cloud solutions into the security and the AV space,” says Matt Barnette, CEO of PSA Network, a global systems integration consortium. “How do you deploy, configure, troubleshoot, and manage these systems remotely?”

That’s where his organization comes in, playing the role of distributor and matchmaker—and much more.

The organization connects member companies with one another and with technology vendors. It helps members choose advanced security analytics solutions and learn the technology so they can fill evolving customer needs and grow their businesses.

PSA supports its members in five major ways—education, access to product, networking, financing, and services. Companies join PSA because “they have a need in one or more of those five areas. Whether it’s financial support or the education, they want to be part of that,” Barnette says.

Alongside access to advanced #video #technology and solutions, education is arguably @PSASecurity’s most valuable role. via @insightdottech

From Tech to Training to Services

PSA maintains relationships with more than 200 technology suppliers, including companies such as Johnson Controls, OpenEye, and Bogen. Managing these relationships, testing products, and validating solutions are key to the company’s mission.

“Integrators will often call us and say, ‘We’ve got this need. What do you guys recommend?’” Barnette says. “They look to us to keep a finger on the pulse of the industry and the new solutions that are coming into the market.”

This requires a delicate balance between gauging market need, product capabilities, and integrators’ readiness for new types of AI technologies. “They don’t like when we bring products in that have no relevance, aren’t really a fit, or are too much on the bleeding edge,” states Barnette.

PSA also has an Emerging Technology Committee, with representation from a dozen member companies, that meets monthly to review new solutions. “They’ll typically get their hands on the product, they’ll do testing, and then report back that, ‘Yeah, we think this is pretty cool. We should bring this on’ or ‘It might be a great product, but it doesn’t really fit the marketplace. Maybe come back again in six months or a year.’”

As part of supplier management, PSA works to navigate supply chain issues, often taking a “squeaky wheel” approach with vendors. “We’re on the phone with our suppliers every week to remedy inventory backlogs,” Barnette says.

Alongside access to advanced video technology and solutions, education is arguably PSA’s most valuable role. If consortium members don’t know how these systems work, they can’t successfully sell or deploy them. So, in partnership with technology leaders like Intel®, PSA provides members access to certification courses and other training to shorten the learning curve. For example, Intel supplies members with toolkits, products, and services to build solutions leveraging AI, computer vision, and other advanced technologies.

PSA members have other needs besides supplier management and education. For many, access to financing is a priority. The company handles purchasing and provides 90-day terms, credit limits, and free shipping on products.

PSA also offers services around marketing, business development, HR, and legal, Barnette says. “We have a network of business solution providers that provide those functions, or we do it natively ourselves.”

Partner Ecosystem Collaboration Expands Markets

Networking is another big benefit of membership. The company organizes special-interest groups where, for instance, salespeople from around the country gather to talk about the issues they face, share solutions, and discuss industry happenings.

Sometimes member companies collaborate to leverage one another’s expertise. For instance, integrators that carry physical security solutions pair up with cybersecurity solution providers for specific projects. And this is key to PSA’s mission—as technology evolves, members can meet customer needs through an ecosystem of integrators, solutions providers, and all the goodness PSA has to offer.

“We’ve identified who serves the market and bring them together. We help them help each other,” Barnette says. “And we also help on the back end through relationships with the vendors, so integrators can go and do their thing. This isn’t about product fulfillment. We call ourselves a consortium because we do a whole lot more than that.”

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Product Defect Detection You Can Count On: With Mariner

Listen on:

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

The time is now for manufacturers to start moving toward a smart factory before being left behind by their competitors. But where do they begin? The challenge is that there’s no right answer. To come up with workable solutions, manufacturers must clearly understand the operational problems they face.

For example, many manufacturers still conduct manual defect detection on the production line. This is not only expensive and time-consuming but often leads to inaccurate results. One way to address this issue is to apply machine vision and deep-learning models to smart cameras and automate the quality inspection process.

In this podcast, we will explore changes happening on the factory floor, what makes a smart factory run, and how machine vision and AI improve the product inspection process.

Our Guest: Mariner

Our guest this episode is David Dewhirst, Vice President of Marketing at Mariner, a provider of technology solutions that leverage IoT, AI, and deep learning. Prior to joining Mariner in January 2021, David cofounded marketing agency ThreeTwelve in 2011, where he worked for almost 11 years. At Mariner he leads strategic planning, execution, and oversight of all marketing initiatives.

Podcast Topics

David answers our questions about:

  • (2:15) What is a smart factory?
  • (3:56) How manufacturers have adapted to digital transformation
  • (7:43) Getting started on the smart-factory journey
  • (9:32) Computer vision vs. machine vision
  • (13:47) Importance of AI for product defect detection
  • (17:15) What to do when there is lack of IT support
  • (19:20) Data processing in the cloud and at the edge
  • (22:41) Working in a partner ecosystem

Related Content

To learn more about defect detection, read A Guaranteed Model for Machine Learning. For the latest innovations from Mariner, follow it on Twitter at @MarinerLLC and LinkedIn at Mariner.

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about machine vision for quality assurance with David Dewhirst from Mariner. But before we jump into a conversation, let’s get to know David. David, welcome to the podcast. Thanks for joining us.

David Dewhirst: Thank you. I’m happy to be here.

Christina Cardoza: What can you tell us about Mariner and your role there?

David Dewhirst: Sure. I am Vice President of Marketing for Mariner. I’ve been in that role full time for about a year now; as a consultant, a little bit longer. My background is in IoT marketing, so I moved from that generally to Mariner in particular about a year ago. Mariner, we like to call a 20-year-old startup. Our roots are in writing software for data-warehousing applications, and we’d been doing that for decades, really. About three years ago we made a big pivot. We saw the opportunity in AI, and in particular in how AI connects to camera systems or machine vision systems on the factory floor, and how we can use that to help manufacturers. So about three years ago we made that big pivot away from being a data-warehouse software provider to really leveraging our expertise in data science and AI to provide applications for manufacturers that are aimed at reducing their cost of quality.

Christina Cardoza: I love that pivot—looking at manufacturers and how they can use their camera systems to digitally transform—because at insight.tech that’s been a big topic. Going through this digital transformation, the manufacturing industry has been pressured to change, and one of the things people talk about is becoming a smart factory, becoming smart manufacturers. So I’d love to kick the conversation off there and talk about what we mean when we say “smart factory.” What are manufacturers trying to achieve, and what goes into that?

David Dewhirst: Well, if you ask 100 practitioners how they look at smart factories, you will probably get 100 different answers, which makes for a great podcast, right? When I talk about a smart factory (or a connected factory, or Industry 4.0, you’ll hear that too; they’re analogous to me), it’s really about how we take all of the information that’s inherent on the factory floor, connect everything together, and get useful results out of that data. I like to draw a distinction between data and information, because there is a distinction there. Data is all around us; it’s everywhere. Data is just the inputs, right? We need to somehow transform that into information so that we can do useful things with it. So that’s how I look at a smart factory: gathering that data, processing the data you’re collecting, and then doing something useful with the results that you’ve processed into information. It’s availing ourselves of sensors and technology that we haven’t had before to really advance the state of the art in manufacturing.

Christina Cardoza: Yeah, I love that point. Sort of, we know where we want to go, but how to get there and how to connect all these pieces together isn’t as clear as the end goal. So over the last couple of years, manufacturers have obviously been under a lot of pressure to move faster and streamline their operations. How do you think they’ve handled these changes? How well are they doing on this journey to becoming that smart factory?

David Dewhirst: It can be hard, and there is a high project-failure rate in this space. From my observations, and I believe I’ve seen data that backs this up as well, what happens many times is that everybody kind of knows that becoming a smart factory or connected factory, these Industry 4.0 initiatives, is soon going to be table stakes. You’re going to have to do this because all of your competitors are doing it, and if all of your competitors are doing it and you’re not, you’re going to be left behind. People sense that. People know that this is just the arc of technology in manufacturing history; we’re always progressing through different technological innovations, and this is really just another evolution of that. This is Evolution 4.0, if you want to call it that: Industry 4.0.

So people know that and they want to get on board with it, but they don’t quite know what to do a lot of times. So when you see the projects fail, in my observation, it’s because they haven’t thought through actually what they’re trying to do. They know they should do this cool thing, and so they just go out and they find something to do that may be cool, but it may not necessarily be solving a problem.

So our solution is very pointedly aimed at improving defect detection in the factory. That’s one kind of use case you can find. Like, “Oh, we’re having quality-cost problems.” So then you go find an appropriate solution to ameliorate that problem. That’s how I think any smart factory initiative should proceed. If you’re charged with digital transformation in your factory, find the use case that may not be the coolest thing you could do, but solves the biggest problem.

Because one of the things you’re going to have to do as the digital transformation person is to get the people below you on board: the engineers who are impacted by it. You’re also going to need to get the decision makers who cut the checks on board with it. If you’re stuck in the middle, you need a clear use case so you can say, “Folks down here, this is what we’re doing. It’s going to make your life better. Folks up here, who write the checks, this is going to make your life better because we’re going to save you a lot of money.” And ultimately, the person at the top is concerned with shareholders and stakeholders. It all comes down to money for somebody, somewhere at the top.

So find those use cases where you can sell your project above and below. And then you’ll be on a clear middle path towards smart factory. And from there, maybe you’ll identify other use cases that you can tackle, but start with the big, hairy problem and see if you can figure it out. And that has not only the benefit of helping you sell, but it also should help you find the expert to solve that particular problem. So if it’s a defect-detection problem, you can go looking for companies, like Mariner, who do that. Or if it’s some other kind of problem—this machine X keeps breaking down, we don’t know why, but we think we can generate analytics—then you go find the expert who does that kind of thing. So clearly identifying your use case will help you sell it, and will also help you solve it.

Christina Cardoza: Now, clearly, every factory floor is different from one another, and every manufacturer is building different things, doing things differently. But are there any best places to get started or to identify those use cases? Any areas within the manufacturing floor that you think would provide the biggest benefit to some of these changes?

David Dewhirst: Talk to the people who have the problems. I mean, if you want to identify the problem, talk to the people who are likely to have it. So talk to the people above you and ask, “What is costing us money?” Figure out what that is, and then figure out what kind of solution fits. If they’re telling you their pain point ahead of time, that lets you go find some kind of solution for that pain point. You can also talk to the people on the factory floor, like the engineers, the boots on the ground, who will often be aware of day-to-day problems. They are just fixing those things, or taking care of them, ameliorating them somehow but maybe not even fixing them; it’s just part of their job. It’s what they do: they show up every day, they’re very good at it, and they make stuff work. They may be suppressing problems that they would love to have a solution for, if you just asked them. So a fine first step, if you’re the digital transformation person, is to ask the people above and below you if there is a particular problem that they know about.

Christina Cardoza: Now, I want to dig a little deeper into this defect-detection use case that you’ve mentioned a couple of times. I know this is big in manufacturing: they want to find the problem before it happens, or before it becomes a bigger problem. One of the ways manufacturers have been doing this along the digital transformation journey is by applying advanced technologies like artificial intelligence and computer vision. But at the same time, we also talk about machine vision being a big part of this area. So can you talk a little bit about the distinction between computer vision and machine vision, and what we’re talking about in regard to product-defect detection?

David Dewhirst: Sure. So when I’m talking about it, or when Mariner generally is talking about it, because we have a very specific use case for defect detection in products on the line in your factory, I tend to use the terms interchangeably: machine vision system, camera system. But when I’m using them in those ways, I’m not talking about machine vision in autonomous self-driving cars, for example; that’s a very different use case. When we’re talking about machine vision or camera systems or computer vision in the factory setting, those are typically fixed cameras in a fixed position of a fixed type. They are selected typically by a vision integrator, or maybe your engineers if you’re rolling your own, but they’re very bespoke to the production line. So they will be fixed; their placement, their lighting, their setup, all the rest of it, will be designed and targeted at the specific product on that production line. That’s what I’m talking about when I talk about machine vision or camera systems in the factory setting: very focused, very engineered camera solutions that look at products on the line for defect detection.

Christina Cardoza: From your perspective, what is the importance of machine vision, as manufacturers head towards the smart factory and digitally transform?

David Dewhirst: The importance is in the ability to improve the quality-control process. So, there is the concept of total cost of quality, right? You are either going to spend money on your factory floor to have good quality that goes out the door, or, if you don’t do that, you are going to have a lot of returns, you’re going to have warranty claims. If you are, for instance, a tier-one supplier to an OEM auto manufacturer, your contract is in danger. They do not tolerate defects in things that you’re shipping to them. Not spending money on the quality costs on the factory floor means you’re still going to spend money on quality costs. It’s just going to be in canceled contracts and bad brand association. You’re going to get a reputation for bad quality, and that’s costly. You’re going to have returns and warranty claims. All those things are costly. So either way you look at it, you’re going to have to control costs.

So the better way to do that, rather than paying the cost of your damaged brand or returns or other things like that, the cheapest, highest-ROI way, is to do your quality work on the factory floor. And this isn’t a new concept; we’ve had quality inspectors forever. Ever since the first assembly line in Dearborn, Michigan, you’ve had people at the end of the line looking at things for quality control. It’s not new. Machine vision systems or camera systems to help do that have also been around for literally decades. They are useful because they can present a very consistent look, piece to piece, part to part. It always looks the same because the camera, as I was explaining, is very fixed and situated. So the quality team can just look at an image now, instead of standing there turning a thing over. Although in a lot of factory settings they’re still doing that: they will pick up a piece, and they will turn it all different ways and hold it up to the light and do a bunch of other stuff. That does still go on. But machine vision has made a lot of strides over the past few decades to eliminate that.

What they haven’t done is use AI. Because AI has just in the past handful of years become an actual useful thing that you can apply to real-world situations, like machine vision problems. So that’s kind of where using machine vision for quality comes from. It comes from the need for it. Like most things in manufacturing, the need was found, the need was addressed, but there are situations where it just doesn’t work quite right, which is where the AI part comes in.

Christina Cardoza: Great. Yeah, that was going to be my next question. So you mentioned we’ve had these cameras that have been able to support quality assurance and quality inspection in manufacturing for years now. But how does AI take it to the next level, elevate it, and help streamline operations or, like you were mentioning, reduce some of that manual work?

David Dewhirst: So it may be useful to back up a little bit. For the past several decades, machine vision systems have used traditional programming, and they’re very good at solving binary problems. Let’s say: is there a hole or is there not a hole in this piece? That’s a binary thing, yes or no. Did we put the sticker on the engine block? That’s a binary thing: we did or we didn’t. Very easy using traditional programming, which relies again on those true/false tests to come up with a true/false answer.

But what happens when your problem isn’t binary? What happens when, instead of just looking for a hole or not a hole, you’re looking at, for example, whether this is an oil stain on fabric or a piece of lint? They’re both kind of fuzzy, right? So you can’t just ask, is there a blotch or not? Because a piece of lint will present as a blotch in that machine vision system, and you can’t just draw a line. Maybe the stain is a little bit fuzzier and the lint is less fuzzy, so let’s draw a line, but that’s still arbitrary, right? You’re drawing some arbitrary line between the fuzziness levels. What happens if the lint is a little bit fuzzier than where you drew the line? That gets called a defect. What happens if the stain is a little less fuzzy than you thought? That will escape, because you might think that it’s lint. It’s very hard to tackle those fuzzy problems using traditional programming languages, and that’s where AI comes in.

With machine learning and deep learning techniques, you don’t need to draw an arbitrary line for a true/false answer. You can train the AI with enough samples of stains and lint, and the AI will learn on its own what the difference is. So you don’t need to apply traditional programming techniques to these fuzzy problems. The AI can come in and solve the kinds of challenges that weren’t really solvable before with traditional programming. That’s the value of AI, and in particular of AI applied to machine vision problems; traditional systems have simply never worked in these kinds of fuzzy situations. With AI, you can often get your machine vision system, your camera system, to do what you hired it to do, when it has never done a good job at it before because it hasn’t had the capability. So that’s the value add of AI: to get your machine vision to do what you thought it would.
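
To make the contrast concrete, here is a minimal sketch of the approach Dewhirst describes: training a small convolutional network on labeled examples of stains and lint rather than hand-coding a fuzziness threshold. The architecture, class names, and synthetic data are illustrative assumptions, not Mariner’s actual model or pipeline.

```python
# Minimal sketch: learn "stain vs. lint" from labeled examples instead of
# hand-coding a fuzziness threshold. Architecture and data are illustrative
# assumptions, not Mariner's actual model or pipeline.
import torch
import torch.nn as nn

class DefectNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # two classes: lint, stain

    def forward(self, x):
        x = self.features(x)  # (N, 32, 16, 16) for 64x64 grayscale input
        return self.classifier(x.flatten(1))

# Stand-in data; in practice these would be labeled crops from the camera system.
images = torch.randn(64, 1, 64, 64)   # 64 grayscale 64x64 patches
labels = torch.randint(0, 2, (64,))   # 0 = lint, 1 = stain

model = DefectNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # a few passes over the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# At inference time the model outputs a score per class. No hand-drawn
# "fuzziness line" is needed; the decision boundary is learned from examples.
probs = torch.softmax(model(images[:1]), dim=1)
print(probs)
```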

Christina Cardoza: That’s a great point. And I think the benefits of AI have been proven, so it’s no longer a hurdle, like you mentioned at the beginning, to get top leadership on board with applying AI to these machine vision cameras. I think the next hurdle has been: do we have the in-house expertise and support to be able to do this? Manufacturing floors typically don’t have data scientists on hand or advanced AI developers, so it can be intimidating trying to apply this to machine vision cameras or on the factory floor. How can manufacturers do this without that IT support available?

David Dewhirst: In our particular case, with Mariner, obviously you’re going to need the image-formation system with pictures of your products. The only thing that we then ask, since we use a tool for this, is that your quality team take all the images you have of your product that show defects, upload them to the tool, and draw a little box around each defect. So that lets your quality team do what they’re good at. They know how to do it; they’ve been doing it for decades, as we’ve talked about, looking at these images and pointing at defects. This is something they’re trained at, something they’re skilled at.

We can take advantage of that, and you let us do the part that we’re good at, which is the data science. So the quality engineers will point out defects or not, and then our data scientists will build the AI model. For our customers, that means you don’t need data scientists on the factory floor; we’ve got you covered there. What will be delivered to you is the solution with the AI model built by our data science team. So you don’t need to have data science at all. You just need the quality people who can point out the difference between defect and not defect. That’s our particular answer to that.

Other companies with other solutions in other spaces will sometimes ship prebuilt models. Those may or may not work, depending on how closely the prebuilt models match your particular situation on the factory floor. If what is shipped with the camera aligns with what is seen on your factory floor, you might not need us at all. You’ll be golden, and you still won’t need to be a data scientist. Where the prebuilt models don’t align very closely with what your actual product is, that’s going to be a good use case for us to provide you the data science as part of our solution.
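
The labeling step Dewhirst mentions, quality engineers drawing a box around each defect, boils down to a simple annotation record per image. The sketch below shows one plausible shape for such a record; the field names and file layout are hypothetical, not the format of Mariner’s tool.

```python
# Hypothetical annotation record produced by a "draw a box around the defect"
# labeling step. Field names and layout are illustrative, not Mariner's format.
import json

annotation = {
    "image": "line3/part_000142.png",
    "labels": [
        {"class": "stain", "bbox": [412, 130, 57, 44]},  # x, y, width, height
    ],
}

with open("part_000142.json", "w") as f:
    json.dump(annotation, f, indent=2)
```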

Christina Cardoza: So where are data collection and all the data processing happening on the factory floor? With a solution like Mariner’s, is this being processed in the cloud? Or, I know a lot of manufacturers have started looking at the edge.

David Dewhirst: It depends. So just in general terms, a lot of things happen in the cloud, and that’s because big data is a hot topic. And if you have 10,000 sensors all over your factory and you’re generating terabytes of information, you are probably not going to have a data farm on your factory floor, right? You’re going to have to do that in the cloud. So you have all those sensors collecting all that information, and it gets pushed up to the cloud.

In machine vision there’s a little bit less reliance on the cloud. I’ll talk about Mariner and our Spyglass Visual Inspection solution just as a useful example; SVI, you’ll hear me refer to it. SVI is actually a hybrid solution. And that’s because, for the real-time defect-detection work, we don’t have time to make a round trip to the cloud. So to keep up with the production line, that’s all happening on a server in the factory. In these mission-critical applications, you should also think carefully about doing everything in the cloud, because it does happen—and I have literally seen it happen with my own eyes—that a backhoe cuts the T1 cable outside the factory, and you’re not on the cloud anymore. You’re just sitting on a piece of dirt. So think carefully about doing mission-critical stuff completely in the cloud. That’s why we do our actual defect detection and AI-inference work on the factory floor. Even if you lose your internet connection, your production isn’t shut down. Your factory isn’t shut down. We can continue to do that inference.

Now we do also make use of the cloud. So SVI is designed to run headless without anybody standing around, but engineers can go back and review the decisions that the AI has made. If the AI got something wrong, the engineers can correct it. That will go up to the cloud. So we are accumulating some stuff in the cloud. And eventually, if the AI model needs to be retrained, we can do that in the cloud because it doesn’t require real-time connectivity to do that. And then it can get pushed back down to the edge. So SVI in particular is a hybrid, edge/cloud solution.

And I think that’s a good way to go for any application that you have that has a component that needs to be done real time and quickly. The edge is really where you’re going to want to do stuff. The cloud becomes a great place to do, well, data storage obviously, but all of the more computationally expensive things that take a while to run—that can all be done in the cloud, and that’s fine. So yeah, I would encourage anyone just to look at a hybrid scheme instead of entirely cloud—or entirely edge of course.
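
The hybrid pattern Dewhirst describes can be sketched in a few lines: inference runs locally so the line keeps moving even if connectivity drops, while results are queued for asynchronous upload to the cloud for review and retraining. The model call and uploader below are stubs; this is the shape of the design under stated assumptions, not SVI’s implementation.

```python
# Hybrid edge/cloud sketch: local, real-time inference plus best-effort cloud
# sync. The model and uploader are stubs, not SVI's implementation.
import queue
import threading

upload_queue = queue.Queue()  # results awaiting cloud upload

def run_inference(frame_id):
    # Stub for the on-premises AI model running on a factory-floor server.
    return {"frame": frame_id, "defect": frame_id % 7 == 0, "score": 0.93}

def cloud_uploader():
    # Drains the queue in the background. If the network is down, items simply
    # wait in the queue and production is unaffected.
    while True:
        result = upload_queue.get()
        if result is None:  # sentinel to stop the worker
            break
        print(f"uploaded {result}")  # in practice: HTTPS POST to the cloud

worker = threading.Thread(target=cloud_uploader, daemon=True)
worker.start()

for frame_id in range(20):  # stand-in for the camera frame loop
    result = run_inference(frame_id)  # the real-time decision happens locally
    if result["defect"]:
        print(f"reject part at frame {frame_id}")  # actuate immediately
    upload_queue.put(result)  # cloud sync is best-effort, never blocking

upload_queue.put(None)
worker.join()
```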

Christina Cardoza: So, cloud/edge computing, AI—these are all huge fields with lots of players in this ecosystem. So I’m wondering, with Mariner’s SVI solution, how do you work with other partners in this ecosystem to make it all come together?

David Dewhirst: Sure. In terms of our particular solution, start to finish, we have a partner ecosystem that we work with, let’s call it. Number one, we don’t sell cameras; we are an AI software-as-a-service solution. If you need cameras, we work with a vision integrator who will get you the right camera. But, by and large, we don’t care what the camera is. We can make use of any camera that you already have, or work with you to source one. So, up front, we work with the vision integrator if we have to. That’s partner number one.

Partner number two, we work very closely with Intel® and Nvidia, both on the factory floor. And that’s because we’re doing this real-time inference work with AI on the floor, so we need some powerful processing capabilities. It’s AI software as a service that, ironically, will arrive to you on a server box; that box is included as part of the solution. We do that because we can build those server boxes to do what we want. So we have Intel® Xeon® chips in there for really muscular, beefy processing, and we have Nvidia cards in there for extra GPU power. We partner in that sense with those hardware companies: what are the best GPUs we can get, what are the best CPUs we can get to handle this workload? Those are the partners on the factory floor.

We also partner on the cloud with Microsoft. So all the cloud stuff that we’re doing is typically in Azure. And that gives us—there’s a lot of prebuilt services and a lot of other capabilities in Azure that we can make use of and be certain about security and speed and all the other stuff that you might worry about in the cloud. So from front to back or top to bottom, however you’re going to architect that in your mind, we do have a complete array of partners that we are going to market with, for sure.

Christina Cardoza: Great. And I should mention the IoT Chat and the insight.tech program as a whole are owned by Intel. So it’s great to see how technology is being used with companies like Mariner and on the factory floor. You mentioned GPUs. I know with Intel’s latest release of OpenVINO™, their AI toolkit, they’ve sort of made it easier to work with GPUs or to not work with GPUs. Are you guys leveraging OpenVINO in this solution at all as well?

David Dewhirst: That’s a question for the folks down in Prod/Dev. And let me say, I don’t know, but I would be surprised if they weren’t, because they’re always looking at the next best thing that will let us scale up capabilities for customers. And I know OpenVINO and tools like that will help with that. So I’d be surprised if they weren’t; let me just put it that way.
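
For readers curious what that kind of integration looks like, here is a generic sketch of running a vision model through OpenVINO’s Python runtime. The model path, device choice, and input shape are placeholders; this is not code from Mariner.

```python
# Generic OpenVINO inference sketch (not Mariner's code). The model path,
# device, and input shape are placeholders for whatever your model expects.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_model.xml")   # an IR model exported earlier
compiled = core.compile_model(model, "AUTO")  # let OpenVINO pick CPU/GPU

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in image
results = compiled([frame])                   # run one inference
scores = results[compiled.output(0)]          # per-class scores
print(scores)
```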

Christina Cardoza: Absolutely. So unfortunately, we’re nearing the end of our time, and I know this is such a big topic that we could probably go on for another 30 minutes to an hour. But before we go, David, I want to make sure we covered everything related to this conversation. Is there anything else you wanted to add, or anything you feel we didn’t get a chance to go over yet?

David Dewhirst: No. I will just encourage people: again, you may not need Mariner’s solution, and you may or may not need AI, given your use case, but you are going to need to move forward with industrial IoT of some kind. It’s just too big and too out there, and your competitors are doing it. So I encourage people to think about the use cases and the situations that are right for them. Find that hook and get in, and don’t be the last one. Find the area that will give you value, and move forward with that, because ultimately you’ll be happy that you did, on a lot of fronts, for sure.

Christina Cardoza: Great, great final takeaway there for our listeners. With that, I just want to thank you again for joining the podcast today.

David Dewhirst: Yeah, my pleasure. Thank you for having me.

Christina Cardoza: And thanks to our listeners for tuning in. If you like this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Overseeing Critical Infrastructure with Video Analytics

In March 2022, vandals created havoc in an Oklahoma electrical substation when they damaged a transformer and cut off power to thousands of residents. This is just one example of the many intrusion incidents U.S. utilities deal with constantly, causing between $250,000 and $1 million in damages annually.

And intrusions are not the only challenges utilities face. The power utility industry deals with a range of complex challenges that put an immense strain on one of their most valuable assets—operations.

The State of Today’s Critical Infrastructure

First, there’s the problem of aging infrastructure. A whopping 70 percent of power transformers and transmission lines in the U.S. are older than 25 years. And 60 percent of circuit breakers are 30 years or older, according to John Smerkar, Global Director of Smart Spaces and Lumada Video Insights Marketing at Hitachi Vantara, a leader in IoT and digital innovation.

The creaking infrastructure could not be coming at a worse time. Accelerating climate change is straining the system, leading to more frequent high-impact storms and more extensive damage as a result of each storm.

But not only is the infrastructure outdated, so is the process to maintain and manage it. Much of the infrastructure inspection process is still manual—which means by the time operations teams catch any problems or issues, it’s already too late.

All of this results in increasing frequency and length of outages. For instance, U.S. customers on average experienced more than eight hours of outages in 2020.

And demand for electricity is also expected to increase with the move toward electric vehicles—a 38 percent increase in the U.S. is expected by 2050—placing constraints on critical infrastructure.

Capping off the challenges is the industry’s talent shortage. More than a third of the country’s 400,000 electric utility employees are headed for retirement, Smerkar explains.

As a result of these complexities, utility organizations struggle to manage and monitor their critical assets.

“COOs are asking, ‘Which core assets of my critical infrastructure are going to fail?’ ‘How will I continue to reduce my operational cost while increasing my inspection frequency?’” Smerkar says.

Given that the pressures are relentless, power utilities could use a smarter approach to resource allocation. The answer: intelligent infrastructure monitoring.

Intelligent infrastructure monitoring leverages AI and #video #data and moves it into the realm of analytics to help #utilities get ahead of problems. @HitachiVantara via @insightdottech

Introducing Intelligent Infrastructure Monitoring

Today, most power utility stations monitor their facilities using video cameras. The problem is that the traditional way of using these closed-circuit television (CCTV) systems is not an effective deterrent.

For example, maintenance staff are not always aware when intruders enter restricted areas and realize the problem only after the damage has already been done. Similarly, utilities cannot accurately predict when a circuit breaker is about to blow. Traditional methods merely help in forensic analysis after intrusions have occurred or when a circuit breaker has already malfunctioned.

Intelligent infrastructure monitoring leverages AI and video data and moves it into the realm of analytics to help utilities get ahead of the problem.

For example, with the Hitachi Intelligent Infrastructure Monitoring solution, utilities can cut down on intrusions like the one in Oklahoma. They can monitor sites and equipment from virtually anywhere. Intruders crossing into restricted areas are recorded and authorized staff are notified for immediate action.
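
The core of that alerting logic can be illustrated with a simple geometric test: given a person detection from the video analytics, check whether its position falls inside a restricted-zone polygon. The sketch below is a generic illustration; the detector output and notification hook are stand-ins, not Hitachi’s implementation.

```python
# Restricted-zone alerting sketch. Detector output and the notification hook
# are stand-ins for the video analytics pipeline, not Hitachi's code.
def point_in_polygon(point, polygon):
    """Ray-casting test: count how many polygon edges a rightward ray crosses."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Restricted zone around a transformer, in camera pixel coordinates.
restricted_zone = [(100, 80), (400, 80), (400, 300), (100, 300)]

# Stand-in for detections produced by the video analytics.
detections = [
    {"label": "person", "center": (250.0, 150.0)},  # inside the zone
    {"label": "person", "center": (500.0, 400.0)},  # outside the zone
]

for det in detections:
    if det["label"] == "person" and point_in_polygon(det["center"], restricted_zone):
        print(f"ALERT: intruder at {det['center']}, notify on-call staff")
```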

In addition to monitoring for intrusions, utilities can conduct remote monitoring and inspection of assets and move toward autonomous operations in the future. With the solution, they can check on the health of each substation more frequently, and with advanced video capabilities inspect critical infrastructure from all angles by panning, tilting, and zooming in. This allows them to conduct preventive maintenance before problems escalate into malfunctioning equipment and cause days of power outages.

In essence, utilities are finding that they can bank on a familiar entity, video data, and use it more proactively to solve their most pressing challenges, instead of traditional methods that enable them to react only when something has already failed.

So promising is this approach that the surface inspection and vision market is forecast to increase at a compounded annual rate of 7.13 percent and reach $5.95 billion by 2028.

Putting the Pieces Together

Improvements in technology have been spurring growth in the surface inspection and vision sector, according to Smerkar. “The reason behind the overwhelming effectiveness of intelligent infrastructure monitoring is that artificial intelligence and machine learning and computer vision have been making impressive strides as well,” he says.

As part of intelligent infrastructure monitoring, power utilities use the live streams from their CCTV feeds as video data, along with additional data gathered from humans, drones, or robots. Utilities also conduct regular thermal scans on assets to ensure critical equipment is not overheating and at risk of failure. Data from these various edge points feed into the cloud for processing. Hitachi delivers insights from these analytics through its visualization suite.

Using the Hitachi Intelligent Infrastructure Monitoring solution, operators at power utilities can “channel worker resources more efficiently and obtain real-time views of critical infrastructure components from anywhere at any time,” Smerkar says.

In addition, they can receive notifications when a reading from a particular piece of equipment is out of its normal operating range and drastically reduce system downtime. Equally important, says Smerkar, “all insights are through a single pane of glass.” Whereas before operators had to log into multiple systems to keep track of equipment, the visualization suite corrals all real-time data insights into one place.
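
The “out of normal operating range” check behind those notifications is conceptually simple: compare each reading against the equipment’s normal band and alert on any excursion. The thresholds and readings below are invented for illustration.

```python
# Illustrative out-of-range alerting. Thresholds and readings are invented
# examples, not values from any real substation.
NORMAL_RANGES = {
    "transformer_temp_c": (20.0, 85.0),
    "breaker_current_a": (0.0, 1200.0),
}

def check_reading(sensor, value):
    """Return True and alert if a reading falls outside its normal band."""
    low, high = NORMAL_RANGES[sensor]
    if not low <= value <= high:
        print(f"ALERT: {sensor} = {value} outside normal range [{low}, {high}]")
        return True
    return False

check_reading("transformer_temp_c", 96.5)   # triggers an alert
check_reading("breaker_current_a", 450.0)   # within range, no alert
```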

Hitachi’s visualization suite, which runs on the Microsoft Azure stack, uses Intel® NUCs in its smart cameras. All edge gateways, which preprocess video data, run on Intel, Smerkar says.

The video data-driven solution need not be restricted to use cases for electricity substations alone. “Critical infrastructure is everywhere,” Smerkar says. “It could be roads, highways, bridges, stadiums. The same principles apply,” he adds.

Hitachi’s Smart Spaces and Video Intelligence (SSVI), which is part of the Intelligent Edge program, uses video insights to monitor and inspect assets across a variety of “spaces”—airports, bridges, stadiums, theme parks—and in multiple industries like healthcare, education, and manufacturing. For example: “Putting a camera in front of a rail car as it rolls along the rail helps look for cracks,” Smerkar says. Using video data allows a variety of industries to “see” where the fault lines lie.

The key word in all of this is “intelligent,” Smerkar says. “Instead of having workers going out and performing inspections on assets that don’t need to be monitored right now because they’re in good health, we can have them tackle critical infrastructure that is close to end-of-life, where they need to fix or replace these units to prevent downtime,” he adds.

Using the resources you have strategically instead of a one-size-fits-all approach is a proactive way to meet challenges head-on. And with increasing strains on aging infrastructure, these prescriptions come not a moment too soon.


This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Talking Up Self-Serve Patient Check-In Kiosks in Healthcare

So you walk into a doctor’s office and are greeted by—a kiosk?

Such is life in the post-pandemic 21st century, where every process is tech-optimized, and every person is germ-aware. And where, if done right, a kiosk can be both.

Healthcare AI technology is already commonplace in the industry, but bringing it into the waiting room can help improve the patient experience. And more and more, self-service kiosks integrate sensory capabilities that make them more “service” than “self.”

For instance, recent advancements in computer vision now allow kiosks to “see” the subjects of their interaction and, in some cases, even detect human emotion. There’s also touch sensitivity, which most kiosks already support as an input. And now, kiosks can keep up with conversational flow.

“Instead of going up to a receptionist and having to wait around for them to finish the phone call or finish entering whatever they need to do on the computer, patients can just go up to the kiosk. In the same way they would do when speaking to a receptionist, they just say their name or date of birth and get themselves checked in,” says Salwa Al-Tahan, Research and Marketing Executive for Sodaclick, a digital content and AI experience company.

Enhancing Healthcare Kiosks with Conversational AI

By far the biggest gains in human-machine interaction over the past several years have come from natural language processing (NLP) services. Integrated voice control engines now permit self-service kiosks to offer hands-free input for those unable to see or touch equipment or in situations when doing so is undesirable. 

And these voice services can support processes like patient check-in from end to end, which is also a big win for relieving staff of administrative tasks and shortening wait times for patients. In the big picture, it shores up the entire healthcare infrastructure at a time when medical staff are overwhelmed by high patient volumes and salaries are on the rise.

Not only do voice services make kiosks a suitable substitute for regular staff, but in many cases, they perform at least as efficiently as human personnel and carry far less risk of spreading illness (Video 1).

Video 1. Natural language processing (NLP) allows self-service patient check-in kiosks to offer a touch-free, omnichannel experience to patients—increasing efficiency and lowering health risk. (Source: Sodaclick)

Let’s Talk About Self-Service Voice Check-In Systems

But while the addition of conversational AI platforms certainly makes self-service kiosks “smarter,” they can’t simply be plugged into the surrounding infrastructure and work. It’s important to note that the kiosk itself is just the front end of a much more intricate, cloud-connected system.

Integrated #voice control engines permit self-service #kiosks to offer hands-free input for those unable to see or touch equipment. @sodaclick via @insightdottech

While the kiosk interface must be intuitive and engaging, most of the conversational AI processing in systems like a healthcare self-service kiosk doesn’t take place on the kiosk at all. User voice inputs are captured by the kiosk on the edge, but then transmitted to cloud-based automatic speech recognition (ASR) and NLP models that perform a few steps in quick succession.

  • Translate the voice input from speech to text (STT)
  • Cross-reference the text-based input with hospital databases
  • Convert the output using text-to-speech (TTS) engines
  • Send the result back to the kiosk where it is delivered as an audio response to the user

Early in the process there can also be a confirmation step, where the cloud models confirm the received voice input against what was captured by the kiosk. But clearly, most of the intelligence is remote from the front-end user interface, and connecting it with voice services and databases containing relevant data, like patient and schedule information, requires significant integration work.
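
Stripped of vendor specifics, the round trip looks something like the sketch below. The STT, database lookup, and TTS calls are stubs standing in for cloud services; none of this is Sodaclick’s actual API.

```python
# Sketch of the kiosk voice round trip described above. The STT, lookup, and
# TTS functions are stubs for cloud services, not Sodaclick's actual API.
def speech_to_text(audio):
    return "jane doe, march 3rd 1980"  # stub for a cloud ASR/STT service

def lookup_appointment(utterance):
    # Stub for cross-referencing the transcript with hospital databases.
    return {"patient": "Jane Doe", "time": "10:30", "clinic": "Radiology"}

def text_to_speech(text):
    return text.encode("utf-8")  # stub for a cloud TTS engine

def handle_check_in(audio_from_kiosk):
    transcript = speech_to_text(audio_from_kiosk)    # step 1: STT
    appointment = lookup_appointment(transcript)     # step 2: database lookup
    reply = (f"Thanks {appointment['patient']}, you're checked in for "
             f"{appointment['clinic']} at {appointment['time']}.")
    audio = text_to_speech(reply)                    # step 3: TTS
    return audio                                     # step 4: back to the kiosk

print(handle_check_in(b"<raw microphone audio>").decode("utf-8"))
```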

Combining all this technology, tuning it to a specific use case and technology environment, and doing so without violating HIPAA or GDPR regulations, is beyond even the most advanced hospital IT staff.

Voice-Enabled Kiosks Spark Conversation

A better alternative is to leverage off-the-shelf technology packages that optimize patient services, reduce cost, and lower health risks in medical facilities. For example, the Healthcare Voice AI Assistant RFP Ready Kit from Intel®, Arrow Electronics, and Sodaclick streamlines the deployment of dynamic patient check-in services on omnichannel kiosks.

“We’re not asking customers to get rid of their current self-service kiosks and buy new ones, but rather upgrade by adding the voice layer with our SDK solution. You can get the SDK, work with our development team, and deploy voice AI ready solutions very easily,” Al-Tahan explains.

Sodaclick’s AI Voice SDK brings conversational AI to digital-signage touchscreens by integrating with onboard HTML or JavaScript code. It also helps overcome ASR challenges like background noise.

Sodaclick offers the SDK with support for more than 85 languages through its partners like Arrow Electronics, an Intel® Solutions Aggregator. With all the components of an immersive kiosk, the Healthcare Voice AI Assistant can identify event triggers and convey custom voice messages in real time.

Aside from the SDK, the Voice AI Assistant can include:

  • An HTML5 content creation platform based on a cloud graphics editor that lets hospitals create complex slides using rich fonts, animations, and external data sources without writing any code.
  • A digital-signage system outfitted with an Intel® Core i5 processor with 16 GB of RAM, microphone and speaker arrays, and a proximity sensor for detecting approaching users.
  • A front-end application that supports omnichannel inputs and integrates with databases using APIs.
  • A voice AI service that hosts all speech services in the cloud and delivers fully private, HIPAA- and GDPR-compliant outputs back to the kiosk.

The Sodaclick content creation platform allows visual display content to be created and updated on the target kiosk in real time. And the AI Voice SDK integration with the front-end application helps support connections with the voice AI service and patient check-in databases for a seamless user experience.

Of course, one of the greatest concerns of healthcare officials considering new technologies is security and data privacy. These also happen to be among the greatest strengths of the Healthcare Voice AI Assistant.

The entire system is based on a containerized Docker architecture, which keeps applications and data segregated from each other. It doesn’t record or log any data, but rather pushes requests and responses across a microservices architecture using the Intel Edge Software Hub.

With built-in considerations for healthcare data security, connectivity, and latency, the Edge Software Hub helps keep operations flowing and reduces the risk of compromise by orchestrating data flows on private servers behind hospital firewalls.

Conversational AI at Your Service

The healthcare space is just one market where Sodaclick’s Voice SDK is already in operation. Grocery stores and quick-service restaurants are also leveraging the technology to enhance customer experience, an indication that conversational AI already penetrates entire swaths of everyday life.

With health consciousness on the rise and changing work preferences, you’ll probably serve yourself more often. But don’t worry, you’ll have help from talking kiosks.

“Voice is the future. And we will be seeing a lot more deployments in all sorts of settings. Everyone’s just been scratching at the surface till now,” says Al-Tahan.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on July 5, 2022.

Empowering Traffic Operators with AI Video Analytics

In 1999, a transport truck traveling through the 7.2-mile-long Mont Blanc Tunnel between France and Italy caught fire, fatally trapping 39 people. While the tunnel was fitted with various video surveillance cameras, traffic operators were not alerted to the problem until drivers started calling in reports—and by then the damage was already done.

Unfortunately, this problem has continued over the years, as European nations have experienced an alarming increase in road deaths, especially in tunnels.

That’s why in 2004, European Union member countries decided to issue minimum road safety requirements for tunnels over 500 meters long, known as Directive 2004/54/EC. Part of the requirements included installation of safety cameras. The idea was that with potentially dozens or even hundreds of cameras in a single tunnel, officials could monitor things like wrong-way drivers, smoke/fire, stopped vehicles, or pedestrians on the road.

But, of course, any good transportation system manager will tell you that streaming traffic footage from camera endpoints generates too much data for human operators to analyze manually. And Directive 2004/54/EC created a reality where multiple streams per camera and multiple cameras per tunnel across most of a continent would need to be analyzed by someone. Or perhaps, something.

Transit authorities need to automate camera footage analysis as much as possible. They need AI video analytics to monitor roadways for potential safety events at the edge.

Traffic Cam Monitoring in Real Life

To give you an idea of Directive 2004/54/EC’s scale, let’s look at a single roadway. The Boulevard Périphérique Nord de Lyon (BPNL) is a 10-kilometer toll road in Lyon, France that connects to three major highways. It consists of four tunnels that span a total of 6 kilometers, two viaducts, and no fewer than 200 traffic cameras.

The BPNL is operated by the Société D’exploitation Du Boulevard Périphérique Nord De Lyon (SE BPNL), whose 50 employees are responsible for managing the toll booths, maintaining the road infrastructure, and monitoring camera feeds for incidents that could present hazards or disrupt traffic flows.

Transit authorities need #AI #video analytics to monitor roadways for potential safety events at the edge. @SprinxTech via @insightdottech

It’s easy to see why this won’t work without automation. If every SE BPNL employee monitored camera footage around the clock, they’d still have to watch four camera feeds simultaneously. Instead of human monitors, the company tried computer vision camera monitoring software based on traditional image processing algorithms. But even these struggled to identify people, objects, and events with enough accuracy to avoid overwhelming operators with false-positive alarms.

“This kind of technology can understand that a blob of pixels is moving, but it is not able to identify a blob of pixels as a pedestrian. It may understand that a blob is the shape of a person, for example, or moving at the speed of a person, but it cannot identify the object as a person,” says Renato Clerici, Co-Founder and CTO of Sprinx Technologies, a mobility video analytics company.

To overcome these challenges, SE BPNL turned to Sprinx, which uses neural networks to detect and recognize vehicles and pedestrians in real time through its TRAFFIX.AI automatic incident detection (AID) software.

“AI and deep learning are much more accurate than standard computer vision technologies,” Clerici states. “Neural networks are trained to identify, recognize, and detect people or vehicles in a picture. And with 3D object tracking technology, it allows us to provide very real, very accurate detection and reduce a lot of false alarms.”
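Sprinx’s 3D object tracking is proprietary, but the reason tracking suppresses false alarms can be illustrated generically. In the hypothetical Python sketch below, an alarm fires only when a tracked object persists across several consecutive frames, so a one-frame “blob” of pixels never pages an operator.

```python
# Generic illustration (not Sprinx's proprietary 3D tracker): raise an
# alarm only when a detection persists across consecutive frames, which
# is one reason tracking suppresses one-frame "blob" false positives.
from collections import defaultdict

PERSISTENCE_FRAMES = 5  # assumed threshold for this sketch

class PersistenceFilter:
    def __init__(self, frames_required: int = PERSISTENCE_FRAMES):
        self.frames_required = frames_required
        self.streaks = defaultdict(int)  # track_id -> consecutive frames seen

    def update(self, tracked_detections):
        """tracked_detections: iterable of (track_id, label) for one frame.
        Returns the tracks that have persisted long enough to alarm on."""
        seen = set()
        alarms = []
        for track_id, label in tracked_detections:
            seen.add(track_id)
            self.streaks[track_id] += 1
            if self.streaks[track_id] == self.frames_required:
                alarms.append((track_id, label))
        # Reset streaks for tracks that vanished this frame.
        for track_id in list(self.streaks):
            if track_id not in seen:
                del self.streaks[track_id]
        return alarms

# Usage: a pedestrian track must survive 5 frames before operators are paged.
f = PersistenceFilter()
for frame in range(5):
    fired = f.update([(17, "pedestrian")])
print(fired)  # [(17, 'pedestrian')] on the fifth consecutive frame
```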

The OpenVINO Road to Automated Incident Detection, Everywhere

Since deploying in the spring of 2020, TRAFFIX.AI has helped SE BPNL reduce false alarms significantly, thanks to high-fidelity analysis that can detect everything from wrong-way drivers, slowdowns, and stopped vehicles to spilled cargo and even smoke or fog (Video 1).

Video 1. TRAFFIX.AI has helped SE BPNL reduce false positives by supporting high-fidelity video analysis of events like sudden loss of roadway visibility. (Source: Sprinx Technologies)

From an end user or system integrator perspective, TRAFFIX.AI’s built-in intelligence makes the system easy to configure and calibrate for specific use cases like those mentioned above. And although the platform’s 3D object detection software is proprietary to Sprinx and the MobileNet SSD neural nets were developed internally using TensorFlow, the software is optimized for edge execution using the Intel® OpenVINO Toolkit.

This means TRAFFIX.AI can run on any CPU-, GPU-, FPGA-, VPU-, or other Intel-based hardware platform, be it an edge computer, PC, or server. It can even connect to intelligent transportation systems (ITS) out of the box for a truly high-performance, plug-and-play AI video analytics deployment experience.
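Sprinx’s networks themselves are not public, but the portability described here follows the standard OpenVINO pattern, sketched below in Python. The model path is a placeholder, and the 300×300 NCHW input assumes a typical MobileNet SSD layout.

```python
# Minimal OpenVINO inference sketch (OpenVINO 2022+ Python API).
# "mobilenet_ssd.xml" is a placeholder for any SSD-style IR model.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("mobilenet_ssd.xml")  # placeholder model file

# The same model compiles for whichever Intel device is present:
# "CPU", "GPU", or "AUTO" to let the runtime pick the best target.
compiled = core.compile_model(model, device_name="AUTO")

# Stand-in for a decoded camera frame, shaped NCHW for a 300x300 SSD.
frame = np.zeros((1, 3, 300, 300), dtype=np.float32)

results = compiled([frame])[compiled.output(0)]
# SSD-style output rows: [image_id, label, confidence, x_min, y_min, x_max, y_max]
detections = [row for row in results.reshape(-1, 7) if row[2] > 0.5]
```

Retargeting from a CPU-only box to one with an integrated GPU is a one-string change, which is what lets the same software stack span the hardware tiers described next.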

“OpenVINO plays one of the main roles in our solution because it runs the neural networks we are using to detect and identify vehicles and pedestrians,” Clerici says. “You can use the existing hardware and it works. It’s very easy to connect a PC or a server with our software to the existing network infrastructure and process the existing camera feeds. The only limit is the number of cameras that can be processed on that hardware, but if you need to process more cameras, you can just add a new PC.”

In installations like the BPNL, Sprinx is running TRAFFIX.AI on Intel® Core i9 and Intel® Xeon® Gold processors that support up to 24 cameras at once. But as Clerici notes, smaller deployments can leverage endpoint targets based on more power-efficient devices like Core i5 or Core i7 processors that support up to 10 simultaneous video streams.
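Combined with the BPNL’s roughly 200 cameras, those per-box stream counts reduce capacity planning to back-of-the-envelope arithmetic, as in this illustrative sketch:

```python
# Rough sizing from the figures quoted above; illustrative only.
import math

cameras = 200  # approximate BPNL camera count

high_end_nodes = math.ceil(cameras / 24)  # Core i9/Xeon Gold class, ~24 streams
small_nodes = math.ceil(cameras / 10)     # Core i5/i7 class, ~10 streams

print(high_end_nodes)  # 9
print(small_nodes)     # 20
```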

Smart-City Solutions: Already Ready for AI Video Analytics

Sprinx’s TRAFFIX.AI software has already transformed some 15,000 computer vision cameras across the roadways of Europe into intelligent video analytics data capture devices. And with the ability to deploy its software on almost any hardware, the company is collaborating with Intel on a not-so-distant future in which the AI analytics software can be deployed directly on computer vision camera endpoints that send real-time alerts straight to on-premises servers or cloud platforms.

That would turn the billions of already-installed cameras around the world into potential AI vision endpoints. From smart-transit systems to smart-traffic data collection to smart-city solutions, the possibilities become limitless almost instantly. And it’s all possible because of the enabling capabilities of the OpenVINO toolkit.

But for now, giving operators like SE BPNL accurate road condition information in real time to save lives is a great place to start.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

ATM as a Service (ATMaaS) Transforms Banking

Data breaches. Dated customer experiences. Unavailable ATMs. These are three big issues for financial institutions (FIs) when it comes to brand reputation and customer loyalty.

In recent years, FIs have seen a shift in consumer trends around the use of self-service technologies like ATMs, ITMs, and digital banking. Now there is a permanent shift in digital banking behavior, as consumers handle more of their banking needs at their own convenience, outside the bank branch.

Ensuring a secure and working self-service channel is vital. But running a multitude of self-service technologies that are always available, with the latest software updates and sufficient cash to meet customer demand, requires specialized skills. And most FIs still own and operate their own self-service channel.

Like many industries, FIs are looking to focus more on core competencies and simplify operations, saving costs and improving efficiencies along the way. One way to achieve this is through “As a Service” outsourcing models. NCR, a leading enterprise technology provider that helps run self-directed banking, has launched its ATM as a Service (ATMaaS) offering to help FIs better manage their self-service channel.

“More than ever, consumers value the convenience of being able to complete transactions at the ATM instead of in the branch—and most transactions can now be completed using self-service. But a poor ATM experience can directly impact customer satisfaction,” says Terry Duffy, Senior Vice President and General Manager of Self-Service Banking for NCR Corporation. “With ATMaaS, FIs can free up resources and focus on other priorities, while still delivering an exceptional ATM self-service experience.”

“With #ATMaaS, FIs can free up resources and focus on other priorities, while still delivering an exceptional #ATM self-service experience.” – Terry Duffy, @NCRCorporation via @insightdottech

Deploying Digital Banking Services

Outsourcing the self-service channel through ATMaaS can be especially impactful for smaller FIs. “Smaller FIs appreciate the ability to scale and compete with larger financial institutions,” says Duffy. “They’re not engineers, they’re not software developers, and they have limited resources. ATMaaS combats this by driving efficiency in resources, costs, and management.”

One FI on the U.S. West Coast recently turned to NCR for help with its fleet of 26 ATMs. Two key employees had left the organization, and the FI was dealing with compliance challenges and dated equipment maintained by a third party.

“Within four months we had deployed completely new technology,” says Duffy. “The operational burden was taken off their hands and modernized technology meant performance and availability improved. ATMaaS replaced the multiple vendors they were managing with a single point of contact, allowing the FI to focus on what’s most important for them—serving their customers.”

Digital-First Solutions Transform Operations

NCR’s ATMaaS is a turnkey program that includes the software, services, and hardware required to deliver a reliable ATM channel. To enable the digital-first experiences today’s customers expect, the ATM channel includes multi-touch displays, digital advertising panels, and improved transaction times. The ATM units run on embedded Intel® processors that facilitate reliability and speed during peak hours. Intel processors allow for a compact, low-power, and rugged computing device, suitable for different temperatures and environments (Video 1).

https://www.youtube.com/watch?v=EP8v8SxBQyM

Video 1. NCR ATMaaS allows FIs to outsource the management, operation, and ownership of the ATM network to free up resources and improve customer service. (Source: NCR Corporation)

Modernizing the technology that delivers the digital-first experiences customers demand was one key objective of the program. Just as important, with a single contract and monthly agreement, FIs hand the burden of accountability to a trusted partner and no longer need to worry about compliance, security, and availability.

Some FIs have outsourced select aspects of the ATM channel, such as hardware maintenance, cash replenishment, and software maintenance, while still maintaining governance of other elements. ATMaaS is an end-to-end solution, consolidating everything under a single agreement, provided by one trusted partner.

And this saves time trying to coordinate a variety of suppliers and enables FIs to focus on their customers. Importantly, NCR provides maintenance and continuous monitoring, ensuring that the network of ATMs operates correctly.

Another advantage of an As a Service approach is keeping up with compliance changes, which can be difficult for FIs to do on their own. The genesis of ATMaaS is simplifying the operational burden to drive business efficiencies, and the company has launched the program in more than 30 countries, with a standard approach to supporting the operations of customers’ ATM networks.

ATMaaS also allows FIs to transform their operations. “How consumers use branches has changed significantly over the last few years, leading FIs to transform the traditional branch format. Some branches are going cashless, using self-service technologies, like the ATM, for routine transactions, as well as other functionality such as account opening and bill payments,” says Duffy. “This allows tellers to shift to more high-value advisory and consultancy roles.”

Moving to an As a Service approach provides benefits to the FIs’ customers, too. “If an ATM goes down, we take responsibility for identifying the machine, the problem, and getting it fixed in a timely manner,” says Duffy. Under the ATMaaS model, NCR commits to and delivers a high-availability outcome across the customer’s network. This gives the end consumer confidence they can perform the transactions they require, resulting in an enhanced customer experience.

“ATM as a Service is increasingly changing the way companies want to do business,” says Duffy. “By using one partner, FIs have a single point of contact for vendor management and day-to-day requirements to keep the ATM channel compliant, available, and secure. The FI can focus its resources on strategic business initiatives, whether that’s bringing new products to market or enhancing the customer experience.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on June 30, 2022.

Manufacturers Unlock AI at the Edge: With Lenovo and ZEDEDA

Listen on:

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Computer vision has already cemented its place in smart manufacturing and industrial IoT solutions. But to unlock the full potential of these applications, more and more processing needs to be moved to the edge. The problem is that there is no one-size-fits-all approach when it comes to edge computing.

In this podcast, we will talk about the different “flavors” of edge computing, how to successfully deploy AI at the edge, and the ongoing role of the cloud.

Our Guests: Lenovo and ZEDEDA

Our guests this episode are Blake Kerrigan, General Manager of the Global ThinkEDGE Business Group at Lenovo, a global leader in high-performance computing, and Jason Shepherd, Vice President of Ecosystem at ZEDEDA, a provider of IoT and edge computing services.

Blake has worked in the industrial IoT space for many years. Prior experience includes companies like Sierra Wireless and Numerex, where he was responsible for product delivery, customer success, and solution development and delivery. At Lenovo, he and his team are focused on edge computing, go-to-market, product development, and product strategies.

Jason has a proven track record as a thought leader in the IoT and edge computing space. Before joining ZEDEDA, he worked for Dell Technologies as a Technology Strategist, Director of IoT Strategy and Partnerships, and CTO of IoT and Edge Computing. Additionally, he helped build the Dell IoT Solutions Partner Program, which received the IoT Breakthrough Award for Partner Ecosystem of the Year in 2017 and 2018.

Podcast Topics

Jason and Blake answer our questions about:

  • (2:50) Recent transformations in the manufacturing industry
  • (5:04) The role of edge computing in industrial IoT solutions
  • (6:52) Successfully deploying AI at the edge
  • (10:20) The tools and technologies for edge computing
  • (18:55) When to use (and not use) the cloud
  • (23:05) Having the proper AI expertise and IT support
  • (31:11) Future-proofing manufacturing process and strategy

Related Content

To learn more about edge computing in manufacturing, read Cloud Native Brings Computer Vision to the Critical Edge. For the latest innovations from Lenovo and ZEDEDA, follow them on Twitter at @Lenovo and @ZededaEdge, and on LinkedIn at Lenovo and Zededaedge.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

 

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about edge computing in industrial environments with Jason Shepherd from ZEDEDA and Blake Kerrigan from Lenovo. But before we jump into our conversation, let’s get to know our guests. Jason, I’ll start with you. Welcome to the show.

Jason Shepherd: Thanks for having me.

Christina Cardoza: Yeah. Thanks for being here. What can you tell us about ZEDEDA and your role there?

Jason Shepherd: So ZEDEDA is—we’re all focused on orchestration of edge computing—so, management and security, remotely, of assets out in the field, deploying applications, understanding the state of the hardware. Take the data center principles and extend them out as far as you can into the field to enable cloud-native development, while also supporting legacy assets. I lead our ecosystem, so I work a lot with strategic partners, I work a lot with industry consortia, serving as our field CTO. And one of my mottoes is, “If it’s fuzzy, I’m on it.” I always find myself on the front end of emerging technologies, so, hence edge right now.

Christina Cardoza: Great. Can’t wait to dig a little bit deeper into that. And Blake, thanks for joining us today.

Blake Kerrigan: Yeah, thanks for having me.

Christina Cardoza: So what can you tell us about Lenovo and what you’re doing there?

Blake Kerrigan: Well, look, I think most people know who Lenovo is—as you know, one of the largest personal compute, and mobile compute, and data center compute hardware providers in the world. But my role essentially here at Lenovo is I manage our edge computing practice.

So here at Lenovo we’re hyperfocused on digital transformation as a whole, for most enterprises. And we feel that edge computing is essentially core to our customer’s journey. And so I’ve been here for about three years. I’m based in Raleigh, North Carolina, and me and my team are uniquely focused, not just on edge computing, but also defining what is our strategy as a company. You know, how do we develop products differently for use cases outside of traditional data center or personal compute. So mainly go-to-market, product development, and product strategy.

Christina Cardoza: Perfect. I love how you mentioned how edge computing is part of a manufacturer’s digital transformation journey. I think that’s the perfect place to kick off this conversation today. No surprise to you two that the manufacturing space has been rapidly evolving over the last couple of years to keep up with the demands of the digital era. So Blake, I’m wondering if you can walk us through what some of those transformations in manufacturing have looked like recently?

Blake Kerrigan: Well, I think recently, I think they look a lot different just even in the last two years—things have had to change quite a bit, you know. Lenovo being a large manufacturer, this is a space that’s very close to home for us. You know, probably some of the largest trends that we see are around computer vision and AI use cases.

So, for the last, probably 15 to 20 years, I think most industrial customers have been uniquely focused around automation—whether it’s a simple process around manufacturing or some sort of a logistics optimization or automation process.

And today, what we’re starting to see is the use of AI on a more binary state in terms of how do you create more efficiencies in some of those processes that already exist. But when you lump computer vision applications and solutions on top of that, we’re starting to see unlocking all sorts of new insights that beforehand we didn’t really have a way to capture some of these insights with some of the sensor technology that existed in the world.

So some of those trends that I see a lot in manufacturing and even in distribution are things like defect detection—there’s all sorts of different safety applications. Usually these were done as kind of point solutions in the past, and with the adoption and transition from more purpose-built compute to general-purpose compute for AI and computer vision, we start to see a lot of unique types of solutions that we’ve never seen before, and they’re getting easier and easier to adopt for our customers.

Christina Cardoza: That’s great. Here at insight.tech, we’ve definitely been seeing all of those use cases, and the opportunity with computer vision and AI just expanding those opportunities for manufacturers. Traditionally they’ve been taking all that data and processing it in the cloud, but what we’ve been seeing is even that’s not enough, or not fast enough, to get that real time insight and to make informed decisions. So, Jason, can you tell us more about why edge computing is playing a role in this now?

Jason Shepherd: Well, I mean, the only people that think that sending raw video directly to the cloud is a good idea are people that sell you internet connectivity. It’s very expensive to stream, especially high-res video, straight over a wide area connection. So, clearly with computer vision, the whole point at the edge is to look at live camera streams or video streams. It could be thermal imaging, it could be any number of things, and look for an event or anomalies in the moment, and only trigger those events over those more expensive connections.

I mean, it goes from manufacturing through of course all different types of use cases, but it used to be where you got someone just sitting there looking at something, and then call somebody if something happens. And now you can have it being continuously monitored and have the intelligence built in to trigger it.

So edge is key there. I mean, the same thing with, like, 5G is a big trend in manufacturing. I know we’re talking about computer vision now, but every new technology is like, “Oh, you know, this is short lived.” Well, 5G actually drives more edge computing too, because you’ve got a super, super fast local connection, but the same pipe upstream. And so we’re going to see more use cases too, where you mash up these kinds of private 5G small cells in a factory with computer vision. And then of course other sensing technologies. But yeah, we’re just kind of at the beginning of it as it pertains to edge, but there’s just so many possibilities with it.

Christina Cardoza: It’s funny that you mentioned the only people that talk about processing in the cloud are the people that it would benefit most, but I think that’s also a real issue in the industry, is that there’s so many people telling you so many different things, and it could be hard to cut through all of the noise.

So Jason, can you walk through what are some of the challenges that manufacturers are facing when going on an edge computing journey, and how can they successfully move to the edge?

Jason Shepherd: Yeah. I mean, there’s kind of in general, like you said, it’s the hammer-nail syndrome. Everyone tells you that they can do everything. I mean, edge is a continuum from some really constrained devices up through kind of on-prem or on the factory floor—say the shop floor up into sort of metro and regional data centers. Eventually you get to the cloud, and where you run workloads across that continuum is basically a balance of performance costs, security, and latency concerns. And I think people are first and foremost are just confused about what is the edge. It’s just this—there’s a lot of edge washing going on right now. And whoever the vendor is, what they sell and where they sell it, that’s the edge.

So for—I think for manufacturers, first and foremost, it’s kind of understanding that it’s a continuum, understanding there’s different trade-offs inherently. If you’re in a secure data center, it’s not the same as if you’re on the shop floor, even though you want to use the same principles in terms of containers and VMs and things like that. Security needs are different. So, it’s concerns around getting locked in. You know, everybody loves the easy button until you get the bill. So the whole thing with the cloud model is, make it really easy to get data in, but then very expensive to get data out or send it somewhere else. And so that’s why we’re seeing this kind of—that’s another reason why we’re seeing this shift. It’s not just about bandwidth and latency and security and all the reasons you see in ads.

So long story short, just navigating the landscape is the first problem. Then when you get into actually deploying things, I mean, these always start with a use case, say, I’m doing, trying to do, quality control or improved worker safety or whatever using computer vision. It always starts with a POC—I figure out a use case, then I’m doing a POC. At this stage, people aren’t thinking about management and security and deploying in the real world. They’re thinking about an app, and we see a lot of experimentation with computer vision applications, and there’s really cool innovation happening. But to take the lab experiment into the real world, that’s also really challenging—camera angles change, lighting changes, contexts switch. Just getting the infrastructure out there and the applications and continuously updating those models remotely. These are those infrastructure things that I think are really important.

I think that the main thing is to break down the problem, separate out your investments in infrastructure from the application planes that you invest in—consistent infrastructure like we’re doing with Lenovo and Intel® and ZEDEDA. We’re obviously focused on infrastructure and building it in a modular way. So, whereas you kind of evolve, you can build new applications, you can take in different types of domain expertise. Eventually it’s about domain expertise with consistent infrastructure. And so I think the key for manufacturers is to break down the problem, work with vendors that are architecting for flexibility, and then you can evolve from there, because no one knows all the answers right now. You just want to build in that future proofing.

Christina Cardoza: That’s a great point that you make: edge is a continuum. There’s no one-size-fits-all approach. There’s no one way of doing manufacturing. Everyone’s building different things and applying technology in different ways. So on that note, Blake, can you talk about how manufacturers can approach this, what they need to be looking at, and how they decide what technologies or path is going be the best for them?

Blake Kerrigan: Yeah. You know, I mean, the first approach I think is—well, even before you get to the POC, I think the biggest challenge is just understanding what kind of business outcome you want to drive with the particular POC, because you also have to scale the business case.

And one of the challenges is, you can build something in a lab and typically the last thing an engineer’s going to think about is cost when they go to develop or deploy the solution. It’s an exponential factor and it’s—in my opinion, and I’m sure Jason would agree with me, that the biggest inhibitor to scale is deployment and management and life cycle and end of life and transitioning from one silicon to another over time as products come in and out of their own life cycles.

So I think the first step is making sure that you understand what kind of business outcome you want to drive, and then keeping a conscious understanding of what the costs are associated with that. And that’s something that we at Lenovo—we work with people more on solution architecture and thinking about what type of resources do you need today? And then, how does that scale tomorrow, next week, and next year, and the next five years? So that’s critical.

I also think it’s important to understand that, at least in this edge computing spectrum today, there’s a wide array of different types of resources or hardware platforms that you could choose from, some of which may have better performance. Others may have better longevity or reliability in some terms, but I think it’s important for a customer to understand that, in order to select the right hardware, you kind of have to understand what are the iterations of the program throughout the life cycle of whatever solution you’re trying to implement.

So those are the first things, and all what I would call more fundamentals when you approach some of these new solutions. You know, there’s a lot of tools out there because if you think about it PCs today or personal computers that maybe Lenovo sells in our core commercial business are based on user personas. So, if you’re an engineering student or professional, you may use a workstation machine with great graphics, some good performance. If you’re a mobile executive you’re probably using a ThinkPad and traveling around the world—you need that mobility. Or if you’re a task-based worker you might have a desktop computer.

In edge computing there are no personas, and the applications are endless. And I would say I think ZEDEDA is proof that there are no standard ecosystems of applications. So you have to be able to build in that elasticity, and you can do that with ZEDEDA and Lenovo, frankly.

Christina Cardoza: Now I want to expand a little bit on some of those—the hardware and software platforms that you just mentioned. Jason, can you talk a little bit more about how you deploy AI at the edge and how you approach edge computing? What tools and technologies are you seeing manufacturers using to approach this?

Jason Shepherd: There’s obviously a lot of kind of special-built, purpose-built solutions, vertical solutions. Any new market, I always say, goes vertical before it goes horizontal. It is, as I mentioned, about domain knowledge. And it’s attractive upfront to buy, like, a turnkey solution that has everything tied together and someone that knows everything you need to know about quality control who does everything for you.

And there’s been computer vision solutions for a long time that are more proprietary—kind of closed systems for things like quality control on the factory floor. That’s not new, but what’s new is everything becoming software defined where you abstract the applications from that infrastructure. So if you look at in terms of tools, if you look at historically—I mean, constrained devices, really, really low-end compute power sensors, just kind of lightweight actuators, things like that—those are inherently so constrained that they’re embedded software.

In the manufacturing world, control systems have historically been very closed. And that’s a play to create stickiness for that control supplier. And of course there’s implications if it’s not tightly controlled in terms of safety and process uptime, okay? So that’s kind of like the world that it’s been for a while.

Meanwhile, in the IT space we’ve been kind of shifting, and the pendulum swings between centralized and decentralized. Over the past 10 years we’ve seen the public cloud grow. Why do people like public cloud? Because it basically abstracts all of the complexity, and I can just sign up and just—I’m looking at resources, compute storage, networking, and just start deploying apps and go to town.

What’s happening with edge and the way we’ve evolved technologies and the compute power that’s being enabled, of course, by Intel and all the portfolio, like from a Lenovo, is we are able to take those public cloud elements—this platform independence, cloud-native development, continuous delivery of software, always updating and innovating—and we’re able to use those tools and shift them back to the edge. And there’s a certain footprint that you can do this with. And it goes all the way to the point where basically we’re at the point where we’re taking the public cloud experience and extending it right to the process, right above the manufacturing process that’s always been there, to where now we can get that public cloud experience, but literally on a box on the shop floor. I don’t need to one-size bootstrap everything I’m looking at; I want to deploy an AI model, I want to assign it to this GPU, I want to add this protocol-normalization software. I want to move my SCADA software and my historian onto the same box. It’s this notion of workload consolidation. It is using these tools that we’ve developed, the principles from the public cloud, but coming down.

Now what we do at ZEDEDA, what’s different is while we help expand those tools from a management standpoint, a security standpoint, we have to account for the fact that even though it’s same principles, it’s not in a physically secure data center. We have to assume that someone can walk up and start trying to hack on that box. When you’re in a data center, you have a defined network perimeter. We have to assume that you’re deployed on untrusted networks. So the way that our solution is architected, and there’s a bunch of different tool sets out there, is take the public cloud experience, extend it out as far as you can, to where basically it starts to converge with historical process, stuff in the field, but you build a zero-trust model around it to where you’re assuming that you’re not locked up in a data center. When you’re outside of the data center, you have to assume you’re going to lose connectivity to the cloud at times. So you’ve got to be able to withstand that. So this is where the one-size-fits-all thing doesn’t come into play.

There’s great data center tools out there for scaling. They’re evolving, with Kubernetes coming out of the cloud and down, but they start to kind of fall apart a bit when you get out of a traditional data center. That’s where solutions that we’re working on, and with the broader community pickup, then eventually you get into constrained devices, and it is inherently death by a thousand cuts. Everything’s custom. And so those tools there, and then of course, there’s—we’ll talk a little bit about some of the frameworks and kind of the AI tools, but as Blake mentioned, I’m very much stressing when you get to the real world, this foundational infrastructure, this notion of how do you manage the life cycle. How do you deploy it? Oh, and how do you do it without a bunch of IT skillsets running around everywhere. Because it’s not—you don’t have those skills everywhere. It’s got to be usable and definitely secure—and security versus usability is another big one. Because if you make it too locked down no one wants to use it. Or they start to bypass things. A lot of stuff. But I think the key is the tools are there—you just need to invest in the right ones and realize that it is that continuum that we’re talking about.

Christina Cardoza: Now I want to touch on some points you made about the cloud. The cloud isn’t going anywhere, right? And there may be some things that manufacturers want to do that may not make sense to do at the edge. So Blake, can you talk a little bit about the relationship between cloud and edge computing? What the ongoing role of cloud is in edge computing, and what sort of makes sense from a manufacturer’s perspective to do in the cloud and to do at the edge?

Blake Kerrigan: Yeah. I mean, in the line of what Jason was just talking about, we kind of see—ultimately edge will—essentially, in a virtual world we become an extension of the cloud. You know, the cloud means a lot of different things to a lot of different people, but if we’re talking about major CSP, or cloud service provider, I think the central role that they’ll play in the future is probably more around—I mean, obviously with edge computing, it’s all about getting meaningful, insightful data that you would want to either store or do more intensive AI on—which may happen in a hyperscale data center when the data gets so big and it can’t be crunched. But essentially what we are doing is trying to comb down the amount of uneventful or uninsightful data.

But I do think once you get the meaningful data in the cloud—if, as an example, we were talking about defect detection, once you have enough information from—let’s say you have 50 different plants around the United States and every single one of them has a defect detection, computer vision application running on the factory floor, well ultimately you want to share the training and knowledge that you have from one factory to another. And the only real practical way to do that is going to be in the cloud.

So for me, there’s really two main purposes. The first one is really around orchestration. So, how can I remotely orchestrate and create an environment where I can manage those applications from outside the site—not at the edge, but in the cloud. And then the other one is, in order to make these models better over time, you do have to train them initially. That’s a big part of AI and computer vision that’s, back to our earlier point, probably woefully underestimated in terms of the amount of resources and time that it takes to do that.

One of the most effective ways to do that is in collaboration in the cloud. So I do think there’s a place for the cloud when it comes to edge computing and, more specifically, AI at the edge, in the form of crunching big data that’s derived from edge-computed or edge-analyzed data. And then the other side of that is training of AI workloads to be then redistributed back to the edge to become more efficient and more impactful, more insightful to the users.

Jason Shepherd: Yeah, definitely. One way I would summarize it is there’s kind of three buckets. One is cloud centric where it’s maybe I’m doing light preprocessing at the edge, normalizing IoT data. And then I’m kind of—so I’m doing a lightweight edge computing, so to speak, and then I’m doing a lot of the heavy crunching in the cloud. So that’s one. Another one Blake mentioned, it’s where I’m using the power of the cloud to train models. And then I’m deploying, say inferencing models to the edge for kind of local action. You know, that’s kind of like cloud-supported or cloud-assisted model. And then there’s like an edge-centric model where I’m doing all the heavy lifting on the data. Maybe I’m even just keeping my data on prem, you know. I might still be kind of training in the cloud or whatnot, but maybe then I just do orchestration from the cloud because it’s easier to do that over wide areas and remote areas, but the data still stays in location so that maybe I’ve got data sovereignty issues or things like that.

So it’s exactly what Blake said. It’s not kind of one size fits all, but that’s one framework to look at: where is the centricity in terms of the processing? And, of course, the cloud helps support it. I mean, we always say at ZEDEDA, the edge is the last cloud to build; it’s basically just the fringes of what the cloud is. It’s a little abstract, or just becoming more gray.

Christina Cardoza: Now, going back to a point you made earlier, Jason, manufacturers don’t always have the IT staff on hand or the IT expertise to do all of this. So I know there’s no silver bullet tools out there, but are there any tools and technologies that you can mention that may help them on this journey, especially if they’re lacking the dedicated IT staff that it takes to do all of this?

Jason Shepherd: Is a fair answer, ZEDEDA?

Blake Kerrigan: That’s what I was going to say.

Jason Shepherd: I mean, you know, let’s face it. So like, again, there’s a lot of people that have the domain knowledge, the experts are the folks on the floor and whatnot—it’s not the folks that do the data center. I mean, everyone’s experts in their own right. But, and that’s why a lot of these different tool sets as they become more democratized—I mean, you look at public cloud, it’s attractive because I can sign up and I might not know anything about IT, but I can start playing with apps and maybe I start getting into the OpenVINO community and working with that community. I mean, there’s a lot of resources out there for just those initial experimentations. But when you get into trying to deploy in the real world, you don’t have the staff out there that’s used to scripting and doing data center stuff and all that. Plus, the scale factor is a lot bigger. That’s where tools like—why we exist is to just make that much easier and, again, to give you the public cloud experience, but all the way down out into the field, delivering the right security models and all that.

You know, there’s a lot of other tools just in terms of, we’ll talk more about OpenVINO, but you know, there’s the whole low-code, no-code platform solution. It’s, it really is about finding the right tools. And then applying domain knowledge on top. A friend of mine used to work on factory floors and doing kind of from the IT space. And you bring all the data science people in and the AI frameworks and yada yada, and then you’ve got, like, the person that’s been on the factory floor for like 30 years, that knows, “Okay, when this happens, yeah, it’s cool. Don’t worry about it. Oh, that’s bad.” And so literally they brought these people together and the data scientists had to be told by the domain expert, “Well, here’s how you program it,” because they don’t know about the domain stuff. And literally at the end, they called it “Bradalytics”—the guy’s name is Brad. And so we got Bradalytics on the floor. It’s important to bring those right tools that simplify things with the domain knowledge.

Christina Cardoza: Now you mentioned OpenVINO. I should note that insight.tech and IoT Chat are Intel publications. So Blake, I want to turn the conversation to you a little bit since Jason mentioned ZEDEDA, learn a little bit more about where Lenovo fits in this space, but also how you work with Intel, and what the value of that partnership has been.

Blake Kerrigan: Yeah, look, the value of the relationship goes beyond just edge computing, obviously. I mean, the—one of our—Intel is our biggest and strongest partner from a silicon perspective when it comes to edge computing. It’s interesting because Intel holds a lot of legacy ground in the embedded space, the industrial PC space, which just is more or less just a derivative of, an evolution of—. But one of the things in working with Intel, a couple things that come to mind—one of which is “works with,” right? So, most applications, most ISVs, most integrators are familiar with x86 architecture and have worked with it for years. So that’s one thing.

The other side of it is Intel continues to be at the cutting edge of this. They continue to make investments in feature functions that are important at the edge and not just in data center and not just in PC. Some of those are silicon based, whether we’re talking about large core, small core architectures or we’re thinking about integrated GPUs, which are extremely interesting at the edge where you have constraints on cost, more specifically.

Some of the other areas where I feel like our customers understand that “better together” story is really around, number one, OpenVINO. So if you are trying to port maybe AI workloads that have been trained and developed on some sort of a discrete GPU system which isn’t really optimized to run at the edge, you can port these AI applications over to and optimize them for maybe an integrated GPU option like you have with Intel. So that’s very important from a TCO and ROI perspective.

I talked earlier about what kind of outcome you want to derive. What’s typically driven by cost or increase in revenue or increase in safety. And in order to do that, you have to be extremely conscious of what those costs are. You know, not just with the deployment, but also in the hardware itself. And another part of—I guess OpenVINO sits within this larger ecosystem of tools from Intel, and the one of the ones that I really like, because it helps our customers get started quickly, is Intel DevCloud. And what that essentially allows us to do is instead of sending four or five different machines to a customer, we let them get started in a development environment that is essentially cloud based. This could be cloud work, could be an on-prem depending on what type of sovereignty issues you might have, or security requirements. But this allows a customer to basically emulate, if you will, and do almost real-world scenarios. So they can control all sorts of different parameters and run their applications and their workloads in this environment. So, obviously that creates efficiencies in terms of time to market or time to deployment for our customers.

You know, once our customer can use some of these tools to become ready for that first POC, they go through the POC and they realize those objectives, the Lenovo value proposition is pretty straightforward. We provide very secure, highly reliable hardware in over 180 markets around the globe. There are very few companies in the world that can make that statement. And that’s what we’re trying to bring to the edge computing market, because we understand our customers are going to want to deploy systems in unsecure or very remote places. And that’s why edge computing, Lenovo’s DNA, is—lends itself to be a real player in this edge computing space. So when you think about, when I get to scale and I want to deploy hundreds of factories of thousands of nodes, hundreds of different AI models, you’re going to want partners that can provide things like zero-touch, root-of-trust provisioning, all sorts of—you’re going to want to make sure they have a trusted supplier program, or transparent supply chain in other words. And then you’re also going to want a partner that can help you with factory imaging, making sure that we can provide the right configuration out of the factory so you don’t have to land products in a landing zone for the imaging of the product, either within your own company as a manufacturer or maybe some third party who, as expected, would want to create a business around just imaging your machine. So, with Lenovo we want to be able to create the most frictionless experience for a customer who is trying to deploy infrastructure at the edge, which is why Lenovo and ZEDEDA really complement each other in our alignment with Intel.

Jason Shepherd: Yeah. I’ll say that we’re basically a SaaS company—it’s all software, but coming from the hardware space. I can be the first to say hardware is hard. And so partnering with Lenovo makes that simple, I mean, especially now with the supply chain things that we’ve been going through and all that, it’s, you got to find a trusted supplier that can help simplify all that complexity. And of course make things that are reliable. I mean, we see a lot of people throwing Raspberry Pis out in the field, but it’s, like, sure, it was a hundred bucks, but once you drive a truck out, you just spent a thousand. But yeah, I think it’s important to work with people that are building reliable infrastructure.

Christina Cardoza: Now, a big point you made at the beginning, Jason, was that customers are afraid of getting locked into a particular technology or a vendor. So when you’re choosing tools and partners, how do you make sure it’s going to not only meet the needs you have today, but be able to scale and change as time goes on?

Jason Shepherd: Yeah, I mean, I think that’s kind of going back to some of the things we’ve touched on, this shift from proprietary systems to more open systems. We’ve seen this throughout technology. I mean, it used to be plain old telephone systems—you know, POTS—then all of a sudden we get VoIP; that’s that transition from facilities to more IT led, but working together—CCTV to IP-based cameras. We’re in that transition period now, where we’re taking these proprietary, purpose-built technologies, and we’re democratizing them and opening them up.

And so one way to avoid getting locked in is, as we’ve been saying, is to separate the infrastructure plane from the application plane. You know, once you get tied into a full vertical stack, sounds great. You’re doing everything for me, from analytics to management and security and whatever. You just got locked in. But if you decouple yourself with edge infrastructure, the moment—as close to the data as possible, this is why ZEDEDA uses an open foundation.

Of course, the Lenovo portfolio is agnostic to data or to application stacks—super flexible. If you decouple yourself from a cloud as close to the source of data, you’re a free agent to send your data wherever. If you decide to go to one public cloud, great, have at it. But once you get the bill, you’re probably going to want to figure out a multicloud strategy. And so that’s one key.

The other thing is communities. We mentioned OpenVINO, this notion of democratizing technologies by working in communities, so the OpenVINO community, of course. Then there’s ONNX, which I know you know—the OpenVINO community is working with ONNX on how to standardize how AI frameworks work together, like TensorFlow and OpenVINO, et cetera. The root of our solution—we sell a SaaS orchestration cloud for edge computing, but we use EVE-OS from LF Edge. The Linux Foundation’s LF Edge community is democratizing a lot of the kind of middleware of the plumbing for edge computing. By investing in those technologies it not only reduces the undifferentiated heavy lifting that so many people too often do, it helps you focus on value.

So as all of these technologies start to converge, we’re going to see more and more acceleration and transformation. And the key is to not feel like you should be inventing the plumbing. The worst thing you could do right now is to be focused on trying to own the plumbing. The plumbing is going to be—and I always say, you have to democratize the south, like towards data, to monetize the north. And that’s where the real money is.

And so we work a lot with Intel, just on other—quick point is we really like Intel’s infrastructure, because I mentioned this whole notion of moving the public cloud experience as close to the physical world as possible. We leverage all the virtualization technologies within the silicon. You know, we can basically abstract all of the application layer using our solution to where, as a developer, I don’t have to have a lot of skills. I can just—I’m going deploy that AI model and assign it to that GPU. I want this data-analytics or data-ingestion stack to be assigned to those two Intel CPU cores. And so it gives you that sort of, again, that public cloud experience. I don’t care about—all I care about is compute storage, networking, and just make—gimme the easy button to assign stuff. So we see that OpenVINO, as mentioned, but it really is important to do all the abstraction, but also invest in communities that are doing the heavy lifting for you. So then you can focus on value.

Christina Cardoza: Great. Unfortunately we are running out of time, but before we go, I just want to throw it back to you guys one last time for any final key thoughts or takeaways you want to leave our listeners with today. Blake, I’ll start with you.

Blake Kerrigan: I think that the key takeaway for me is—and it goes back to maybe some of what Jason said, and some of what I’ve said—selecting hardware is hard, and I think a lot of people start there, and that’s probably not necessarily the first step. It’s interesting, me saying that, coming from a hardware company, but you know, at Lenovo what we want to be a part of is that first step in the journey. And I would encourage all of our customers, or even folks that aren’t our customers, to reach out to our specialists and see how we can help you understand what are these roadblocks that you’re going to run into. And then also open you up to the ecosystem of partners that we have, whether it’s Intel or ZEDEDA or others, there’s all sorts of different application layers that run on top of these fundamental horizontal hardware or software stacks, like ZEDEDA as well as our hardware at Lenovo.

My takeaway for, or I guess my leave behind for this would be bring us your problems, bring us your biggest and most difficult problems and let us help you design that, implement it and deploy it, and realize those insights and outcomes.

Jason Shepherd: Yeah. I mean, I would just add, as we close out, it’s to totally agree. It’s all about ecosystem, invest in community so you can focus on more value. You know, the “it takes a village” mantra and, for us, if you do all the abstractions and you create this more composable software, definable infrastructure. It’s like another mantra of mine is, I’m all about choice, but I’ll tell you the best choices. So then it’s like, okay, if you come to me, if we work together to architect it right, then we can kind of focus on what are the right choices, both open source and of course proprietary.

This isn’t about a free-for-all; this is about making money and helping customers and new experiences and all that. But very much it’s about partnership, like we’re talking about here. But then also to navigate the crazy field out there, but also focus on real value versus reinvention.

Christina Cardoza: Well, with that, I just want to thank you both again for joining the podcast today.

Jason Shepherd: Yeah, great. Thanks for having us.

Christina Cardoza: And thanks to our listeners for tuning in. If you liked this episode, please like, subscribe, leave a great review, all of the above, on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.