The Power of Location Intelligence with AI and Digital Twins

Tracking and monitoring of the environment—or location intelligence—is pretty ubiquitous in our lives nowadays, from security cameras to backup assist in the car. But businesses are just starting to understand the possibilities inherent in matching location intelligence with technologies like AI and digital twins. And if the idea of the digital twin still seems a little fantastical, fasten your seatbelt. We’re about to take it into the next dimension: time. Because when a particular asset is understood within a particular moment, monitoring and spatial awareness can affect way more than just security or defect detection.

Of course, with great technological advancement comes great complexity. So Tony Franklin, General Manager and Senior Director of Federal and Aerospace Markets at Intel, is well positioned to explain the whole concept of location intelligence: the challenges it can solve—both now and in the future—as well as the technology designed to help make it all happen, including the Intel® SceneScape platform (Video 1).

Video 1. Tony Franklin, General Manager and Senior Director of Federal and Aerospace Markets at Intel, explains the importance of location intelligence and the use of AI and digital twins.

How are you seeing the concepts of digital twins and location intelligence being used?

I think we’re all really used to location intelligence without even knowing it’s there. Everyone has Google Maps on their phone. Anyone who’s had children knows about the Life360 app: You know exactly where someone is, how long they’ve been there, how fast they’re moving.

But on the business side, we’re just starting to understand how impactful location intelligence can be from a financial point of view. So for a shipping company like UPS, if its routes from point A to point B aren’t accurate, it could cost many millions of dollars. It’s also important for things like sustainability. I read recently that 27% of greenhouse gas emissions in the US are from transportation.

And, in addition to location intelligence, I think what we’re starting to really understand is time-based spatial intelligence. It’s not just about location; it’s whether we really understand what’s going on around us, or around that asset or object or person, in a particular moment. Digital twins allow you to re-create the space and then also understand the particular time—both in real time and, if you need to hit the rewind button and do analysis, you can do that also.

What’s also valuable about digital twins is that there’s a naturally created abstraction. We know that it’s a digital replica of the real world, and so analysis is being done on the replica, not on the actual data coming in. And that digital replica can then make the data available to multiple applications.

You do need to use standards-based technology when there are multiple applications and different types of AI, because you may need one type of AI to identify certain animals or people or assets, and another to identify different cars or weather or more physics-like models.

What challenges are businesses facing that location intelligence can address?

I think one of the biggest challenges is siloed data coming from different applications. For example, we have a ton of applications that work together on our phones, but that doesn’t mean the data in those apps works together.

In the business world there might be an app to monitor physical security, but another app to monitor, say, the robots in a factory. They all have cameras, they all have sensory data, but they’re not connected—all the data is sitting in different silos. So how do you connect that data to increase situational awareness and make better decisions? And better decisions ideally mean either saving money, having an opportunity to make money, or creating some other value like a safer environment.

“AI and the integration of these technologies and sensor data is so important. It allows these systems to be more intelligent and to actually understand the environment” – Tony Franklin, @intel via @insightdottech

Another challenge is just the need for a mental shift. A lot of the technology we’re already using comes from games. Video games are so realistic these days, and in games you can see everything in your 3D environment. You know location; you have multiple kinds of sensory data coming in—sound or environmental. And all of that is integrated into the experience. So more and more we are starting to want to incorporate that into our day-to-day lives and into business as well.

How is Intel helping businesses implement digital twins and AI?

There’s always a ton of data involved that needs to be labeled to make it available, and we have lots of tools to connect this all together. If we’re talking streaming data in real time, there’s the Intel® Distribution of OpenVINO toolkit, which allows you to apply inference and also to pick the best compute technology for the particular data coming in.

So you’re bringing this data in, applying inference, continuing a loop. Then the Intel® Geti platform allows you to train models on that data. And it allows you to do it quickly, instead of needing—if we’re talking images for computer vision—thousands and thousands of images. And no one needs a PhD in data science, either. That’s what Geti is for.

In the middle we have something called Intel® SceneScape. Like Geti, SceneScape is intended for end users. Think of it as a software framework sitting in the middle of OpenVINO and Geti to really simplify the creation of the digital twin, to make sense of the data you have, and to make that data available and usable in an impactful way. It allows the end user to easily implement AI technology in an open, standard way and to leverage the best computing technology underneath it.

So, the sensor data comes in. OpenVINO will then apply inference for object detection or classification, for example. You can use Open Model Zoo—a range of models from all the partners we work with—and implement that model with SceneScape. Then you use Geti to train on the data.
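
As a rough illustration of that inference step (not the actual SceneScape integration), here is what running a detection model with the OpenVINO Python API can look like. The model file and input shape are placeholders, such as a detection model from Open Model Zoo, and the "AUTO" device setting lets the runtime pick the best available compute.

```python
# Minimal OpenVINO inference sketch. The model path and input are placeholders;
# "AUTO" lets the runtime choose the best available device (CPU, GPU, etc.).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")        # hypothetical Open Model Zoo model
compiled = core.compile_model(model, device_name="AUTO")

frame = np.zeros((1, 3, 544, 992), dtype=np.float32)   # stand-in for a camera frame
results = compiled([frame])                            # run inference on the frame
detections = results[compiled.output(0)]               # raw detections for downstream use
print(detections.shape)
```

From there, the detections can be handed to a scene-management layer like SceneScape, and hard examples can go back into Geti for retraining.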

SceneScape also allows you to use any sensor for any application to monitor and track any space. We’re so used to video, but there are other sensors that allow you to increase situational awareness for your environment. You could have LiDAR—all the electric and autonomous vehicles have that—or environmental, temperature, radiation, or sound sensors, as well as text data.
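
To make the "any sensor, any space" idea concrete, here is a small, hypothetical sketch of a common timestamped envelope that heterogeneous sensors could share before feeding a scene. The class and field names are invented for illustration; they are not the SceneScape data model.

```python
# Hypothetical sketch: normalizing readings from different sensor types into one
# timestamped "scene update" format. Names and fields are illustrative only.
import time
from dataclasses import dataclass, field

@dataclass
class SceneUpdate:
    sensor_id: str
    modality: str            # "camera", "lidar", "environmental", ...
    data: dict               # modality-specific payload
    timestamp: float = field(default_factory=time.time)

# A time-ordered history is what makes "hitting the rewind button" possible later.
scene_history: list[SceneUpdate] = []
scene_history.append(SceneUpdate("cam-01", "camera", {"object": "person", "bbox": [120, 80, 60, 170]}))
scene_history.append(SceneUpdate("lidar-02", "lidar", {"object": "vehicle", "distance_m": 14.2}))
scene_history.append(SceneUpdate("temp-03", "environmental", {"celsius": 21.5}))
print(len(scene_history), "updates in the scene history")
```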

Can you share any case studies of Intel® SceneScape in action?

One commonality among the customers that have been using SceneScape is the need to understand more about their environment—either the environment they’re in or the one they’re monitoring—and to connect the sensors and the data and make that data available. They want to increase the use of that data and gain more situational awareness from it.

So think about an airport. There’s a need to track where people are congregating, to track queue times, etc. When we were in the early stages of Covid, there was a need to track body temperature with forehead sensors. Airports have spaces that are already being monitored, but now they need to connect the data. The sensor that’s looking at the forehead generally isn’t connected to the cameras that are looking at the queue line. Well, now they need to be.

It builds relationships between data points: You see this person and see that they’ve been in line for 30 minutes, but you also see that they have a high temperature and they’re not socially distanced. Or you see that this person was with a bag and was moving with the bag, and now the bag is sitting stationary, but the person kept moving.
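
A toy version of that kind of relationship rule might look like the sketch below: given positions sampled over the same time window, flag a bag that has stayed put while its associated person moved away. The thresholds and track format are invented for illustration.

```python
# Toy spatiotemporal rule (invented thresholds and track format): flag a bag as
# possibly abandoned if it stayed put while its associated person moved away.
import math

def distance(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def bag_abandoned(bag_track: list[tuple[float, float]],
                  person_track: list[tuple[float, float]],
                  stationary_m: float = 0.5,
                  separation_m: float = 10.0) -> bool:
    """True if the bag barely moved over the window but its owner left it behind."""
    bag_moved = distance(bag_track[0], bag_track[-1])
    gap_now = distance(bag_track[-1], person_track[-1])
    return bag_moved < stationary_m and gap_now > separation_m

# Positions in meters, sampled at the same moments for both tracks.
bag = [(5.0, 5.0), (5.1, 5.0), (5.1, 5.1)]
person = [(5.0, 5.5), (9.0, 6.0), (20.0, 8.0)]
print(bag_abandoned(bag, person))  # True: bag is stationary, person is ~15 m away
```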

And you’re not just looking at Terminal A, Gate 2, if you will. You need all the terminals and all the gates, and you need to see it in a single pane of glass. That’s one of the benefits that SceneScape provides.

How does Intel® SceneScape address privacy concerns?

Privacy is absolutely important. But we’re just looking at detecting the actual object—is it a person, is it a thing, is it a car? We want to identify what that is, we want to identify distance, we want to identify motion. We don’t actually do facial recognition or anything like that. We’re inferring the data but then allowing the customers to implement what they choose for their particular application.

Where do you think this space is going next?

One of the use cases I’m waiting for is the patient digital twin. Now you’ve got different medical records in different places. Historical data isn’t being used with real-time data, or being used against the reams and reams of medical history across many patients that could apply to me. So I would love to see a patient digital twin that’s constantly being updated; that would be ideal.

But how about just tracking medical instruments? Before surgery there were 10 instruments, and you want to make sure that there are still 10 instruments when the surgery is over—that they’re not inadvertently left somewhere they shouldn’t be.

So there are immediate applications that can help with business operations today, as I’ve already talked about. And then there are the future-state ones that I think we’re all waiting for, where I want my patient digital twin.

I think as companies start to realize that they can de-silo their data and make relationships or connections between the data and the systems they have across a range of applications—not just in one room, not one floor, not one building, but maybe across a campus—they can start to get actual value that impacts their bottom line: they can make more money, and they can save more money.

Are there any final thoughts or key takeaways you want to leave us with?

Think about traffic as a use case; location intelligence could help save lives. And we are seeing customers look at SceneScape with this application. Many cars today have camera sensors—backup sensors or front cameras—and most intersections have cameras. But today they don’t talk to each other.

Well, what if there’s a car that’s coming up at speed, and there’s also a camera that can see pedestrians coming around a blind spot? I want the car to know that and to start braking automatically. Right now most cars coming up on another car too fast will automatically start braking. But they can’t do that with a person if they don’t know that that person is coming around the corner, because they can’t see it. Or, if the camera can see it, they don’t necessarily know how far away the person is or how fast the car is going.

As humans, we get into a car and we know how fast it’s going; we know if somebody’s coming. And we take the way our brains understand that for granted. But cameras don’t understand that. So that’s an application that can be applied today, and some cities are actually looking at those types of applications.

And that’s why AI and the integration of these technologies and sensor data is so important. It allows these systems to be more intelligent and to actually understand the environment. Again, time-based spatial intelligence: distance, time, speed, relationships between objects.

And that’s exactly what we’re working on—working with the large ecosystem Intel has to make it easy for companies to implement this technology. It’s an exciting time, and we’re looking forward to helping companies make a difference.

Related Content

To learn more about the importance of location intelligence, listen to Gaining Location Intelligence with AI and Digital Twins and read Monitor, Track, and Analyze Any Space with Intel® SceneScape. For the latest innovations from Intel, follow them on Twitter @intel and on LinkedIn at Intel Corporation.

 

This article was edited by Erin Noble, copy editor.

Bringing Industrial IoT Devices to Rugged Environments

IoT has permeated nearly every industry as businesses seek better access and insights into data and operations. But not all IoT devices are well suited for the rugged and sometimes harsh environments they are needed in—both outdoor and indoor. For example, a business that wants to deploy an IoT device in the field to monitor assets and detect anomalies must consider factors like the range of weather conditions and temperatures that the device may encounter.

Fortunately, recent advancements in technology have made it easier to develop ruggedized industrial IoT devices that can reliably and efficiently operate in harsh environments. In this podcast, we will explore the recent advancements that have made rugged IoT a reality, the types of environmental concerns and risks that these devices must withstand, and the best approach for implementing rugged IoT devices in the field.

Listen Here

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: ASUS, OnLogic, and Crystal Group

Our guests this episode are Jason Lu, Product Manager of IoT at ASUS, a leader in computing hardware; David Zhu-Grant, Senior Product Manager at OnLogic, a global industrial PC manufacturer; and Chad Hutchinson, Vice President of Engineering at Crystal Group, a leading designer of rugged computing hardware.

Prior to joining ASUS, Jason worked across industries including embedded systems, consumer electronics, PC/networking, and digital TV.

David has a background in Electrical Engineering and prior to his role at OnLogic spent more than eight years leveraging product development innovation and growing revenue for a global transportation and defense company.

Chad has been part of Crystal Group for more than a decade, excelling as the Director of Engineering and Product Development with a focus in ruggedized commercial-off-the-shelf electronics before being promoted to Vice President of Engineering.

Podcast Topics

Jason, David, and Chad answer our questions about:

  • (3:08) The importance of rugged IoT
  • (7:19) Deployment challenges for harsh, outdoor environments
  • (11:24) Types of challenges faced in indoor environments
  • (12:44) Overcoming rugged IoT deployment challenges
  • (15:25) Implementing rugged IoT devices
  • (18:40) Choosing scalable and future-proofed rugged IoT capabilities
  • (22:45) Rugged IoT use cases and real-world examples
  • (30:57) Leveraging partnerships to create better rugged IoT solutions

Related Content

For the latest innovations from ASUS, OnLogic, and Crystal Group, follow them on social media.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about what rugged IoT means and how to make it a reality with a panel of guests from ASUS, OnLogic, and Crystal Group. But before we get into the conversation, let’s get to know our guests a bit more.

Jason, I’ll start with you. We have Jason Lu from ASUS. Can you tell us more about yourself and the company?

Jason Lu: Sure. Hi everyone, my name is Jason Lu. I’m the Product Director for ASUS IoT. ASUS IoT is a new business unit under a new business group called AIoT under ASUS. And we focus on industrial PCs for embedded applications. I’m very happy to have this opportunity to meet you guys. Thank you.

Christina Cardoza: Yeah, absolutely, and obviously industrial IoT there. That’s going to be a big conversation today when we’re talking about rugged IoT. So before we dive in, David Zhu-Grant, from OnLogic. Welcome to the show. Please tell us more about yourself and the company.

David Zhu-Grant: Yeah, thanks for having us on. So, David Zhu-Grant here. I’m a Senior Product Manager here at OnLogic. We’re a global company that designs and builds our own industrial and rugged computers. We have a real emphasis on supporting our customers. So, the customers tell us that they like our high-quality products, customization capabilities, and excellent sales engineering. We really want to build that trust with customers, and I think it’s really important in this rugged IoT space so customers can advance their ideas with OnLogic. So, thanks for having us on.

Christina Cardoza: Great, yeah, the pleasure is all mine. And last but not least, Chad Hutchinson from Crystal Group. What can you tell us about Crystal Group, and why you guys are a part of this conversation today?

Chad Hutchinson: Well, thank you for having us on. My name’s Chad Hutchinson. I’m Vice President of Engineering here at Crystal Group. We focus on bringing commercial off-the-shelf electronics—servers and displays and switches and things of that nature—into environments for which they were never designed, to meet the needs of shock and vibration, high temp, and those things. So bringing IoT into the field is one of the things that we specialize in.

Christina Cardoza: Yeah, absolutely. And like you said, sometimes these devices are brought into environments that they were never designed for. So as more businesses want to take advantage of all the benefits and capabilities that the Internet of Things has to offer, a lot are finding that traditional devices aren’t well suited for, like you said, the environment that they want to use them in. There are a lot of other conditions and concerns that they have to worry about that maybe challenge them and limit them in taking advantage of IoT. But companies like yours, with rugged IoT devices out there, are making it possible.

So, Jason, I wanted to kick the conversation off with you today. If you could talk a little bit more about when we say “rugged IoT,” what are we talking about? Why is it important to businesses, and what’s the significance of this?

Jason Lu: Yes, and as I mentioned earlier, IoT devices are deployed in different environments, and more and more devices are coming online. But in lots of cases, right, there are hotter or colder temperatures that the devices need to operate in.

One example: a display in a McDonald’s drive-through would need to survive below-zero temperatures in wintertime, and the temperature inside that display enclosure can go as high as 70 degrees Celsius during the summertime. And the outdoor display where you see the menu—right now a lot of them are LCD, right? They would also need to be waterproof for snow or rain. And you also see a lot of touchscreens being used for user input. Outdoors, water drops may cause false touches and trigger some operation that you don’t really want, right? So ruggedized IoT devices are designed to operate in that kind of environment—to survive the cold and the hot temperatures.

And then in a vehicle application, for example, shock and vibration become very, very important, right? A fan, if you keep it in that environment, constantly bumping and vibrating, will usually fail very quickly. Then you have to send a technician. So in a lot of use cases you would need to go with a more ruggedized design from the beginning. This way the device will be able to survive over the life of its operation. So it’s important to take that into consideration when you start a new project.

Christina Cardoza: Yeah, and I love that example with the drive-through, because I think that is something that we’ve all come across; we’ve all seen these devices out here. But you don’t really think about what goes into making them work. And they’re becoming very intuitive these days. They’re changing based on what day it is or what you’re ordering, or maybe sometimes they’re personalized to your preferences if you’re a repeat customer. So it’s interesting to see what goes into this, because a lot of times it’s raining—there are reasons why we go through the drive-through—and if these aren’t up and running then you can’t really serve your customers.

But, David, I’m wondering what other types of industries or businesses do you see trying to leverage these IoT devices—if you have any other examples you can tack onto what Jason just said.

David Zhu-Grant: Yeah, definitely. And I think Jason hit the “why.” It is a pretty broad spectrum of industries that we’ve found. You see these rugged IoT devices in categories or industries like manufacturing, warehousing, logistics, smart ag or agriculture, smart cities, mining and energy—and that’s because these industries have those environments that Jason alluded to around temperature ranges—very cold, very hot, vibration, impacts or shock on vehicles. And then one other one would be they can be quite electrically noisy as well from a radio frequency–emissions perspective. So, big machinery turning on and off—that can also be a big factor for these devices. So those industries kind of capture those challenges for devices.

Christina Cardoza: Yeah, absolutely. And I think, in addition to there being the environmental concerns like you mentioned, there’s other concerns outside of the environment. Computing is getting more intensive, everybody wants high performance; that’s putting more pressure on these devices to operate.

And, Chad, I want to go back to something you said in your introduction, which really kicked us off. You know, these devices that we have been using today, they weren’t really made for environments that industries like manufacturing or some of these other businesses are looking to bring them into. So what have you seen from Crystal Group? What’s been the challenge to deploy an IoT device in some of these, call them harsher environments?

Chad Hutchinson: Yeah, so in industrial applications, the things that we see are primarily sand and dust. In some of these environments, like mining, you have a very fine coal powder that can interrupt your cooling, can also interrupt electrical connections, and gets into nooks and crannies and places that it really, really shouldn’t. Some of these things are very conductive because they’re metals, and that can cause shorts. So that’s one of the issues that you’re dealing with.

In the field, cooling is an absolutely huge issue. All electronics generally are doing more and more computation and consuming more power, which means that they’re generating more heat; that makes the thermals a challenge and the ability to cool the environment very difficult. As Jason pointed out, some of the temperatures can be upwards of 70 degrees Celsius in the environment. But also, you would think that electronics wouldn’t mind being in a cold environment. But if you get down to minus 40 degrees Celsius and below, commercial electronics don’t want to operate: they don’t boot properly, voltage regulators don’t function correctly, and that can cause some serious problems there as well. So that’s in the industrial or commercial markets.

But then you have naval, maritime-type environments, where you have salt fog that can cause problems. High-humidity environments, in both the industrial and maritime worlds. We do see these things deployed even on oil rigs, right? So that’s a very dirty environment, but also a high-moisture environment. And then, as I think David pointed out, the EMI perspective is a big issue.

But one of the other big challenges that we run into is actually the power interface. Generally IoT devices are designed for an office environment or someplace where you have a good, regulated input voltage that’s held within a tight tolerance and generally doesn’t have interruptions. Whereas out in the field you may not have 120 volts AC like we have at the outlets of our homes or offices. Some of these places have 28 volts DC. Switch yards have 125 volts DC as their common switchgear power. Aircraft is 28 volts; automotive is 12 volt DC distribution.

And generally these devices are not designed to take that input power. But there are also voltage transients by the nature of those things—starting surges that pull the voltage down. In many cases you can solve these problems with a voltage regulator or a power-conversion device in between, but those things can add an additional reliability challenge. So those are some of the challenges we find in taking a commercially available IoT device—one with great technology and great functionality that it’s bringing to the table—and trying to deploy it in the field.

Christina Cardoza: Yeah, absolutely. And I want to come back to some of those practices you mentioned for getting these devices out into the field successfully. Before we get into it—you’re right, these devices were first designed and thought of to be in, like, an office space, or somewhere where the environment doesn’t change very often. And when we think of rugged IoT, it’s often about bringing these out into the field, making sure that they can handle and withstand all the environmental concerns and, like you mentioned with mining environments and other areas, dust—just everything that needs to be considered.

But there’s also—David, you mentioned bringing it into the factory and manufacturing—there are also these industrial environments where you are still indoors, but there are still those temperature concerns or other concerns you need to worry about. So can you talk a little bit about when you’re not out in the field, what are the other environmental challenges that some of these factories or other businesses and industries need to worry about even though they’re indoors?

David Zhu-Grant: Yeah, exactly. So, I think some of these environments, you call them non-carpeted environments. So they’re still indoors, but they’re not like an office-sort-of environment. And they still suffer from the same sort of challenges. So you still see temperature extremes. They’re often not air conditioned, so you get very cold, very hot, airborne particulates; that was mentioned before. Sometimes it can be conductive, sometimes it can be dust that can really impact cooling and also reliability from a circuit board perspective.

I think the other one I mentioned was electrical interference, and that’s a big one. So, a lot of these warehouses and factories have either big air conditioning units or big machinery that starts up or stops, and that can cause a lot of EMI issues that need to be dealt with. It’s a bit of a challenge for these devices.

And then, also being indoors, you still have to deal with those factors for reliability over the long term. So this is not just about temporary interruptions; it can also be a reliability issue long term. If things stay too hot too long, your computer’s not going to last. So long-term reliability in these indoor environments is still a factor.

Christina Cardoza: Great. Now, we’ve been talking a lot about the challenges for IoT devices out in the field in these more rugged environments, but I want to dig deeper into, now that we know the challenges, how do we overcome the challenges? How can we create these devices to withstand these environments?

Chad, you started talking about some of this, so I want to start with you there. How can we design technology and the hardware to survive in the harsh environment—whether it’s out in the field or in a factory?

Chad Hutchinson: Well, from a mechanical perspective, shock and vibration—you really have to prevent any differential movement of the circuit cards from one another. Most computers these days have cards plugged into cards, things of that nature, and those are target places where dissimilar movement causes problems. Also, when you have flexing of the printed circuit board, that puts stress, repeated stress, on the solder joints. So if you can stop or prevent that movement by a really rigid chassis, that can help with shock and vibration.

When you talk about things like humidity, salt fog, even fungus, believe it or not—that’s actually something we have to deal with—you’re really trying to put a barrier between the outside world and the electronics itself. That’s generally a conformal coating of some kind—an insulating barrier that prevents that contact.

Heat is a big challenge. When you have high temperatures, you’re looking for cooling sources: whether you could put more air across something, or heat-sinking components that weren’t originally heat sunk—some things are open air but don’t have a formal heat sink on them. Converting to liquid cooling and plumbing in a source of cooling water is another thing that you’re seeing in electronics these days.

The power distribution or immunity is typically dealt with by input filtering on the front end that will address or prevent spikes and things of that nature from interfering with the supply. In brown-out situations, where you have a voltage dip, you might actually use a capacitive bank. Or in some cases you’ll use a UPS—that is, a battery-backed source—to address those issues. So, in short, you look at each and every one of the factors of the environment that is affecting the device, and you knock those things down one at a time. And for each of those things the industry has pretty much found a way to solve them.

Christina Cardoza: Now I’m curious, Chad, because when we talked earlier in the conversation, talking about how these devices weren’t made for these environments, and now we’re talking about some of these capabilities that we can add on to ensure that these devices can withstand these environments.

But I’m curious, what is the approach that businesses are using on this? Are they adding some of the capabilities and advantages like you just mentioned onto their existing IoT devices and applications that they have? Or are they building something specifically designed for the application and the environment that they are in?

Chad Hutchinson: That’s an excellent question. You know, when businesses are looking at IoT, in many cases they’re looking at the functionality and capability that it brings to the table. It’s allowing monitoring of some equipment, and real-time data transfer and communication back, so that we can make decisions and do things with that information.

And for the first piece of that, you want to look at the capability that the device brings to the table, and you want to look first to the commercial market, because that’s where the technology-refresh cycle, the latest cutting edge, is going to be—primarily because that’s where the big market is: things for the mass market. If you can find an item that is commercially available that has the functionality that you need, you want to go off and look to deploy that, test it in the environment, and see in what ways it does and does not perform. Identify those things, and if it works for you, provides the functionality, and lasts, then you’re probably done. In other cases you may identify that no commercially available item has the functionality that you actually need for your application, in which case you’re into a custom solution right from the get-go.

But let’s assume that the commercially available item can meet your needs but just won’t survive in the environment that you have. That’s when ruggedized or militarized commercial off-the-shelf electronics can really come into play at a more reasonable cost—higher than commercial off-the-shelf alone, but considerably cheaper than a pure ground-up custom solution. And that’s where you start going through and identifying only those aspects of your specific environment that are causing you a problem.

If humidity and sand and dust are not really an issue, then you don’t add coatings and things of that nature. If temperature is not really an issue and it’s really just a power issue, you look primarily at bolting on the corrective or protective functions that are necessary for your application. And if you follow that type of approach, then generally you end up with the most cost-effective solution. There are times when you look at the environment and realize that you’re at the limits of what you can do to make a commercially available product survive, and if you get to that point, you are really into a more custom, application-specific design.

Christina Cardoza: Yeah. And I like what you just said there—you want to find the capabilities for your needs, find what’s out there, and not add too much, to keep it cost effective.

So, Jason, I’m wondering how you’ve seen companies and customers add some of these capabilities, especially when they’re making these investments and we’re thinking about cost. You know, a lot of businesses want to make sure that the investments that they are making not only meet their needs today but future-proof them. They’re looking out to the future, what needs they may have or what needs they may come across that they don’t know that they have. So, how can they ensure that these improvements and capabilities that they’re adding on or that they need can continue to evolve and scale with their needs?

Jason Lu: Yeah, like Chad mentioned, right? This needs to be planned in the very, very early stage. A rugged device of course will be more expensive from a cost or even development perspective. So first the solution needs to work in the intended operating environment, right? You want to make sure that it will survive and will not fail over a long period of time—especially for IoT devices, which tend to operate 24/7. So you want to make sure that the product you’re going to deploy is going to survive for the planned operation duration.

And in order to achieve that goal, the design, review, and validation will take longer, right? It is not like going out and purchasing an office PC, where you shop around and pick one up in, I don’t know, Costco, and bring it back. This one you do need to plan and validate to make sure it meets your requirements. For example, you may want to decide whether you want to do the compute more locally or rely on the cloud. If you want to do more compute locally, that means you’re going to bring in more compute power, and then you’re going to have more heat, and more operating temperature to deal with. But when you put more of this on the cloud, then the connectivity, the reliability of that connection, is something you want to take into consideration as well.

So this needs to be well planned ahead, and then you decide what you want to accomplish. And then, during development, you can start thinking about whether you want to build this in-house or outsource it. Commercial off-the-shelf usually can be more cost effective, but then again you are relying on a supplier to supply that piece of the solution to you. Will they keep it available—longevity—so that you can purchase it even five, ten years from now? That’s the consideration—it’s more of a build-or-buy kind of decision.

And then during operation, right? How do you want to maintain the devices and provide service to them? Do you want to consider remote-update capability, or even how you upgrade? Because of the time it takes in development and validation, the initial cost usually will be more. And then you want this to operate—the longer the better—to get the return on that engineering effort.

So once you take all this into consideration, then you can start thinking about whether the return on investment can be justified. For example, with the kiosk—if this is an indoor environment, a regular, fan-based commercial PC might work. But if the kiosk is going to be outdoors, then most likely you have to go the rugged route, even though that’s going to be more expensive. In terms of the operation life cycle, though, that will keep you running for the longest time. So those kinds of considerations, taken early on during the design phase, will justify your investment in the long run.

Christina Cardoza: Yeah, absolutely. And those are some great considerations and recommendations you should be thinking about as you go on your way. So, I’m curious, Jason, do you have any real-world or customer examples that you can share with us to help us visualize this a bit more, how you helped your users really realize the benefits and make rugged IoT a reality? How you helped them, what their challenges were, and what the results they saw were?

Jason Lu: One example, actually a good example, that I can share is a robotic arm in a recycling plant. The recycling plant now deploys a robotic arm to do the picking—sorting and picking, right? And you can imagine that’s not a very friendly operating environment for the computer, especially with the dust inside. Things get dirty very, very quickly. And because they want to use a robot for the picking, they are using AI vision computers as well. But to do vision processing you actually need to use a GPU card in order to be quick enough to react to the conveyor that’s carrying that plastic through. So they have to use the GPU, and GPUs consume power and then come with a cooling fan.

So it’s kind of a catch-22 situation. In a very dusty environment, you have a GPU, you have a fan; they accumulate dust very quickly and then become defective very quickly. So they put it in an air-conditioned enclosure. But that is still expensive to build and hard to maintain, because that environment is still not friendly to an air conditioning machine.

So what we did is actually develop a fanless solution for a very high-performance CPU and a very high-performance GPU—in total, we’re talking about 300 watts—and we put them in a fanless chassis. So now you have a machine inside a big, metal aluminum enclosure with fins sticking out. It’s heavy, but once you deploy it, you can forget about it, because the fanless design doesn’t care about the dust. And it handles the operating temperature in that space—not totally outdoors, but still without air conditioning.

So right now that project is in the DVT phase, and the customer is very excited to see that kind of machine working in the field. Once they start deploying this, I think, from an operating and maintenance perspective, it will relieve a lot of truck rolls on the operations side.

Christina Cardoza: Yeah, absolutely. And as we’re talking about making sure you can future-proof or validate your investments, I also think once you see the success of one project, like the robotic arm, in one area of manufacturing, you can start building on it and bringing more of your devices to these environments.

David and Chad, I’d love to hear from you guys how you’re helping customers—some examples, if you can provide any. I’ll start with you, David from OnLogic, how you guys have been helping the end users.

David Zhu-Grant: Certainly, yeah. We’ve got one really good example which is in the mining-industry space. It’s a company called Flasheye. And so they use LiDAR, which is laser-ranging or laser-scanning technology to detect anomalies and prevent stoppages and malfunctions and accidents within that mining-materials transport application.

So this specific example was actually a belt—think of a conveyor belt with mining materials, rocks, and so forth going down it. And they’ve built this really smart solution that uses computer vision again, so the AI is sort of looking at what’s happening with the rocks and the flow and the belt, understanding spillage on the sides, and then also, further outside of that, whether people are in dangerous zones—all of that simultaneously.

And the system’s also located in a really harsh environment. It’s a mining example anyway, so there’s that dust element to it, but one of the deployments is also in northern Sweden. So basically arctic conditions—really cold—with all this other dust and vibration and debris going around. So, a really harsh environment, and they needed a computer that’s going to be rugged.

So obviously they picked an OnLogic computer, but the overall solution’s been so successful that it’s actually won them some innovation awards in the mining industry. So, really reducing things like accidents, making it more safe, and basically improving efficiency so they can have less downtime when those belts are damaged or there are issues with the material flow. So we thought that was a really good example of using a rugged device out in the field, and specifically in this kind of emerging AI space as well, with computer vision.

Christina Cardoza: Yeah, absolutely. And I love that example, because it’s not only about the rugged IoT device, like you said, it’s really an end-to-end solution sometimes.

You know, you guys are making these changes to withstand these environments, but there are other benefits and improvements and enhancements that you can get by doing this. Chad, wondering if you’ve seen similar things with your customers or how you’re helping your customers bring these devices to rugged environments?

Chad Hutchinson: Yeah. So, I think Jason had a great example regarding the thermal challenges that you have in trying to bring a GPU into a field application. They generate a lot of heat. So, my example is an autonomous vehicle application that we did, and that customer had a computer using LiDAR, radar, and sensors to do computer vision and figure out the picture for the driving scenario and whatnot. So it needed considerable computing horsepower out in the vehicle itself.

Challenges were primarily thermal, obviously, with multiple GPUs, and trying to get that heat out of the automobile. And shock and vibration, but then also the power interface, because generally computers, servers, and IoT devices are designed to operate on 120 volts AC, and, as I mentioned before, automotive is a 12 volt DC system. But also during a starting surge that voltage can actually get pulled down to 9 volts. So it can be really, really difficult when you’re dealing with that much power—in excess of 2 kilowatts being drawn in an automotive-type application.

So we designed a custom power supply for that specific environment—12 volts DC input with an ATX power output, which then interfaces with your commercially available motherboards and commercial off-the-shelf electronics. Likewise with the GPUs: custom heat sinks that were liquid cooled got that heat away from the device and allowed you to get it outside the cab of the vehicle to an external radiator so you can exhaust that heat.

And then shock and vibration of course, we’ve dealt with before in terms of a very solid, stiff chassis that protects the electronics from that movement. So it’s a very interesting project, and it’s working very well out there in the field for the customer.

Christina Cardoza: One thing that I like about all of these examples you guys have provided is—obviously we’ve been talking about the devices, the hardware, the embedded devices and computers it takes to bring IoT to rugged environments, but there was also, David, as you mentioned, an AI or computer vision aspect to all of this in addition to the hardware. And I should mention that the IoT Chat and insight.tech as a whole are sponsored by Intel. But we see an ongoing theme within all the articles and the partners that we talk to—just this “better together.” No one company can really do this alone.

And when we’re talking about rugged environments and getting the hardware up, there’s also other aspects, other technology that goes into making this all successful. So I’m curious how you guys work with partners like Intel to make some of these things that we’ve been talking about happen. David, if you want to take that one.

David Zhu-Grant: Certainly, yeah. So, obviously the hardware’s a really big part of it, but the software layer and stacks on top of that really unlock the features in the hardware. And I think that’s where Intel’s been a really great partner for us. So the vPro brand’s been really good for us. We’ve had good examples where Intel’s partnered with us, and they’ve helped us connect the dots between the hardware we provide, software—there might be software providers—and integration partners.

We had a good example where a customer needed some remote management—out-of-band management—and Intel stepped in and gave us the resources and the people to talk to. It really speaks to the value of vPro, and how it helped this customer solve a problem without having to go to really big out-of-band management solutions; vPro and AMT really helped with that.

So I think that none of these technologies works in isolation. It takes a lot of these interconnected systems and knowledge to successfully implement a winning IoT solution, and I think that’s where working closely with folks like Intel really helps, and ultimately the customer succeeds from that collaboration.

Christina Cardoza: Yeah, absolutely, especially looking at the robotic arm example that Jason mentioned. You know, you have all of these different sensors and technologies and software going into this to make it happen. So, Jason, is there anything else you can add about working with partners, the importance of collaboration, and using Intel technology, anything like that?

Jason Lu: Yes. At ASUS we develop motherboard and system solutions, right, but still based on the CPUs developed by Intel and other companies for embedded applications. So power efficiency is usually the most important factor: the less power the CPU consumes, the less heat you need to deal with, which makes it easier to put together a system solution.

Intel’s technology is usually on the leading edge. And on top of that, in embedded, once you invest a lot in development and validation, you want the product to be available for 7, 10, even 15 years. Intel is very good at providing longevity support for selected CPUs, and that’s why we picked those CPUs to base our development on as well.

And the other thing I wanted to bring up is that we, as a motherboard and system builder, put together a hardware environment; but on top of that you also have OSs, and then on top of that customers build their applications. At the OS level there’s still a lot of fine-tuning you need to do to optimize CPU performance. And Intel actually has a so-called development kit: they put together the recipe for that particular system, and then they point their customer to a particular hardware setup. With that kind of fine-tuned system, the application on top of it will be able to utilize the performance that the Intel CPU can deliver.

So I think that’s a very good collaboration between the companies to provide a solid foundation for IoT applications to build upon. And that’s something the customer can appreciate, because it saves a lot of time when they’re trying to deal with those drivers to get the performance up there—that does save a lot of development time.

Christina Cardoza: Yeah, absolutely. And it comes back to the conversation we were having a little bit earlier about future-proofing your investments—partnering with a technology ecosystem that, as the landscape changes, is also updating and keeping aware of what’s going on, so that any changes that do need to happen can happen easily and can scale to the needs we have for tomorrow.

So this has been a great conversation, guys. I think we are running a little bit out of time. So before we go I just want to throw it back to each of you guys if there’s any final thoughts or key takeaways you want to leave our listeners with today. Because it’s been a big conversation, and I’m sure there’s plenty more to cover, but is there anything else you want our listeners to take away from this conversation? Chad, I’ll start with you.

Chad Hutchinson: Thank you. You know, I’ll just say that when you’re operating at the edge out in the field, you’ve got harsh conditions that you’ve got to deal with. And whenever you’re trying to bring in any kind of technology, including IoT devices, you need to be thinking about those things upfront as part of your project, and figuring out how to deal with those challenges as part of your project development.

So do some testing upfront, after you’ve identified your component. Do your testing, figure out what challenges you have, and then go do targeted solutions for those. And I would encourage you to get in touch with a partner who has those capabilities, can bring some of that expertise to bear, and help you with your project.

Christina Cardoza: Absolutely. And, David, anything you want to leave our listeners with today?

David Zhu-Grant: I mean, kind of echoing Chad’s points there a little bit, I think if listeners have environments like those described today in the session, it’s really important to pick a partner—not just someone that’s going to sell you a box, but someone who is going to work through and understand the industry you’re in and the challenges you’re facing, and one that’s going to help guide you to the right solution as well. I think trust is really important. So, partnering with someone that’s reliable and trusted—from a knowledge perspective, from the quality of the equipment they produce, and just the hardware itself.

I think for us at OnLogic, we really try to focus on that: the right fit, the right solution, the right support—really looking at helping customers advance their ideas anywhere, basically. But it’s important for customers, too, and listeners, to pick that right partner. So I think that’s the key thing I would want listeners to take away from this.

Christina Cardoza: Yeah, absolutely. It’s not only about the environmental concerns, but you also want to make sure that you’re working with somebody that wants to see you get over those environmental concerns, but continue to be successful and add onto that. So, great final thoughts.

And, Jason, anything you want to leave us with today, or anything you think our listeners should really be thinking about as they go forward with rugged IoT devices?

Jason Lu: Yeah, I’ll echo what Chad and David mentioned—usually you would like to work with a partner. I think that’s a very important part of the recipe for a successful deployment. I just want to point out that I mentioned the ASUS IoT business group; I’m on the IoT side—that’s more on the hardware side. We actually also have AI, because a lot of IoT deployments have also started to incorporate AI capability. So we do have AI solutions that we can provide as well. Hopefully that can be beneficial to customers looking for this type of solution.

Christina Cardoza: Yeah, absolutely. And I would encourage all of our listeners, as you look to bring rugged IoT devices into your solutions, or to see how this can work for your business, to visit ASUS’s, Crystal Group’s, and OnLogic’s websites to keep up with their innovations and see how you can partner with them to make some of this happen.

I want to thank you guys all again for joining us today. It’s been a very insightful conversation. And thanks to our listeners for tuning in. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Builder Kits Speed Autonomous Mobile Robot Development

A robotics revolution is underway. Computer vision and edge hardware advancements are driving a shift from automated guided vehicles (AGVs) to autonomous mobile robots (AMRs)—and the change will transform the way we work.

Here’s why: AGVs and AMRs may be superficially similar, but they differ enormously in their capabilities. AGVs can follow only fixed paths and require the installation of magnetic tapes and tracks to function. They’re useful for simple transportation tasks in a warehouse—but not much else. AMRs, on the other hand, can operate independently in complex environments, process large amounts of sensor data at the edge, and make real-time decisions about what actions to take. These advanced capabilities mean AMRs will find use cases in multiple industries.

For example, logistics robots will be able to move pallets on their own, operate as forklifts, and perform complex picking tasks. In agriculture, AMRs can be used for precision seeding, irrigation and spraying, and fully automated harvesting. They will also be useful in heavy-duty applications like construction and mining, as they can handle jobs as varied as excavation, earthworks construction, demolition, paving, and roadwork.

The potential benefits to businesses and workers can’t be overstated. “AMRs offer all of the well-known advantages of automation: efficiency, accuracy, and the ability to perform repetitive tasks so humans don’t have to,” says Brandon Sokol, Marketing Specialist at Axiomtek, an industrial computing specialist that provides an all-in-one AMR development kit for SIs and OEMs. “In addition, their mobility makes them a rich source of real-time data for business intelligence and process optimization—and they will improve safety by keeping workers away from hazardous areas and taking over strenuous physical tasks that can cause injuries.”

In short, the market for AMRs is extremely promising. And the good news for SIs, OEMs, and systems architects is that they now can reach that market faster thanks to development kits offered by industrial computing specialists.

AMR Builder Kit Fast-Tracks Development

Axiomtek’s collaboration with a large manufacturer demonstrates how these builder kits can fast-track AMR solution development—even when extensive customization is required.

A prominent single-board computer (SBC) and industrial PC (IPC) manufacturer wanted to use autonomous mobile robots to transport materials between storage and production areas at its manufacturing facility. But several technical challenges had to be overcome. The manufacturer needed a solution that could support multiple high-resolution, low-latency camera sensors to improve obstacle detection and ensure safety and efficiency. The AMR also needed to be able to carry heavy payloads through extremely narrow spaces. And it had to be highly reliable because it would play a critical role in the production process.

Axiomtek worked with the manufacturer to design, test, and deploy a customized solution based on its AMR Builder Package. The vehicle body was equipped with LiDAR, ultrasonic sensors, and high-bandwidth stereo cameras. The company’s development support team designed in additional high-resolution video channels and upgraded the system’s main processor unit to handle the additional video-processing workload and ensure fast, reliable performance at the edge.

Axiomtek’s engineers also included a Wi-Fi / LTE / 5G module so the AMR could connect to the site’s IT network and provide real-time status updates. Private 5G’s high speed, low latency, and large-scale image transmission enable precise positioning—making data analysis and remote operation easier. Finally, they custom-designed a vehicle body with an ultra-compact form factor—but still rugged enough to carry loads of more than 100kg and perform reliably in a harsh manufacturing environment. The AMR specs include:

  • Anti-vibration: 5 Grms with storage (5 to 500Hz, X/Y/Z directions; random, operating)
  • -20°C to +70°C operating temperature for outdoor applications
  • Telematics for wireless communication and remote/fleet management

The AMR solution was rolled out successfully at the manufacturer’s facility and has met all expectations for performance in the challenging physical environment—navigating effectively through spaces with as little as 5cm of clearance on either side of the unit.

“By using an AMR builder solution that packages these elements together from the outset, developers and #SIs can reduce their design and integration efforts significantly and go to market faster” – Jerry Huang, @Axiomtek via @insightdottech

“AMR design tends to be difficult because of the sheer complexity involved: You have multiple sensors and components, AI computer vision and edge processing software, and of course the need to integrate and control all of that,” says Jerry Huang, Product Solutions Manager at Axiomtek. “By using an AMR builder solution that packages these elements together from the outset, developers and SIs can reduce their design and integration efforts significantly and go to market faster.” (Video 1)

Video 1. Axiomtek AMR Builder Package includes sensors, software, controller, and development support. (Source: Axiomtek)

Axiomtek credits its technology partnership with Intel for helping to streamline the development process. “Intel makes cutting-edge hardware that is ideal for AMR applications,” says Ryan Chen, Axiomtek’s Director of Engineering. “Their processors excel at computer vision and edge processing tasks, and Intel® RealSense cameras are an integral part of our solution because they offer exactly the type of high-resolution, low-latency video that AMRs need for object detection.”

“Intel offers powerful software development toolkits for use in AMR development, in particular OpenVINO and Edge Insights for Autonomous Mobile Robots (EI for AMR),” says Cynric Chiu, AMR Product Manager at Axiomtek. “These software tools have been essential in helping us bring our solution to market and in shortening development time for our customers.”
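
To make that concrete, here is a minimal sketch, in Python, of the kind of OpenVINO inference loop an AMR might run over its camera feed. It is illustrative only, not taken from Axiomtek or the EI for AMR samples: the model file, camera index, and confidence threshold are placeholder assumptions, and it presumes an SSD-style detection model already converted to OpenVINO IR format.

    # Illustrative sketch: an OpenVINO inference loop over camera frames,
    # the kind of obstacle-detection workload an AMR runs at the edge.
    # The model path and camera index are placeholders.
    import cv2
    import numpy as np
    import openvino as ov

    core = ov.Core()
    model = core.read_model("models/obstacle-detection.xml")  # assumed IR model
    compiled = core.compile_model(model, "CPU")  # or "GPU" on supported hardware
    _, _, h, w = compiled.input(0).shape  # assumes a static NCHW input

    cap = cv2.VideoCapture(0)  # camera exposed as /dev/video0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and reorder the frame to NCHW, as most IR detection models expect.
        blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
        # SSD-style output rows: [image_id, label, confidence, x_min, y_min, x_max, y_max]
        detections = compiled(blob)[compiled.output(0)]
        for det in detections.reshape(-1, 7):
            if det[2] > 0.5:  # placeholder confidence threshold
                print(f"obstacle: label={int(det[1])} confidence={det[2]:.2f}")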

AMRs of the Future

In the years ahead, SIs, OEMs, and systems architects will be able to take advantage of simplified AMR development to bring a greater number of customized solutions to market. But AMRs will not just become more prevalent—they will also evolve, says Chiu:

“Tomorrow’s autonomous mobile robots will be far more proactive than the ones we have now. We expect to see the next generation of AMRs include features like AI-driven learning and predictive maintenance. Those AMRs won’t exist in isolation, either. They will be part of an increasingly connected mesh of AIoT technologies that is already beginning to develop—a synergy that will add even greater value wherever AMRs are deployed.”

“The interconnectedness of AMRs with a wider AIoT ecosystem will help to optimize logistics, improve traffic management in smart cities, and enhance overall system efficiency across numerous industries,” says Chiu. “AMRs will become versatile tools adaptable to multiple sectors, from agriculture to healthcare, facilitating broader adoption and delivering safety, efficiency, and productivity benefits all over the world.”

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI Matures at Intel® Innovation 2023

Last year, Intel® Innovation 2022 focused on democratizing AI for everyone, and that commitment shone through at this year’s event. Intel® Innovation 2023 still had the same underlying mission to make AI more accessible to developers and partners, but showcased even more Intel® processors, software, and AI capabilities.

“AI going forward must deliver more access, scalability, visibility, transparency, and trust to the whole ecosystem,” says Pat Gelsinger, CEO of Intel.

Gelsinger demonstrated exactly how Intel is at the forefront of this next generation of AI with the announcement of new solutions and technologies infused with AI.

For instance, the company unveiled its Intel® Core Ultra processors, code-named Meteor Lake, expected to arrive December 14 with low-latency AI compute and new opportunities for AI experiences.

For Intel, the Core Ultra represents a lot of firsts:

  • First device built on the Intel® 4 process node
  • First to integrate a neural processing unit (NPU) AI co-processor
  • First to include a low-power island that independently executes certain workloads

Mix in heterogeneous tiles of on-die CPU and GPU cores, and in the Core Ultra you have what Intel believes will kick-start the “AI PC” era.

The AI PC Era Isn’t Just About Hardware

The Core Ultra (as well as the 5th Gen Intel® Xeon® processors, code-named Emerald Rapids, also available December 14) offers a glimpse into the workloads Intel believes will drive computing over at least the next half-decade—AI and computer vision. Look deeper, and the software releases at Innovation 2023 confirmed it, like the general availability of the Intel® Developer Cloud for building and testing high-performance applications, which also includes a free tier to help developers kick-start their AI development.

To streamline the development of AI apps, the Developer Cloud contains a variety of software tools, including Intel® oneAPI, which provides unified interfaces for programming heterogeneous compute nodes, and support for the OpenVINO toolkit, which was updated to version 2023.1 at the show.

The OpenVINO 2023.1 release comes packed with enhancements for PyTorch solutions, broader support for large language models (LLMs), and greater AI workload portability across edge and cloud. But a lot more has been going on under the hood, such as improved integration between OpenVINO and the Intel® Geti AI modeling suite, which was evident at the show. Geti is designed to help domain users build computer vision models in less time with less data. It accelerates dataset labeling, model training, and export while developers leverage OpenVINO to optimize models for deployment on Intel® silicon and streamline AI applications from end to end.
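
As a small illustration of that PyTorch path, the sketch below converts a stock torchvision model directly to OpenVINO and compiles it for inference. The model choice and input shape are arbitrary stand-ins, not an Intel-provided example.

    # Illustrative sketch: converting a PyTorch model straight to OpenVINO
    # (a workflow the 2023.1 release improves) and compiling it for inference.
    # ResNet-50 is an arbitrary stand-in model.
    import torch
    import torchvision
    import openvino as ov

    pt_model = torchvision.models.resnet50(weights="DEFAULT").eval()

    # convert_model takes the PyTorch module plus an example input tensor.
    ov_model = ov.convert_model(pt_model, example_input=torch.randn(1, 3, 224, 224))

    core = ov.Core()
    compiled = core.compile_model(ov_model, "AUTO")  # let the runtime pick a device

    logits = compiled(torch.randn(1, 3, 224, 224).numpy())[compiled.output(0)]
    print(logits.shape)  # (1, 1000) class scores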

AI Everywhere at Intel® Innovation 2023

The impact of AI could also be seen all over the Innovation 2023 floor as the Intel AI ecosystem showcased its new and innovative solutions.

Among the AI ecosystem partners on location at Intel Innovation 2023, exhibiting the potential of deploying AI everywhere on top of Intel technology, were edge data intelligence firm EPIC iO Technologies, video analytics company SAIMOS, intelligent surveillance system provider Irisity, AI-backed video monitoring company Vehant Technologies, and private 5G pioneer Juniper Networks. And those were just the AI partners.

Vendors from all walks of industry and vertical markets ranging from telecommunications to transportation displayed how AI empowers what CEO Pat Gelsinger calls the “Siliconomy”—a new era fueled by AI, ubiquitous compute, connectivity, infrastructure, and sensing. And they were all doing it on top of Intel technology that’s available today.

See what else you missed at Intel Innovation 2023, and learn what the era of “Siliconomy” means for the future of AI.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

New Retail POS Solutions Transform the Checkout Journey

Seeing checkout lanes shut or malfunctioning in stores understandably frustrates Matt Redwood, Vice President of Retail Technology Solutions at Diebold Nixdorf, a provider of retail and banking technology systems.

But it doesn’t have to be this way, Redwood maintains: “In-store POS solutions have evolved beyond basic point-of-sale cash registers. And retail stores can—and should—update their in-store technology to better serve customers.”

This is especially true post-pandemic, as shopping behaviors have changed: Consumer habits have converged, and shoppers now move fluidly between online and in-person channels. What has not changed is customers’ sky-high expectations. And since online shopping is just a click away, physical stores have had to invest in a major technology refresh and revolutionize the in-store experience to give consumers a reason to visit.

A seamless and prompt checkout process is part of that shopping experience. Today’s consumers are adopting self-service because they recognize that it gives them more control over the transaction. It also gives them other benefits like shorter queues because the throughput is much quicker. Retail businesses are reciprocating by offering a range of flexible and hybrid kiosks and checkout stations.

Flexible Retail POS Systems

Diebold Nixdorf offers retailers a variety of checkout solutions for transactions—from traditional POS systems to self-checkouts. Self-ordering kiosks, too, are migrating from the quick-service restaurant (QSR) and hospitality space into traditional retail, supporting scenarios like in-store ordering of individual bakery items or paint color selection in DIY stores.

The company also offers a modular checkout system that is especially attractive because retailers can convert a station to self-service based on factors like demand forecasts, in-store rushes, or labor availability. Rather than shutting checkout lanes because they don’t have staff, retailers can open them in self-service mode.

In addition, sophisticated AI-powered technology options help retailers tackle the biggest pain points at the checkout. For example, they reduce friction points like purchasing age-restricted products at self-checkouts, or finding and scanning loose produce items from an on-screen menu.

Placing more of the transactional process in the hands of the customer frees up staff to deliver a better consumer experience in other ways. Splitting tasks between attendant and consumer also can lead to faster checkouts. In clothing retail, for example, an attendant can remove security tags and bag clothes while customers pay for them.

Open Platform for POS Innovation

The BEETLE platform is the foundational element of Diebold Nixdorf’s portfolio and underpins every retail touchpoint. “Once a retailer invests in the solution, they have the full flexibility of not only all our traditional POS systems but all our self-service and kiosks running off that same platform,” Redwood says.

The company’s retail technology solutions are flexible, support multiple operating systems, and remain compatible with new components as older parts get phased out. Retailers are focused on achieving the highest possible availability for their POS systems, and Diebold Nixdorf delivers 99.8% availability when its systems are paired with the company’s services platform.

Diebold Nixdorf’s open software platform enables retailers to connect to various legacy solutions and quickly implement new scenarios. The Intel processor-powered BEETLE solution enables the software and hardware to be dissociated from each other so each can be upgraded as needed without affecting the other. “Ongoing management costs of running one of our products in the field is lower than a lot of our competitors because there’s less software work that needs to be done when migrating to a new component,” Redwood points out.

Retail Technology at Work

Implementation of BEETLE has helped retailers provide more seamless service across verticals while making operations easier to manage.

For example, one European retailer wanted to implement a complete store transformation to ensure all aspects of its business—grocery, clothing, and home—would work together. The company chose the BEETLE line to establish a common hardware platform, in different form factors, across all three parts of the business, making the architecture easier to maintain.

Similarly, a grocery retailer that developed its own proprietary software benefits from the modularity of the BEETLE platform and its broad software compatibility.

For now, thanks to systems like BEETLE, retailers don’t have to be wedded to their outdated POS systems. “Retail development cycles are much quicker today, and the BEETLE platform evolves with them,” Redwood says. “You don’t have to suffer with out-of-date technology anymore. You can upgrade it to always make sure you’re providing the latest and greatest technology for your customers.”

If Redwood and the Diebold Nixdorf team have their way, neither he nor the rest of us will have to wait in endless checkout lines again.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

IoT Edge Computing: The Road to Success

IoT edge computing is known across industries for its ability to bring computation closer to where the data involved is generated. This allows for the real-time insights, high performance, and low latency that businesses need to succeed today. And what business wouldn’t want that? But some industries, like manufacturing, may have a more difficult digital transformation journey before them—with legacy infrastructure and little or no downtime to make changes. And some organizations are unsure how best to bring this transformation about, with all its various advantages and complexities.

For guidance on this journey, we turn to two people with deep backgrounds in IoT edge computing and extensive knowledge of the field: Martin Garner, Head of IoT Research at CCS Insight; and Dan Rodriguez, VP and General Manager of the Network and Edge Solutions Group at Intel. They’ll take us through some of the challenges and opportunities involved, and remind us that no one has to—no one should—go it alone (Video 1). In addition, CCS Insight has made available its recent IoT Initiatives to Scale Industrial Edge Computing report to insight.tech subscribers. Check it out, and get your ticket to digital transformation.

Video 1. Martin Garner from CCS Insight and Dan Rodriguez from Intel discuss the state of edge computing, how to overcome challenges, and key opportunities to look forward to. (Source: insight.tech)

What can you tell us about the benefits of edge computing today?

Dan Rodriguez: Edge compute is driving an incredible amount of change across all sorts of industries. And it’s fueling digital transformation, as businesses aim to automate their infrastructures and improve operational efficiency. AI, along with the advent of 5G, will only accelerate this trend.

Companies want to have more control. They’re seeing supply chain challenges, unstable energy production, sometimes labor-force shortages. They want to find ways to optimize their operations, their costs, and their data. So there’s a lot of opportunity for edge AI here. It can also provide new monetization opportunities. Because companies are looking to save money and manage their TCO, but also of course to make money.

If you think about one industry—manufacturing—we’re already seeing customers start their AI journeys. Initially they’re utilizing it for simple things, such as supply chain management, with autonomous robots stocking or pulling inventory. And then quickly advancing into employing computer vision and AI for things like defect detection.

What is the state of edge computing, and what challenges still remain?

Martin Garner: A huge amount of it already exists across all industries, including quite a lot that you might not even think of as edge computing. And that highlights a couple of things about the whole space.

One is that it’s very broad. It runs from sensors all the way out to local-area data centers. It’s also quite deep. It goes from the infrastructure at the bottom up through the networking, through applications, to AI. And because of those two things, it’s quite a complicated area.

In adoption terms, we think there are several big drivers. One is IoT. High volumes of data are being generated there that need to be analyzed and dealt with in near-real time using machine learning or AI. Also, the telecoms have recently become very interested in the subject as suppliers, with multi-access edge computing and private networks. Last, there’s the economic climate. Many companies are reviewing their cloud spend, and that is a bit of a spur to do more with edge computing.

Tell us about the different opportunities for edge computing across industries.

Dan Rodriguez: Let’s talk about retail. One of the biggest costs that retailers have is theft; believe it or not, it’s a $500 billion-a-year problem. The use of computer vision with AI can attack this problem, helping to prevent theft at the front of the store, i.e., at the checkout area; in the middle of the store, where you sometimes get in-aisle shoplifting; and even in the back of the store, where there can be theft in warehousing and distribution centers.

And when you think about how retailers make money, they can utilize AI in all sorts of new and interesting ways. The shopping experience, for example: It can provide feedback on different merchandising-display strategies. It can quickly identify when items are out of stock on the shelf. Sometimes very simple things can really lead to better results.

Then consider manufacturing and industrial edge computing; it’s going through a massive transformation in the types of infrastructure that are deployed. Generally speaking, manufacturers are moving away from what I would call fixed-function appliances—appliances that do one thing very, very well—to more software-defined systems that are easier to manage and upgrade.

So diverse kinds of manufacturing processes are being streamlined onto fewer and fewer software-defined platforms, which increases overall efficiency and reduces the infrastructure’s complexity. And once you have this software-defined infrastructure in place, then you start combining it with the use of robots, with sensing, with 5G and AI. Then you can do all sorts of magic across a factory floor, helping with everything from inventory management to defect detection.

What challenges do manufacturers face when it comes to industrial edge computing?

Martin Garner: The opportunities are huge, but honestly there are a few challenges, and some of those are faced by anyone using edge computing.

The first one is scale. Industrial edge computing is one of those technologies where it’s quite easy to get started and do a few bits and pieces, but as soon as you scale it up, it all becomes trickier. The larger players will have thousands of computers on tens of sites across multiple geographic regions. And they have to keep all of that updated, secure, and synchronized as if it were a single system in order to make sure they’re getting what they need out of it.

Linked to that, with a large estate of edge computing, you end up with a really big distributed-computing system, with things like synchronization of clock signals, of machines, of data posts into databases. On top of all that there are different types of data going through the system and a different mix of application software—some cloud, some multi-cloud, some local data. All of that needs a complex architecture.

There are also a couple of challenges that are probably specific to the manufacturing and production industries. One is real-time working, which is a special set of demands that, by and large, IT doesn’t have. There are feedback loops measured in microseconds; there are chemical mixes measured in parts per million. Timeliness and accuracy are incredibly important. And what’s really important is that it’s a system-level thing—not just one component but the whole system has to cope with that.

And then there’s the robustness of the system. Many factories work three shifts per day, nonstop, 365 days a year. An unplanned stoppage is a really expensive thing—millions of dollars per day in many cases. All of the computing has to support that with things like system redundancy, hot standby, and automatic failover. That’s so if something goes wrong, the system doesn’t stop. That means doing software patches and security upgrades live, without interrupting or rebooting the systems at all. It also means that if you need to expand the hardware—say you want a new AI—you’ve got to be able to put that in without stopping the production line.

So hardware and software need to be self-configuring, and adding them cannot break anything else. Again, those are constraints that IT doesn’t have, but in the industrial area they are things that just have to be worked with.

And how can manufacturers approach those challenges most successfully?

Martin Garner: The first thing we would recommend is don’t build your own infrastructure. It’s too slow, too much resource, too expensive over time, and it’s a specialist area.

The second thing is to design the system around modern IT and cloud-computing practices. It should be almost seamless across them. And there are lots of good technology frameworks to choose from, so most of the customer-design work can focus on the application level.

Third, in the operations-technology world, equipment and software lifetimes are typically 10 to 20 years. We think with edge computing it’s sensible to plan for shorter lives, 5 to 10 years. The data volumes are going up and up, and the more data you get the more you want to do with it, and the more you can do with it. So you’re going to need more AI, more edge computing capacity, and you’re going to have to expand what you have quite quickly.

How are you seeing manufacturers approach this type of technology?

Dan Rodriguez: As I mentioned before, the first part of the journey is the movement away from fixed-function appliances to more of a software-defined infrastructure. Imagine if you had to have a specific phone for each application you used; that would be really difficult to manage. It’s the same thing on a factory floor. Think how much the complexity would be reduced if more applications were loaded onto fewer software-defined infrastructures.

In the future, servers will host many, or even most, of these software workloads. Then you’ll be able to provide automated updates in a much more controlled way, and they’ll be much easier and more efficient to operate and maintain. You’ll also be able to layer on all sorts of new capabilities.

Give us some specific examples of where these approaches have been used.

Martin Garner: One example highlights the scale issue. A very large university hospital was installing a mesh network to keep track of ventilators and other key equipment, and to gather information from sensors. They did a trial with battery-powered nodes that went well, and they loved it. But they realized that, as they scaled it up to the whole hospital, they would have thousands of devices with batteries to monitor. They would always be changing batteries somewhere, risking dangerous outcomes if it wasn’t done thoroughly. So they asked the supplier to produce mains [grid]-powered versions instead.

The lesson that came out of that for me was that from the start, suppliers have to design to the scale they’re going to face in the end. And customers need to think big in that design phase, too. As Dan mentioned, it is a journey, and you learn a lot as you go through.

Tell us about the importance of partnerships to achieving these goals.

Dan Rodriguez: Intel creates a diverse ecosystem that utilizes both open and standard platforms. And having an ecosystem like this is incredibly important for the overall health of the market; the community will not only have a lot of vendor choice, but it also increases the overall innovation spiral.

Martin Garner: Edge computing is broad, deep, and complicated, as I mentioned earlier. Very few customers can take all of that on. Very few suppliers can take it on either because they tend to specialize. Actually, most of the systems we’re talking about will need to be designed with three to five players involved. And I think that’s the expectation we should all bring to this—that it’s going to be a team effort.

How do you see the role of IoT edge computing in industrial environments evolving?

Dan Rodriguez: The first phase is that migration toward software-defined infrastructure, where workload consolidation supports multiple applications on fewer and fewer servers or devices.

And then obviously, generative AI is all the buzz right now, and over time it will be incorporated into this strategy as well. It’s going to be super exciting to see all the gains in production, the reduction in defects, and also the use of new simulation and modeling techniques in that factory of the future.

Martin Garner: A couple of things came out of our report that are not so big right now, but you can see them coming.

The first one is around mission-critical manufacturing processes, where any unplanned downtime, as I said earlier, is really expensive. A key question there is about how to learn from what’s gone wrong. The aircraft industry has always been quite good at this. The aim is to make systems more and more resilient by ensuring that failure modes are understood and mitigated. Then new scenarios are built up to cope better under fault conditions. That looks to us like an important area for more general use across manufacturing.

Another one is linked to industrial robustness. If an application can run on one machine and automatically switch over to another one if there’s a failure, you have to ask—which is the optimal machine for it to run on normally? And you realize that optimal could mean fastest, it could mean lowest latency, or the highest uptime, or cheapest on capital costs, cheapest on operating. It’s all about optimizing the system in different ways for different things. We haven’t found anybody who’s actively exploring this yet, but we do expect it to become a thing in edge computing fairly soon.

Any final thoughts or key takeaways you want to leave us with?

Martin Garner: It’s a bit of an analyst cliché to say, “Oh yes, but it’s complicated.” But edge computing actually is complicated, and I think many companies see it and get it, but it still feels quite a long way away. From our point of view at CCS Insight, we think it’s key for customers to just get started, working with a few, carefully chosen partners.

At the start you should be fairly ambitious in how you think about all of it and what scale it could get to—knowing you won’t get there all in one go. You’ll probably find, though, that it’s not the technology that’s the limiting factor; it’s probably the organization. You will need to invest at least as much time and effort into bringing the organization along as you do in working out what technology, and how much of it, to use.

Dan Rodriguez: First, edge computing is fundamentally changing nearly every industry. And second, when you combine edge computing with AI and 5G, it’s driving a lot of transformation, which is truly creating a massive opportunity—everything from precision agriculture to sensory robots to cities that intelligently coordinate vehicles, people, and roads.

Third, I do strongly believe that industry collaboration and open ecosystems are fundamental to all of this. As Martin mentioned, it’s going to be a team sport, and multiple players will be needed to drive these solutions and implement them in a way that’s easy for customers to consume the technology and easy for them to scale the technology. And Intel is truly invested in driving this unified ecosystem.

Related Content

To learn more about adopting edge computing, read IoT Initiatives to Scale Industrial Edge Computing and listen to Industrial Edge Computing: Strategies That Scale. For the latest innovations from CCS Insight and Intel, follow them on Twitter @ccsinsight and @intel, and on LinkedIn at CCS Insight and Intel Corporation.

This article was edited by Erin Noble, copy editor.

Gaining Location Intelligence with AI and Digital Twins

Location intelligence is already a part of our everyday lives, from using our phones to get directions to finding a nearby restaurant. But businesses now are starting to see the transformative potential of location intelligence.

In fact, 95% of businesses now consider location intelligence to be essential, thanks to its ability to lower costs, improve customer experience, and enhance operations. But many businesses struggle to get the most out of their location data because it’s often siloed in different departments or systems.

AI and digital twins can help businesses to break down data silos and create a single, comprehensive view of their spaces in real time. AI can be used to analyze large volumes of location data and identify patterns and trends. Digital twins are virtual replicas of real-world objects or environments that can be used to track and monitor changes over time.
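
To make the time dimension concrete, here is a toy sketch of a digital-twin record that keeps timestamped positions, so an application can query the present state or rewind to an earlier moment. All names and structures here are invented for illustration.

    # Toy illustration (all names invented): a tracked asset that stores
    # timestamped positions, so an application can query where it is now
    # or "rewind" to where it was at an earlier moment.
    from bisect import bisect_right
    from dataclasses import dataclass, field

    @dataclass
    class TrackedAsset:
        asset_id: str
        history: list = field(default_factory=list)  # (timestamp, x, y) tuples

        def update(self, t: float, x: float, y: float) -> None:
            self.history.append((t, x, y))  # assumes updates arrive in time order

        def position_at(self, t: float):
            """Return the last known (x, y) at or before time t."""
            i = bisect_right([ts for ts, _, _ in self.history], t)
            return self.history[i - 1][1:] if i else None

    forklift = TrackedAsset("forklift-07")
    forklift.update(0.0, 1.0, 2.0)    # at t=0 the asset is at (1, 2)
    forklift.update(5.0, 4.0, 6.0)    # at t=5 it has moved to (4, 6)
    print(forklift.position_at(3.0))  # -> (1.0, 2.0): its position at t=3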

In this podcast, we discuss the importance of location intelligence, the use of AI and digital twins for tracking and monitoring, and how to implement AI-powered tracking and monitoring safely and securely.


Our Guest: Intel

Our guest this episode is Tony Franklin, General Manager and Senior Director of Federal and Aerospace Markets at Intel. Tony has worked at Intel for more than 18 years in various positions such as General Manager of Internet of Things Segments, Director of IoT/Embedded Technology Marketing and Communications, and Director of Strategic and Competitive Analysis.

Podcast Topics

Tony answers our questions about:
• (1:49) The importance of location intelligence
• (3:56) Businesses’ challenges with achieving real-time insights
• (6:19) The role digital twins and artificial intelligence play
• (8:50) Tools and technologies necessary for success
• (11:21) Using Intel® SceneScape, OpenVINO™, and Geti™ for location intelligence
• (17:19) Addressing privacy concerns with AI-powered tracking and monitoring
• (19:49) Future advancements of AI, digital twins, and location intelligence

Related Content

To learn more about the importance of location intelligence, read Monitor, Track, and Analyze Any Space with Intel® SceneScape. For the latest innovations from Intel, follow them on Twitter @intel and on LinkedIn at Intel Corporation.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And today we’re going to be talking about using AI and digital twins to track and monitor environments with Tony Franklin from Intel. But before we jump into the conversation, let’s get to know our guest a bit more. Tony, thanks for joining us.

Tony Franklin: Thank you. Thank you for having me.

Christina Cardoza: What can you tell us about yourself and what you do at Intel?

Tony Franklin: Sure. I am in what’s called the Network and Edge Group, and so we are responsible for all those things and devices that are connected outside of data centers and traditional cloud enterprises, which, again, are usually areas that are in our daily lives. I specifically am responsible for the federal and aerospace markets, but what’s interesting about this particular market is when you think about federal governments, they have every application, every market—they have retail, they have manufacturing—they have, you name it—they have healthcare, etc. So it’s a pretty interesting space to be in.

Christina Cardoza: Yeah, absolutely. And monitoring environments in all of those areas that you just mentioned becomes extremely important. Like I mentioned in my introduction, we’re going to be talking about tracking and monitoring environments with AI and digital twins, also known as gaining this location intelligence. And we’ve talked about digital twins on the podcast before, but in the context of, like, a healthcare environment or a manufacturing environment, where you’re simulating these environments before you’re adding something to them.

So, tracking and monitoring in these situations for location intelligence sounds more like real time rather than simulation. So can you tell us a little bit more about how you’re seeing digital twins being used in this space, and what’s the importance of location intelligence today to do this?

Tony Franklin: Yeah, absolutely. What’s funny is I think we’re used to location intelligence without even knowing it’s there. I mean, we all have maps on our phones. Anybody that’s had children, many times they use the Life360 app and you know exactly where someone’s at. You know how fast they’re moving, you know how long they’ve been there.

I literally was just reading an article last night, and while they didn’t say it, I know they were talking about UPS and how important location intelligence is to just things like sustainability, right? They said 27% of gas emissions in the US are from transportation. And for them one mile—they looked at literally one mile on a map—and it could cost them $50 million if the location wasn’t accurate to get from point A to point B.

And so we are just used to it in our daily lives. I think on the business side, like a UPS, we are starting to understand how impactful it can be financially more and more. And in addition to location intelligence, I think what we’re starting to really understand is time-based spatial intelligence. So it’s not just the location, but do we really understand what’s going on around us, or around that asset or object or person in the time that it’s in.

And so digital twins allow you to recreate that space, and then also understand at the particular time—like you said, we’re talking about more real time, in general, but really adding on to the type of digital twins you were talking about. So, both real time, and, if I need to hit the rewind button and do analysis, I can do that also.

Christina Cardoza: Yeah, absolutely. And we’ll get more into how to use digital twins for these types of capabilities. But you bring up a great point. A lot of these things are already just naturally integrated into our everyday lives as consumers, but then the advances in the technology, the capabilities keep evolving, and businesses want to take even greater advantage of all of these.

So I’m curious, on the consumer side it comes so naturally to us, but what are the challenges that businesses face to implement this and to really gain all the benefits of location intelligence and the technology that helps them get that?

Tony Franklin: Yeah. I think today one of the biggest challenges is just siloed data, and the siloed data coming from having a number of applications. Again, I’ll use the consumer side because it’s easy for us to relate. We have a ton of apps on our phones, but they work on that phone, they work together. It doesn’t mean the data that comes in between the apps works together.

And so in businesses many times I’ll have an app to monitor my physical security, but I’ll have another app to monitor, say, the robots that are going around in a factory. And they all have cameras, they all have sensory data, but they’re not connected. I may have another app that’s taking in all of the weather data or environmental data within or outside of my particular area, but all of this data is sitting in different silos. And so how do I connect this data to increase my situational awareness and to be able to make better decisions? And with better decisions ideally I’m either saving money, I have an opportunity to make money, I’m creating a more safe environment—maybe saving lives or just reducing injuries, etc.

And so I think that’s one of the biggest challenges. I think the other challenge is sometimes a mental shift. Like you said, we’re so used to it on the consumer side, and I do think some of this is changing with demographics. Like I think about my son, where I look at the video games they have, and they are so realistic. And a lot of the technology that we’re using, or that we’re used to, is coming from games. Within games you can see everything in the environment—it’s 3D. I know location, I have multiple sensory data that’s coming in—whether it’s sound and environmental and etc. And all of that is integrated. And so we’re just more and more starting to want to incorporate that into business, our day-to-day lives and business, and use that to actually have a financial impact.

Christina Cardoza: Yeah. And building on that video gaming example, it’s 360—you can look around and you can see the entire environment everywhere. But especially with the data silo I think there’s also just so much data that businesses can have access to today. So it’s the addition of having data in all these different places, being able to connect it, but then also being able to derive the valuable insights from it. Otherwise it’s just an overwhelming load of data coming at you.

Tony Franklin: That’s right.

Christina Cardoza: So how do you see the artificial intelligence and digital twins in this context being able to overcome those challenges, overcome the data silos, and really give you that real-time location intelligence that businesses are looking for?

Tony Franklin: Yeah. What’s valuable about digital twins is it naturally creates an abstraction, right? So you’re re-creating essentially—obviously we know a digital twin is a digital replica of the real world. And so what you’re generally doing analysis on is the replica, not the actual data coming in. So you’re doing analysis on the data coming in, but then you’re also applying it to this digital replica. And so that digital replica now can make the data available to multiple applications.

Now you need to be able to use standards-based technology—standards-based technology that allows you to communicate with multiple applications so that data’s available to them, and that allows you to apply different types of AI. You may need one type of AI to identify certain animals or certain people or assets; you may need another to identify different cars or weather or more physics-like models. So, understanding the environment better.

So you need an application that allows you to be able to inject that data. And so by having that replica that allows you to expose the data, it also—from a data silo perspective—keeps you from being locked into a particular vendor. What’s interesting is there are applications out there.

I like to use some of the home-monitoring systems as an example. I can buy a door camera or an outside camera, but it’s all within the context of that particular company—rather than, oh, I could buy the best camera for outside and a better camera for inside, and I can use a different compute model, whether it’s a PC or whatever I want to use—something that opens things up, gives me flexibility, and makes that data more available. And, again, with the digital twin that data can come in, I can replicate it, I can apply AI to it, etc., using the other technology that’s available to me.

Christina Cardoza: So you mentioned using standards-based technologies. Obviously when businesses want to implement AI and digital twins they need some technology, some hardware, some software to help them do this. And I know on the AI side Intel has OpenVINO and the Geti toolkit. Do you see those technologies being used to gain this location intelligence? And also, what technologies are available for businesses to take advantage of digital twins and successfully deploy these sensors and capabilities out in their environments?

Tony Franklin: Yeah, absolutely. So, you mentioned those two products. And when you think about the AI journey that customers and developers have to have to go on—so, like you said, there’s a ton of data. So you need to be able to label the data to make the data available to you—whether it’s streaming data, in this case if we’re talking real time. Then you have OpenVINO, which will allow you to apply inference to that data coming in and to use a range of compute technologies—you know, pick the best compute for the particular data coming in.

You then mentioned Geti on the other end, where—well, it’s really on both ends—where I’m bringing this data in, I’m applying inference, I’m continuing a loop, and Geti allows you to train the data, which you can then apply back on the front side for inference when you actually implement it. And it allows you to do it quickly instead of needing necessarily thousands and thousands of images, if we’re talking images for computer vision—you can do it with fewer images, and everyone doesn’t need a PhD in data science. That’s what Geti is for.

And in the middle we have something called Intel SceneScape, which uses OpenVINO. So it brings in the sensor data; OpenVINO will then apply inference so I can do object detection, I can do object classification, etc. I can use the Open Model Zoo, which is a range of models from all of the partners that we have and work with. Then I can implement that with SceneScape, and then I can use Geti to take this data to continue training and then to apply the new training data.

So, again, it’s a full spectrum. Back to your question about AI—like you said, there’s a ton of data, and these allow you to simplify that journey, if you will, to make that data available and usable in an impactful way—something that’s measurable.

Christina Cardoza: I always love the examples of OpenVINO and Geti. Because obviously AI is a big thing that a lot of businesses want to achieve and do, and they don’t have a lot of the knowledge or skillset available in-house, but with Geti it connects developers and business users together. So business users can help out and be a part of it and make these models, and developers can focus on the priority AI tasks.

But tell me a little bit more about SceneScape, because I think this is one of the first times I’m hearing about this technology—is this a new technology from Intel? And what is the audience? Like, with OpenVINO you have developers; with Geti you have business users. Who is the end user for Intel SceneScape, and where do you see the biggest opportunities for this technology?

Tony Franklin: Yeah. Like Geti, it’s for the end users, and it’s really a software framework that relies on our large ecosystem that Intel has and that we work with. And so, like OpenVINO and like Geti, it’s intended to simplify making sense of the data that you have, like you said, without needing a PhD necessarily. In the case of SceneScape—if you think of SceneScape sitting in the middle of OpenVINO and Geti. And, again, it definitely uses OpenVINO, but it can use both. It really simplifies being able to create that digital twin; it simplifies being able to connect multiple sensor types and making that data available to multiple applications.

So a simple way I put it is it allows you to use any sensor for any application to monitor and track any space. That’s essentially what it does. So whether you have—obviously video is ubiquitous. We’re so used to video—we’re on video right now, so we’re used to video.

But there are other sensors that allow you to increase situational awareness for your environment. You could have LiDAR, which all of the electric vehicles and autonomous vehicles have. You can have environmental sensors, temperature, etc. We’ve even heard of things like radiation sensors, sound sensors, etc. Bring in all of that data, as well as text data.

Some retail locations actually want to be able to scan data too. We know when you go to the grocery store they have all the labels. I want to scan that, but I want to associate it with the location and the environment where that particular food item is.

And then we usually take 3D maps, though they could be standard 2D maps as well. You can do that with your phone; with most iPhones today you can take a 3D map of the environment. Some people don’t even know you can capture a really nice 3D environment with your phone, or there are public companies that do it, or you can use simple things like Google Maps.

Our lead developer actually just uses Google Maps, and he uses SceneScape for his home monitoring, with whatever camera he wants to use, and he uses AI to tell the difference between, say, a UPS truck and a regular truck going by. And so, again, that’s AI. These tools are allowing the end users—and, from an OpenVINO perspective, the developers—to implement AI technology easily, in an open, standard way, and to leverage the best computing technology underneath it.

Christina Cardoza: Yeah, I love that, because obviously, like AI and digital twins, businesses hear all the benefits about it, but then it can be intimidating—how do you get started? How do you successfully implement this? So it’s great to see companies like Intel that are really making this easy for business users and making it more accessible to start adding some of these things.

You mentioned some sensors that these businesses can add with these technologies. And in the beginning we talked a little bit about the different industries, especially in the federal space, where you can apply some of these. So I’m curious if you have any case studies or industry examples you can share with our listeners about exactly how you’ve seen this technology put in place. What were the problems, the solutions, the results, things like that?

Tony Franklin: Yeah, sure. I’d say before specific examples, the one common need or benefit that’s available to the customers that have been using SceneScape is they need to understand more about the environment—either that they’re in or that they’re monitoring. That’s one. And they need to be able to connect the sensors and the data and make it available. So generally they already have something, they’re monitoring something, and they want to increase the use of that data and additional data, and, again, let them get more situational awareness from it.

Some examples—think about an airport. That’s a common area we’ve all flown through. You go to the airport, they need to track where people are congregating, they need to track queue times. How long are the lines? In some cases, particularly in the early stages of Covid, they needed to track some bodily measurements—they have the forehead sensors when you come into some of the TSA areas—and make sure you’re socially distanced. Is there lost luggage? You can track whether luggage has been sitting someplace too long with no one near it.

So that’s another situation where, again, we have a number of sensors you are already monitoring—airports have spaces that they’re already monitoring, but now they need more information and they need to connect this data. This sensor that’s looking at the forehead generally isn’t connected to the cameras that are looking at the queue line. Well, now they need to be; now they need to be connected.

And I don’t need to just look at Terminal A, Gate 2, if you will. I need all the terminals, and I need all the gates, and I need to see it in a single pane of glass. And that’s one of the benefits that SceneScape allows you to do. It builds up the hierarchy, and it really associates assets and objects. So it helps to build relationships between—oh, I see this person and I see they’ve been in line for 30 minutes, but I see that they have a high temperature but they’re not socially distanced. Or I see this person was with a bag and they were moving with the bag, and now the bag stopped but they kept moving and the bag is sitting stationary. So, again, it helps you with motion tracking in the particular environment. So that’s one general example that we all usually can understand is at an airport.

Christina Cardoza: Yeah, I love those examples. When you think about cameras and monitoring and awareness, it’s typically associated with security or tracking. And this is really to not only help with security but to help with business operations. Like you said, like somebody waiting in line, they can deploy more registers or have somebody go immediately over to somebody who’s been waiting too long.

I know one concern that people do have is when they are being tracked or when cameras are on them is just making sure that their privacy is protected. So can you talk a little bit about how Intel SceneScape does this? Like you said, it’s people tracking or behavior tracking, but not necessarily tracking any identifiable information.

Tony Franklin: Right, our asset tracking—and we actually don’t do anything like facial recognition or anything like that—but what we actually deploy, we’re just looking at detecting the actual object. Is it a person, is it a thing, is it a car? And so we want to identify what that is: we want to identify distance, we want to identify motion. But, yes, privacy is absolutely important. So we’re inferring the data but then allowing the customers—based on their own application—they can implement what they choose to for their particular application. And to your point, they can do it privately today or in the future.

One of the use cases I’m still waiting for all of us to be able to implement is the patient digital twin. I have a doctor’s appointment this afternoon, actually. And for anybody that’s been to the doctor, you’ve got different medical records at different places, and all of the data’s not together, and they’re not using my real-time data with my historical data and using that against just reams and reams of other medical history from across many patients to apply to me.

So I would love to see a patient digital twin that’s constantly being updated; that would be ideal. But today we have applications that, maybe we’re not quite there yet, but how about just tracking instruments, just the medical instruments. Before surgery, I had 10 instruments, and I want to make sure when I’m finished with surgery, I have 10 instruments—they’re not inadvertently left someplace where they shouldn’t be. And so that’s just basic.

Where are the crash carts in the hospital? I want to get to the crash carts as quickly as possible. Or where are those check-in carts, where before you can actually get anything done you have to pay for the medical services to come? They don’t let you go too far before you actually pay. So where are those? There are immediate applications today that can help, as you said, with business operations. And then there are the future-state ones, which I think we’re all waiting for: I want my patient digital twin.

Christina Cardoza: Absolutely. It all comes back to that data-silo challenge we were talking about. I can’t tell you how many times at a doctor I forget to give them my historical information, like my family history, just because I’ve given it so many times I expect it to be on file. And then I’ve mentioned it and they’re like, “You didn’t tell us that.” “Well, I told you last time, or I told my last doctor.” So, definitely waiting to see that also.

And it seems like every day AI and like digital twins, it’s changing. Capabilities are being added; it’s rapidly evolving. So where do you think this space is going to go? How do we get to a place where it is more widely adopted and we do see some of these use cases and capabilities that we are looking for and that would really improve lives?

Tony Franklin: I think it’s coming. I think it’s one of these technological evolutions, I’ll say. I won’t call it a transformation, but an evolution that at some point is just going to hit a curve. We’re just so used to it. I mean, how many people use Amazon’s Alexa, Apple’s Siri, and Google Earth? The cars that are driving around have more sensors in them now than they ever had. They’re like driving computers with tires on them, basically.

And so it’s as if it’s happening, but we’re not always consciously paying attention to it. I mentioned to somebody the other day, I said, “I don’t remember ever asking for an iPhone, but I know I use it a lot.” And now that I have it, I don’t know that I could actually live without it. And so I think companies are starting to realize—wow, I can de-silo my data, and as I make relationships or connections between the data that I have, and between the systems that I have across the range of my applications—not just in one room, not one floor, not one building, but a campus, just as an example—I actually can start to get real value, and that can impact my business. Again, my bottom line—I can make more money, I can save more money.

I think about traffic as a use case. It could save lives. One example we talk about often—and we’re seeing customers look at SceneScape with this application—is many cars today, they have the camera sensors. You just think about backup sensors or the front cameras and LiDAR, etc. And most intersections have the cameras at the actual intersections. They don’t talk to each other today.

Well, what if I have a car that’s coming in too fast, and I have a camera that can see pedestrians coming around a blind spot? I want the car to know that and start braking automatically. Right now, most cars will automatically start braking if I’m coming up on another car too fast. But they can’t do that for a person coming around a corner that the car can’t see. That’s an application that can be deployed today, and cities are actually looking at those types of applications today.

Christina Cardoza: Yeah. I love all of these examples because they’re so relatable, you don’t even realize it. Like you mentioned the sensors in your car. I go into a different car that’s maybe an older model than my car, and I expect it to have the backup camera or the lane-changing alert, that I’ll just go into the lane, and I’m like, “Oh, that was bad.” Because I rely on technology so much. So I can’t wait to see how some of these things get implemented more, especially with Intel and Intel SceneScape, and how you guys continue to evolve this space.

But unfortunately we are running a little bit out of time. So, before we go, I just want to throw it back to you one last time, if there’s any final thoughts or key takeaways you want to leave the listeners with today. What they should know about what’s coming, or location intelligence, the importance of digital twins, anything like that?

Tony Franklin: Yeah, I’m going to piggyback off of something you just said. You get into the car and, you know, it has the lane change, and you’re just so used to everything around you. But we take for granted that our brains do that. We know how fast the car is going; we know whether somebody’s coming. Cameras don’t just know that. They can see it, but they don’t know how far it is, necessarily. They don’t know how fast the car is going.

And that is why AI and the integration of these technologies and sensor data is so important. It allows now these systems to be more intelligent and to actually understand the environment. Again, time-based spatial intelligence—distance, time, speed, relationships between objects. And that’s exactly what we’re working on.

You mentioned some of the technologies that we have. We’re working with our large ecosystem and community, and so we just want to make it continue to be easy for these companies to implement this technology in an easy way and have actual financial impact on their companies. So it’s an exciting time, and we’re looking forward to helping these companies make a difference.

Christina Cardoza: Yeah, can’t wait to see it. And, like you said, I know you guys work with a bunch of ecosystem of partners in all of these different areas. So, looking forward to seeing how you partner up with different companies to do this and make this happen. So, just want to thank you again for the insightful and informative conversation today. And thanks for our listeners for tuning in. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Partnerships Fast-Track AI-Powered Quality Inspection

Edge AI and computer vision are a natural fit for real-time quality inspection on the production line. With greater accuracy and speed than the human eye can achieve, AI and machine vision solutions catch even the smallest product anomalies.

But it can be time-consuming and costly for manufacturers to develop and deploy these solutions on the factory floor. So how can businesses move faster to gain advantage with the latest technologies? One answer is collaboration with an ecosystem of partners.

For example, BlueSkies.AI and Lenovo Group joined forces with Intel to develop secure and scalable quality inspection solutions. And as these companies all partner with Global Solutions Distributor BlueStar, Inc., VARs and systems integrators gain new opportunities to serve their manufacturing customers. BlueStar provides a broad range of services, from custom solutions to logistics, along with service, support, and marketing—helping VARs and SIs get to market faster with the complete quality inspection systems their customers need.

Collaboration at Work

BlueSkies.AI and Lenovo partnered closely with Intel to pilot the solution at a large pharmaceutical company despite the constraints of the COVID-19 pandemic. On the hardware side, they deployed the Intel® processor-powered Lenovo ThinkEdge SE30 industrial PC, built to withstand harsh environments.

“The appliance is designed to snap onto a conveyor belt and the camera can be chosen based on the product and size of the potential defect,” says Ted Connell, Founder and CEO of BlueSkies.AI. In this case, the computer vision cameras needed to look at full images and detect defects as small as 0.1 mm on the pharmaceutical tablet line.

The client does not need any AI or IoT skills. BlueSkies.AI developed AInspect, an edge machine vision appliance with an integrated PC, camera, and light that fits on top of conveyors to inspect products. All the client needs to do is show the system 30 to 50 examples of each defect type and a similar number of good examples, and the system trains its own AI models.

“That’s all the data you need to start with, and we can achieve mid- to high-90% accuracy,” says Connell. “We’ve figured out how to train models with a small amount of data and quickly reach a better level of accuracy than humans can provide. If the client needs higher accuracies, all we need to do is show the system additional samples of each defect type and the AI models improve with each interaction.” As part of its AI-as-a-Service model, BlueSkies.AI oversees initial training to reach the pre-chosen level of accuracy, and provides ongoing support.
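
The workflow Connell describes maps onto a standard few-shot transfer-learning pattern: start from a model pretrained on a large generic dataset, then retrain only its final layer on a handful of labeled images. The Python below is a rough sketch of that general pattern only (not BlueSkies.AI’s actual AInspect implementation); the folder layout and class names are hypothetical.

```python
# Minimal few-shot transfer-learning sketch, assuming a hypothetical folder
# layout with one subfolder per class: data/good, data/chip, data/crack, etc.
# This illustrates the general pattern, not BlueSkies.AI's AInspect internals.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=8, shuffle=True)

# Start from ImageNet weights and retrain only the classifier head;
# freezing the backbone is what makes 30-50 images per class workable.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(20):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```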

Edge Compute Provides Data Security Plus Scalability

Keeping data secure is of utmost importance to manufacturers. In many companies, it’s against protocol for data to leave the factory, making cloud solutions an impossible choice.

“Security is front and center for manufacturers when they’re considering implementation and deployment of a solution at scale,” explains Blake Kerrigan, Senior Director of ThinkEdge Business Group at Lenovo. “This is especially true when you’re blending IT and OT in organizations where there’s been a lot of bifurcation in the past.”

The solution answers that concern with powerful on-premises compute that keeps data at the edge and behind the customer’s firewall. Manufacturers can deploy Lenovo hardware confidently, knowing it meets all their security criteria and quality standards. They also benefit from a trusted supplier program supported in more than 180 markets around the world. “We’re able to leverage that economy of scale and service large global companies,” says Kerrigan.

The Future: Open Source at the Edge

Proprietary machine protocols have a long history in manufacturing, creating interoperability challenges and data bottlenecks that can slow down ROI for connected solutions.

Kerrigan believes an open-source approach is the way forward. “The thing I love about AI, and especially computer vision, is that it’s basically a single language,” he says. “BlueSkies.AI and Intel are leading the way by adopting and embracing this open-source community, which is the way of the future and will lead us to better horizontal strategies from the IT-down perspective.” The Intel® OpenVINO™ toolkit, for example, provides an extensive, standards-based development framework that enables new innovations in deep-learning applications.
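
To give a sense of what building on that framework looks like in practice, here is a minimal sketch of loading and running a model with the OpenVINO runtime. The model file name and input shape are placeholder assumptions; a real pipeline would feed it camera frames.

```python
# Minimal OpenVINO runtime sketch: load a model once, compile it for a
# target device, and run inference. The IR file name is a placeholder.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_detector.xml")            # exported IR model
compiled = core.compile_model(model, device_name="CPU")   # or "GPU", "AUTO"

# Run one inference on a dummy frame shaped like a 224x224 RGB image.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = compiled([frame])[compiled.output(0)]
print("raw scores:", scores)
```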

Latency is another challenge manufacturers face, especially as they adopt high-bandwidth solutions like AI machine vision. Processing data at the edge is one way to minimize latency—and alleviate security concerns—as AI and connected things become ubiquitous.

“There’s been a massive consolidation of workloads in the cloud and movement of enterprise applications to the cloud,” Kerrigan says, “and now we’re talking about moving them back to the edge for efficiency and speed.”

Ultimately, though, the future of connected manufacturing will take a holistic view of computing. “It will be about managing data across the entire plane, from edge to cloud,” says Connell. “AI is going everywhere. To support those applications, we’ll need a homogeneous environment and network to minimize latency and maximize security between the distributed compute centers.”

With robust edge compute and data security in place, imagine what AI-as-a-Service could do for your business.

 


About BlueStar

BlueStar is the leading global distributor of solutions-based Digital Identification, Mobility, Point-of-Sale, RFID, IoT, AI, AR, M2M, Digital Signage, Networking, Blockchain, and Security technology solutions. BlueStar works exclusively with Value-Added Resellers (VARs) to provide complete solutions, custom configuration offerings, business development, and marketing support. The company brings unequaled expertise to the market, offers award-winning technical support, and is an authorized service center for a growing number of manufacturers. BlueStar is the exclusive distributor for the In-a-Box® Solutions Series, delivering hardware, software, and critical accessories all in one bundle with technology solutions across all verticals, as well as BlueStar’s Hybrid SaaS finance program to provide OPEX/subscription services for hardware, software, and service bundles. For more information, please contact BlueStar at 1-800-354-9776 or visit www.bluestarinc.com.

Accelerate Autonomous Vehicle Technology Development

Autonomous vehicle technology is poised to revolutionize transportation and logistics.

In the coming years, autonomous vehicles will find multiple use cases. Self-driving taxis will navigate complex urban environments, alleviating congestion in cities and freeing human drivers to work or simply relax during their commutes. Autonomous trucks will facilitate safer, more efficient long-distance shipping. Autonomous shuttle buses will enable mobility and improve accessibility in our communities.

Systems architects and solutions developers are understandably eager to take part in this wave of opportunity. And thanks to the emergence of edge AI computing platforms built to support autonomous vehicle technology, that will be increasingly feasible.

“General-purpose industrial computers are not optimal for initial development and proof-of-concept work,” says Eddie Liu, Product Manager at ADLINK, a provider of edge computing solutions for autonomous vehicles. “But edge computing solutions made for use in autonomous vehicles offer the features and performance needed for real-world application. They help solutions developers overcome technical challenges and move seamlessly toward proof-of-service and mass production.”

Autonomous Vehicle Technology Enabled by Flexible Platforms

The technical challenges Liu refers to are significant. Massive amounts of sensor data must be integrated, and complex real-time calculations must be performed at the edge. There are also difficulties in working with industry-specific communications protocols such as the controller area network (CAN) bus—which generic industrial PCs (IPCs) don’t support—and the need for an underlying hardware platform that can withstand the rigors of driving.
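
To make the CAN point concrete, here is a minimal sketch of reading frames from a CAN interface with the open-source python-can library over Linux SocketCAN. The channel name, arbitration ID, and byte layout are all assumptions; real decodings are ECU-specific.

```python
# Minimal CAN-reading sketch using python-can on Linux SocketCAN.
# Channel name, arbitration ID, and payload layout are hypothetical.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")

WHEEL_SPEED_ID = 0x123  # placeholder arbitration ID for a wheel-speed frame

for msg in bus:  # iterating blocks until the next frame arrives
    if msg.arbitration_id == WHEEL_SPEED_ID:
        # Pretend the first two bytes are big-endian speed in 0.01 km/h units.
        speed = int.from_bytes(msg.data[:2], "big") * 0.01
        print(f"wheel speed: {speed:.2f} km/h")
```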

There are serious nontechnical challenges as well. The public is wary of allowing autonomous vehicles on their roads, and skeptical of the safety of self-driving cars and trucks. The social pressure of public opinion will drive stringent safety standards for autonomous vehicles, requiring would-be manufacturers to accept a strict regulatory environment.

Purpose-built platforms solve many of these challenges—and their flexibility provides a clear path from initial concept to proof-of-service. ADLINK solutions offer several configurations that can be used at various stages of product development.

“When developers are still fine-tuning their algorithms and aren’t sure exactly what capabilities they will need, they’ll usually want to stack together several vehicle computers to create a quick and flexible proof-of-concept,” says Liu. “Later on, they’ll typically move to a more compact and powerful integrated system.”

Vehicle computers connect to the on-board sensors and use #AI to process the sensor #data and navigate through complex environments in real time. @ADLINK_IoT via @insightdottech

Despite the different configuration options, the problems that ADLINK platforms solve are the same at any stage of development. Vehicle computers connect to the on-board sensors—LiDAR, cameras, GPS, and inertial measurement sensors like accelerometers and gyroscopes—and use AI to process the sensor data and navigate through complex environments in real time.
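
Structurally, that per-frame work reduces to a read-fuse-act loop. The sketch below shows the shape of such a loop under stated assumptions; production stacks are far more elaborate (often built on middleware such as ROS 2), and every sensor reading here is a stub.

```python
# Illustrative perception-loop skeleton: gather time-stamped readings, fuse
# them into one vehicle state, repeat. All readings below are stubs.
import time
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float        # position east (m)
    y: float        # position north (m)
    heading: float  # radians
    speed: float    # m/s

def read_gps():     # stub: would query the GPS receiver
    return {"x": 0.0, "y": 0.0}

def read_imu():     # stub: would query gyroscope/accelerometer
    return {"yaw_rate": 0.0, "accel": 0.0}

def fuse(state, gps, imu, dt):
    # Trivial fusion: integrate IMU between fixes, snap position to GPS.
    state.heading += imu["yaw_rate"] * dt
    state.speed += imu["accel"] * dt
    state.x, state.y = gps["x"], gps["y"]
    return state

state = VehicleState(0.0, 0.0, 0.0, 0.0)
last = time.monotonic()
for _ in range(100):            # one iteration per perception cycle
    now = time.monotonic()
    state = fuse(state, read_gps(), read_imu(), now - last)
    last = now
    time.sleep(0.05)            # ~20 Hz loop rate for the sketch
```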

ADLINK offers important safety features—crucial for overcoming social and regulatory obstacles to the mass adoption of autonomous vehicles:

  • Dedicated safety microcontroller unit (MCU) that monitors the health of the system and, in case of a failure, pulls the vehicle over to a safe stopping place.
  • Redundant power sources for critical system elements such as the perception electronic control unit (ECU), power management integrated circuit (PMIC), safety MCU, and CAN.
  • Ruggedized design that includes anti-shock and vibration features for smooth and reliable operation.
  • Intel® Trusted Platform Module (TPM) to securely store critical data such as encryption keys and credentials to guard against cybersecurity threats.

This combination of high-performance computing capabilities, rugged design, and built-in safety features offers multiple benefits to solutions providers. And that should help encourage partnerships among hardware specialists like ADLINK, solutions developers, and systems architects seeking to enter the autonomous-vehicle space.

Liu credits ADLINK’s technology partnership with Intel as a significant factor in helping the company bring its solution to market. “Intel provides very high-performance CPUs, reference designs, and extensive support, which enable ADLINK and our customers to rapidly develop and deploy autonomous driving solutions.”

From Concept to Proof-of-Service in Japan

ADLINK’s experience with a customer in Japan provides an excellent example of how computing platforms made for self-driving vehicles can shorten development time and speed deployment. The customer needed to demonstrate the feasibility of a line of autonomous shuttle buses. Once validated, the vehicles would be mass-deployed—but there were safety concerns and technical hurdles to overcome first.

ADLINK worked with the customer to design a proof-of-service version for testing. They used ADLINK’s Intel-based computing platform, the Autonomous Vehicle Solution, to perform the complex real-time computational work needed to process sensor data and make navigation decisions. At the customer’s request, they also implemented multiple redundant systems to ensure the safety of the autonomous vehicle.

The proof-of-service trial was a noteworthy success. ADLINK’s customer was so pleased with the results that they have decided to move to full deployment, with plans to roll out several hundred shuttle buses in 2024.

Toward an Autonomous Future

Collaboration between computing hardware experts and solutions developers will be the hallmark of the coming autonomous-vehicle boom. These synergistic partnerships will deliver important efficiency, safety, and accessibility benefits to the world—and will likely generate significant economic growth as well.

In addition, technology developed for logistics and transportation will find use cases in other verticals. “The technology behind autonomous driving solves fundamental problems: mapping, localization, sensing, perception, prediction, planning and control, and so on,” says Liu. “And that means it can be adapted to many different scenarios and use cases.”

The possibilities here are exciting. Autonomous mining equipment will help transport raw materials and facilitate operations in dangerous terrain without putting people at risk. Maritime implementations mean freighters can navigate busy ports and deliver goods on their own. And AI-enabled agricultural machines will be able to plant, fertilize, and harvest crops autonomously.

All in all, the future looks bright for autonomous-vehicle technology. “It’s incredibly promising, because this really has the potential to make transportation and other sectors safer, more productive, and more efficient,” says Liu. “This technology is developing rapidly. We’re going to see more and more autonomous vehicles on the road in the years ahead.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Unlocking the Full Potential of Live Video Over Cellular

Demands for live video streaming over cellular networks—from 3G, 4G, and 5G to multi-access edge computing (MEC)—are tremendous. There’s a fast-growing need for high-quality, real-time video streaming for use cases in almost every vertical, including smart cities, retail, utilities, event venues, and more.

But there are several obstacles to achieving reliable, robust, and resilient video over cellular. “Cellular by itself is inherently constrained,” says Kunal Shukla, Senior Vice President of Technology at Digital Barriers, an AI-based technology and solution provider. “There are challenges in terms of bandwidth, congestion, and packet loss.”

Through a combination of video compression and AI technologies, Digital Barriers empowers organizations with the full potential of live video over cellular. “We want to ensure a robust, reliable video stream from point A to point B without sacrificing quality,” Shukla says, “and do it at a disruptive cost.” The company’s technology can compress video by up to 90%, providing tremendous bandwidth savings and reducing the overall total cost of ownership for customers.
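
The arithmetic behind that bandwidth claim is straightforward, as the short calculation below shows. The camera count and per-camera bitrate are illustrative assumptions, not Digital Barriers figures.

```python
# Back-of-the-envelope bandwidth savings from "up to 90%" compression.
# Camera count and bitrate are illustrative assumptions.
cameras = 50
raw_mbps = 4.0           # assumed 1080p stream, Mbps per camera
compression = 0.90       # fraction of bits removed

raw_total = cameras * raw_mbps
compressed_total = raw_total * (1 - compression)
print(f"uncompressed: {raw_total:.0f} Mbps, compressed: {compressed_total:.0f} Mbps")
# -> uncompressed: 200 Mbps, compressed: 20 Mbps

# Per camera per 30-day month, gigabytes actually moved after compression:
gb_month = raw_mbps * (1 - compression) / 8 * 86400 * 30 / 1000
print(f"~{gb_month:.0f} GB per camera per month")   # -> ~130 GB
```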

A timely example is the growing need for reliable connectivity at entertainment, sporting, and other large event venues where thousands of people come together. Live video streaming and AI-powered data analytics are essential for both the enjoyment and safety of fans, athletes, performers, and staff.

The company’s EdgeVis platform provides venue facilities staff with a behind-the-scenes security and operations backbone—enabling use cases such as crowd management, site security, and more (Video 1).

 

Video 1. Real-time video streaming is the foundation of safe and enjoyable events. (Source: Digital Barriers)

Streaming Video from Remote Roadway Cameras

One example of the Digital Barriers platform in action comes from the U.S. Department of Transportation (DoT), which needed to stream video from roadway cameras in remote locations where fiber connectivity is not feasible. In one locale, the DoT wanted to bring in video-over-cellular technology that could transmit via 4G LTE back to a network operating center, allowing it to monitor all the live feeds without interruption. “Technology like ours brings reliability, robustness, and resiliency to use cases such as this one,” says Shukla. “It doesn’t matter what your radio conditions or cellular environment is—you will get a stream from your cameras to a network operating center without losing quality.”

EdgeVis software allows the department to run AI analytics on top of the video encoding and streaming, applying logic such as object, people, and wrong-way detection. For example, if a camera senses a stalled car or a collision, the system will relay an alert to the control center. “Remote monitoring and traffic surveillance are important use cases in transportation, and our solution can help reduce operational costs and save capital costs in terms of bandwidth and data,” Shukla explains. “Using the solution, the transportation department has seen a savings of 70-80% in data costs along with improved reliability, reducing the need for costly truck rolls.”
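
The detect-and-relay pattern Shukla describes can be sketched generically. The snippet below is not the EdgeVis analytics engine: it is plain OpenCV with a deliberately crude motion heuristic and a placeholder alert transport, shown only to illustrate the shape of the loop.

```python
# Generic detect-and-alert loop sketch (not EdgeVis). Sustained near-zero
# motion on a normally busy road is used as a crude proxy for a blockage;
# real analytics track individual objects instead.
import cv2

cap = cv2.VideoCapture("roadway_feed.mp4")      # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2(history=500)

def send_alert(message):
    print("ALERT ->", message)                  # real systems would publish
                                                # to the operations center
still_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    motion = cv2.countNonZero(mask) / mask.size
    still_frames = still_frames + 1 if motion < 0.001 else 0
    if still_frames > 300:                      # ~10 s of stillness at 30 fps
        send_alert("possible stalled vehicle or blocked roadway")
        still_frames = 0
cap.release()
```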

“#Remote monitoring and traffic #surveillance are important use cases in transportation, and our solution can help reduce operational costs and save capital costs in terms of bandwidth and #data.” – Kunal Shukla, @DigitalBarriers via @insightdottech

Construction Site Monitoring and Surveillance

The solution’s unique compression and AI capabilities, along with the flexibility to connect via satellite, enable other use cases such as construction site safety and security. Case in point: a UK-based construction company wanted to use video over cellular for remote monitoring and surveillance of sites, for worker safety and nighttime security. The operations team brought in cameras connected to Digital Barriers’ IoVT edge computing platform, which performs real-time analytics and streams compressed video back to the video management system.

“As a result, they’re seeing improvements in operations, lower insurance and other costs, reduction in theft and losses, and better worker safety,” says Shukla.

Building a Secure Ecosystem with Multi-Industry Appeal

Since its inception a decade ago, Digital Barriers has worked with the U.S. Department of Defense, the UK Ministry of Defence, and other demanding, high-security government organizations. “It was essential for our platform to keep security, privacy, and confidentiality at the center as it was built,” says Shukla. Yet the rapid development of AI brings new challenges. Shukla explains, “There are several ways to approach security for video analytics applications, including encryption, access control, and anonymization tools for facial recognition. Our solution uses all three to ensure security at all layers.”

The solution’s edge-processing equipment also acts as a firewall, preventing less secure hardware, such as cameras, from relaying data to bad actors. “By default, our hardware will only transmit information in one direction,” Shukla says. “That’s a big advantage, because it means we can control rogue devices.”

Digital Barriers relies on valuable partnerships with systems integrators for implementation. “We work with the major SIs, but also with niche players who are specific to verticals such as oil and gas or federal spaces, where only a few people are certified,” says Shukla. “When we bring in the best-of-class ecosystem companies, we can accelerate the time to value.”

The company’s partnership with Intel is instrumental in its evolution outside the defense market. “We’re transitioning from a custom solution to an Intel-based platform [that] can go into manufacturing, retail, healthcare, smart cities, and more,” Shukla says. “It’s opened up a lot more verticals we can enter at a competitive cost point, bringing technologies that drive business outcomes.”

Video as a sensor will continue to drive the growth of video over cellular, and together with edge analytics and AI it will play a key role in digital transformation across industries. “Video over cellular is going to change our individual lives and how we live, work, and operate as a society,” Shukla says. “It’s already doing that, and I see it becoming more important as the future unfolds.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

This article was originally published on September 19, 2023.