The Role of Digital Twins in Manufacturing Operations

Today, companies are on a never-ending quest to cut costs and shrink production time. To achieve those goals, they increasingly turn to digital twins to pinpoint process failures before any product goes into production.

Though it has been available for years, this virtual-model technology increasingly helps organizations test performance and efficacy by transferring live data from a real-world system to its digital replica. The result: instant visual feedback that elevates risk assessment and decision-making.

In this podcast, we examine the central role that digital-twin technology plays in machine testing, simulation, monitoring, staff training, and maintenance. In addition, we detail how factories use digital twins to sift through mountains of data, pinpoint valuable information, and extract maximum value.

Listen Here

[podcast player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: CCS Insight and Intel®

Our guests this episode are Martin Garner, COO and Head of IoT Research for CCS Insight; and Ricky Watts, Industrial Solutions Director at Intel®.

Martin’s main areas of interest are mobile phone usage, internet players and services, connected homes, and IoT. He joined CCS Insight in 2009 and works on the commercial and industrial side of IoT.

Ricky joined Intel about three years ago and since then has focused on ensuring industrial edge solutions are safe, secure, and reliable. Prior to joining Intel, he was at Wind River for nine years and was responsible for the company’s industrial products and solutions.

Podcast Topics

Martin and Ricky answer our questions about:

  • (2:21) The definition of digital twins
  • (5:19) What digital twins mean to the manufacturing industry
  • (9:06) The challenges of new digital technologies
  • (12:51) How to successfully implement digital twins within the factory
  • (18:12) What skills manufacturers need to be successful
  • (21:47) Tools and technologies for digital-twin adoption
  • (25:16) The ecosystem of partners making digital twins possible

Related Content

To learn more about digital twins in manufacturing, read CCS Insight’s white paper on the topic. For the latest innovations from CCS Insight and Intel, follow them on Twitter at @ccsinsight and @Inteliot, and on LinkedIn at CCS-Insight and Intel-Internet-of-Things.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech, and today we’re talking about bringing digital twins into the factory with Martin Garner from CCS Insight and Ricky Watts from Intel®. But before we jump into the conversation, let’s get to know our guests a bit more. Martin, welcome to the show. What can you tell us about CCS Insight and your role there?

Martin Garner: Thank you, Christina. Well, so I work at CCS Insight. We’re a medium-sized and quite fast-growing analyst firm focusing on technology markets. I lead the analyst research. We do industrial IoT, and I did a big piece of research on digital twins a few months ago, which is obviously part of what we’re talking about. And that is already available to Intel’s partners through insight.tech. So, at CCS Insight, I’m also COO.

Christina Cardoza: And, Ricky, thanks for joining us today. Please tell us more about yourself and Intel.

Ricky Watts: Thanks for the introduction and, yeah, Ricky Watts. I’m at Intel, in our Federal and Industrial business unit—FIS, as we call it here. I look after the industrial solutions. So, think of me as looking at Intel technologies and how we apply them to the industrial landscape, the industrial market segment. Probably everybody knows Intel makes chips for platforms for the industrial landscape. So, I’m looking at how we take those chips and work with them as we look at our partners bringing through solutions, including digital twins, which we’re going to talk about today as well. So, it’s really about holistically matching Intel technologies to those marketplaces.

Christina Cardoza: Great to have you both today. Like I mentioned in my intro, we’re talking about digital twins and manufacturing, and digital twin is not necessarily a new concept, but it is something that’s gaining a lot of interest recently. So, Martin, I’m going to start with you for this question. Since it’s not a new concept, but I do think that there is a little bit of confusion on what digital twins are and there may be many definitions out there, so can you explain to us what a digital twin is, especially in the context of a manufacturing factory?

Martin Garner: Oh, for sure. And I completely agree. There are lots of definitions, and I think maybe Ricky will have a slightly different variant on mine, but I like the view from the Digital Twin Consortium, the industry body around these things, which is that a digital twin is a virtual model of things and machines that exist in the real world, but also of processes that are in use in factories.

But, as well as that, there needs to be some sort of synchronization between the real thing and the virtual model. They need to keep each other up to date. Now, that could be real time or it could be much slower than that. And of course you need to know how frequently, and what’s the quality of the data that’s being synced between them. Now, that all sounds quite simple, but actually there are lots of layers going on within this. So, there’s identity, there’s a data feed, there’s permissions, there’s models of the data and of the machine. There’s roles and policies and analytics and AI. So there’s quite a lot going on inside that.
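To make those layers a little more concrete, here is a minimal sketch of the synchronization idea in Python. Everything in it—the asset ID, the telemetry fields, the staleness policy—is an illustrative assumption, not a reference design from CCS Insight or Intel:

```python
import time
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Toy twin: an identity, a data feed, and a sync policy."""
    asset_id: str                    # identity of the physical machine
    max_sync_age_s: float = 5.0      # how fresh the mirrored state must be
    state: dict = field(default_factory=dict)
    last_synced: float = 0.0

    def ingest(self, telemetry: dict) -> None:
        """Apply one telemetry reading from the real machine to the model."""
        self.state.update(telemetry)
        self.last_synced = time.time()

    def is_stale(self) -> bool:
        """True when the twin has fallen out of sync with its asset."""
        return time.time() - self.last_synced > self.max_sync_age_s

twin = DigitalTwin(asset_id="press-07")
twin.ingest({"spindle_rpm": 1450, "bearing_temp_c": 61.2})
```

A production twin would layer the identity, permissions, roles, and analytics Garner describes on top of a core loop like this one.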

Now, for manufacturing what this means is it’s very easy to think of a sort of James Bond style diagram on the boardroom wall, where you can see the live state of all the operations in one view. Actually you can fit that to the different roles that exist—so the CEO gets one view, but a maintenance engineer gets a completely different view because they need different bits of information. More than that, you can kind of go on and analyze processes and machines and some of the materials being processed. You can look at wear rates. You can do predictive maintenance, process modeling, and optimization. There’s really a lot you can do here.

One of the uses I really, really like here is that you can do software testing and simulation. So, you can do a software update on the virtual machine first, validate it, make sure it doesn’t crash or break, and then download it to the real thing. You can also do staff training on machines without letting them loose, again, on the real thing.
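As a sketch of that validate-before-deploy loop—every method here (clone, apply, run_checks) is hypothetical, standing in for whatever simulation and validation tooling a real twin platform provides:

```python
def promote_update(twin, patch, deploy_to_machine):
    """Run a candidate software update against the twin first; only
    push it to the physical machine if the simulation stays healthy."""
    sim = twin.clone()              # hypothetical copy of the virtual model
    sim.apply(patch)                # hypothetical: install update in simulation
    result = sim.run_checks()       # hypothetical crash/behavior validation
    if not result.passed:
        raise RuntimeError(f"Update rejected by twin: {result.failures}")
    deploy_to_machine(patch)        # only now touch the real asset
```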

So, I think it’s really a kind of interesting set of uses, and we’re now starting to think about the grander, scaled-up vision from here on. Because many factories are the hub of a very large supply chain. So why not have a digital twin of the whole supply chain, the upstream bits and the downstream bits, and get a much more joined-up view of what’s going on? And that might include a machine that you’ve supplied to a customer so that you can see how that’s working. Tesla does that with its cars, and I think if you’re supplying something as a service, you need digital twins so you can see what’s going on properly and support it well. So that’s how I look at digital twins and how they fit into manufacturing.

Christina Cardoza: Great. Excited to dig in a little bit more about those opportunities digital twins are bringing to the factory. Ricky, are you aligned with Martin’s definition of digital twins?

Ricky Watts: I think Martin gave a really good overview of what a digital twin is, and I completely agree with everything that he said. I think the one thing I would add, Christina, is to think about what has brought digital twins about—why digital twins exist and where a digital twin comes into the picture—and really it relates to data. Manufacturers are creating enormous amounts of data from their machines. That data is being taken off and analyzed to understand what’s going on with the machine: What is the data telling me? And I think, as Martin mentioned, a digital twin—a digital representation of the machine—really starts with that data.

So, I would just add to what Martin said: as we start to see more and more data coming out of the factory, what we need to do is be able to intelligently understand that data before we apply it. And I think that’s the real key: as I’m taking data off and making some sort of assumption based upon that data—analyzing that data—how do I analyze it? Well, really, that’s what a digital twin is, to some extent. It’s a way of representing the data as it’s coming out of the machine—making some assessment of it and understanding it before you go and apply an output or an outcome of that data.

So, I look at that as kind of the source of where we’re coming from, and it really relates to AI and all of the things that back up what goes into a digital twin. So, again, I’d just add that little bit to what Martin said in terms of the digital twin, and I think that’s why it’s important. Now, the other thing that Martin mentioned, and I think this is really key, is we often think of digital representations of data at a machine level, or at a very small part. What we’re now starting to see is the evolution of these things coming together. So, as more and more data becomes available and more and more digital twins become available, what we want to say is, well, nothing works in isolation. To some extent, everything works with somebody. We work together. Machines work together to create an outcome.

So, that’s really where you start to see supply chains and things coming in as well. And I think what we’re starting to see is the next evolution that comes even beyond that is really going right through your supply chain to your customer and really understanding: Do I have a digital twin of the person who’s buying from me? Do I have a digital representation of the person who’s investing in me? What is it that they need? I mean, Martin talked about a CEO view, a CTO view, an engineer on the shop floor. Well, I’m an investment person investing in that business. I want to see a digital twin to understand what’s coming out—that represents the value that I’m putting in through my investment. And I think that goes on now.

So, I talked a little bit about the future, but, again, bringing it back to where we’re at right now, digital twins really allow us to analyze and use that data in a way that makes sense for manufacturers to apply in their own particular environment. And I think Martin covered that pretty well.

Christina Cardoza: So, you brought up a couple great points, Ricky, especially around the data—how it’s helping manage and collect all of this data. I know data management has been a big problem in manufacturing with all of the influx coming from these devices and having to manage it all. And so you also mentioned artificial intelligence and machine learning. While all of these are bringing great opportunities to the factory, they’re also bringing a lot of challenges too. So, Ricky, can you talk a little bit more about the challenges that come with these new digital technologies?

Ricky Watts: Well, we could be here for a long time on that one, Christina, but I’ll try and summarize as best as I can. Manufacturers are not generally people that really understand AI and machine learning, so I think that’s one of the challenges—we do have a skills gap to some extent. Manufacturers make things. They make widgets, or they make whatever comes out of their factory—cars, etc. So, when we’re introducing these types of technologies and using this type of data, we’re getting into a completely different skill set. Data scientists come in—that’s a term you’ll hear. So, how do I analyze that data? What are the algorithms that I need to apply to that data to make sense of it? And that kind of goes back to the digital twin, to some extent, which is a representation of that.

So, in a sense, that’s one of the key challenges for manufacturers today, is how do I implement something like that within the workforce that I’ve got today that makes sense? So there’s one challenge right away. The other thing, of course, is this is all relatively new. How do I trust that data? What does trust mean on that data, and how do I apply it? So, there are a number of challenges—security, all these things start to kind of evolve into this as you start to bring in more and more compute at the edge.

And I think those are some of the things that I know we’re certainly working on here at Intel: how do we simplify and make this easier for manufacturers to consume some of these technologies? So, building partnerships and ecosystems to bring in the infrastructure. What we are really looking for, very simplistically, is a plug-and-play type of approach. I want to be able to take something in relatively simply, put it in there, generate the data, collect the data, put it into a digital twin, and get some sort of outcome that’s relatively simple. I’ve said words there, Christina, that—honestly—there’s a lot that goes on behind the scenes to do that.

But I think those are some of the challenges that we have with things like AI and ML as well. And many manufacturers tell me: this all sounds great, but I don’t know how to do it. So it’s really difficult. I mean, imagine if you are a small manufacturer in India making something—you’ve got to fit these things to the type of manufacturing environment as well. We’re not all super-large car manufacturers, as an example; there are small and medium businesses that represent a huge amount of what we would call the industrial footprint, and they do not have the definition and the scale. So, we’ve got to bring scale into those solutions as well. Scaling a digital twin for a car manufacturer versus scaling a digital twin for somebody who makes screws for a car manufacturer is a different thing.

So, again, scale, complexity—simplifying these things out and then making sure that we empower not just manufacturers, but we need to empower an ecosystem to be able to go out and service these things as well. We need people that can go out and install them, work with customers around tuning the models—as you mentioned—machine learning. You’ve got to create the models to do that.

So, there’s a whole ecosystem of opportunity here around this that we’ve got to promote and work with it. So, hope I’ve given you some of the clues. There’s many more challenges as well, but these challenges are being overcome as we work together with a very powerful ecosystem to do that. I’m sure Martin has some similar views to me on some of these things, and the way he sees them—he sees a lot of this stuff going on as well.

Christina Cardoza: Absolutely. And given all the challenges you mentioned, Ricky, and knowing that it is hard to get started—Martin, I’m wondering if you can talk to us about how to successfully implement digital twins in the factory. Not only in the short term, but how do you make sure the short-term benefits also carry into the long term, so you can future-proof your investments today?

Martin Garner: Well, it’s a really good question, Christina. And just building on what Ricky has said, I think it’s quite clear that digital twins—at least the full vision of digital twins—is quite a long-term project across both OT and IT. And it’s a really big part of the digitalization of a factory’s operations, and so on. So, it’s really not a quick fix for anything.

And one of the other challenges is that the current economic climate is not a very happy one at the moment. And so, in that climate, some companies may hesitate to step into a bigger long-term project where they’re perhaps not comfortable, where they don’t really know where they’re going with it, and they may feel that now is not the moment to make that step. But actually there are some very good uses of digital twins that have really good payback times, and I’d call out predictive maintenance and software testing as two of the key ones. I’m sure there are more.

And so, for anyone who’s interested in using digital twins for short-term gain, it can be done. And the trick is to make sure that you get a properly architected system, which is open enough to build up, as Ricky said, the ecosystem, and plug in other machines and expand towards the fuller vision, and then build that out progressively when you are ready to do that. But go for the short-term things first.

Ricky Watts: Yeah, and I think, Christina, just to add to that, you’ve raised a really important point. The world is going through a number of challenges right now. We see them everywhere, depending on where you are in the world—whether it’s the oil and gas industries and the challenges of the energy market, whether it’s geopolitical issues, whether it’s whatever it is—there are many issues that I think we’re all seeing right now. And when I look at digital twins and this word and this concept around what we’re doing, it’s great, but you’ve got to get practical. What can I do with something today that’s going to give me a benefit tomorrow—not next week, not the week after, not next year?

The world can be full of hype, we know, so there’s a lot of potential in everything that we can do. I see it, but we’re technologists. We see this huge potential. These manufacturers, they’re very much focused on how they’re going to survive, probably in a very tough fiscal environment for the next few years. So it really is around getting very tactical.

Predictive maintenance is a great one. Let’s take that and unpack that. If I’ve got a digital model which represents something of my machine, and my machine is making something in the factory—shoes for example, I don’t know, sticking things together—if I’ve got a digital twin and that digital twin is telling me my machine is going to fail, then I can go fix something before it fails. If I do that, I keep my factory operating. I’m producing my goods for longer periods of time, which makes me more competitive.
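In its simplest form, the predictive-maintenance logic Watts describes can be as small as a rolling health score. Here is a toy Python version, where the threshold, the window, and the very idea that high vibration predicts failure are all assumptions for illustration:

```python
def needs_maintenance(vibration_history, window=50, threshold=0.8):
    """Toy failure predictor: average the most recent normalized vibration
    readings streamed into the twin; a sustained high average is treated
    as an early-warning signal (assumed failure model, not a real one)."""
    recent = vibration_history[-window:]
    if not recent:
        return False
    health_score = sum(recent) / len(recent)
    return health_score > threshold  # True => fix it before it fails

# e.g., needs_maintenance([0.2] * 40 + [0.9] * 50) -> True
```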

So, I think as we look at digital twins and the breadth of digital twins, I think that’s great—great vision, great views—and we as industry people will continue to drive that, because we understand what that future represents. But right now, focus on the needs of manufacturers today. They call it the KISS principle: “keep it simple, stupid”—something like that.

So, I think that’s really interesting, which is: let’s make sure that we bring these things in, do it holistically, and focus on something that’s going to add near-term value to a manufacturer. And that has two benefits. One is it solves a problem for today. And the second, going back to what I said earlier, is that it allows manufacturers themselves to start to learn as well and become skilled. But they do it in a simple way. We’re not trying to introduce this huge idea to them that’s very hard for them to consume—they don’t have the skills. Here we say: here’s a very small footprint; it’s doing something in your factory, and you can tool yourself up. It gives you a near-term outcome to address some of your near-term challenges, but also, long term, it allows you to expose your workforce to the use of data in those environments.

So, I think, as Martin said, I think actually this is going to represent some opportunities in the near term that actually could benefit us in the mid- to long-term with the progress toward more expansive use of digital twins.

Christina Cardoza: I love that point you made about introducing this slowly to manufacturers so that they can build up their skills over time. And I know in the beginning you mentioned you have to make sure you have the right skills and the right people to make this possible. I think every industry, especially in manufacturing, is facing the global skills shortage—trying to find people to do this, people with the right skills to do this. So, Ricky, can you expand a little bit on that? What are the skills and the people that manufacturers need for this, and how do you slowly introduce this to them so that they can start getting those skills?

Ricky Watts: Yeah, I’m going to answer that in two ways, Christina. So, you know, we mentioned some of the skills—what we call data-driven skills, data scientist skills. But I think we recognize as an industry that that’s almost an impossible task to some extent. You can’t create data scientists en masse. So, what can I do to effectively turn, say, a process-control engineer working in an oil and gas or chemical plant into a data scientist—without 10 years of training and repurposing, and without creating all of that tooling?

So, we are creating tools and capabilities in the background so that actually what we can do is almost in a way repurpose the skills of that process-control engineer and say, “All you need to do is tell me what’s good and what’s bad in something, okay? Use your skills that you have to tell me what’s going on.” And then we can apply that to the data models themselves and the digital twin models in a very simplistic way.

So, in a sense, I think skills are going to be needed; you’ve got to get more orientated around IT and OT skills from a compute perspective: How do you install compute? How do you look after it and work closely with your IT departments? But let’s not lose the benefit of what those process-control engineers are doing. They know the outcome. They know when something’s wrong in their manufacturing. They have those skills in their brains. So, what we want to do is be able to take that out and turn it into a digital model without having to make them data scientists—so we’re creating tools and capabilities to be able to do that.

So in a sense we need them to retool themselves around obviously modern architecture and modern ways of doing things, but actually at the same time we are going to give them the tools to use the skills that they have today so that they can apply that in a much more simplistic fashion, so that I don’t have to effectively retrain this person for 10 years. I can just say, “Look, go onto this terminal in your manufacturing environment, identify something good, identify something bad, or tell me when you hear this noise, or when you do this something’s going to go wrong with that machine.” We can translate that into compute code that sits inside the digital twins and our models and our AI, that then recognizes that as an issue. And then, of course, once you give the machine the data or clue, it can then go in and start doing it.

So, you’ve got two angles to that. One is obviously keep reskilling people in terms of the compute skills that they’re going to need. But at the same time how do we translate the skills that they have and make that easy to consume for the models? So, there’s two areas that we’re doing that and actually being very effective. You’re going to see a lot of progress, particularly on the second one, where what we really want to do is effectively repurpose the skills that they have, but be able to bring them out and create those digital models that we’re looking for.
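One hedged illustration of that second angle—turning an engineer’s good/bad judgments into a model—using scikit-learn; the features, readings, and labels here are invented for the example, not drawn from any Intel tooling:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature rows a process-control engineer labeled at the terminal:
# [vibration_rms, motor_temp_c, acoustic_level_db] -> "good" or "bad"
X = [
    [0.12, 58.0, 61.0],  # labeled good
    [0.14, 59.5, 62.5],  # labeled good
    [0.45, 81.0, 74.0],  # labeled bad
    [0.41, 78.5, 72.0],  # labeled bad
]
y = ["good", "good", "bad", "bad"]

# Train a small classifier on the operator's labels, then score new readings
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([[0.40, 79.0, 73.0]]))  # -> likely ["bad"]
```

The point is the division of labor: the engineer supplies domain judgment as labels; the tooling turns those labels into a model without asking anyone to become a data scientist.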

Christina Cardoza: So, I’m interested in learning more about the tools and technologies we just talked about that are making this all possible. We introduced a lot of new digital tools in the beginning of this conversation—bringing new opportunities to manufacturers as well as challenges. And I think we as people just have a tendency to jump on the latest and greatest new technologies that come out. But, Martin, I’m wondering, what do you think is really necessary to make this possible? How do you leverage the existing infrastructure and technologies you have, and what really are the new tools and technologies you need to be implementing?

Martin Garner: Well, yeah, and so Ricky has talked about a good number of them already, especially the use of the data, the analytics, and the machine learning heading towards AI. I think there’s a couple I’d just add in to that part of the discussion. So, I know from doing research in this area that one of the bits people find hardest is just getting the data feeds organized and set up in a way so that the data is compatible. And it’s because different sensors and different machines present their data in a whole variety of different ways. Even the same manufacturer presents data in a variety of different ways because they weren’t expected to have to be compatible, and silly things like time stamps become amazingly challenging to organize properly.

And that’s really why it’s better to start small and simple, get the hang of something, get some value from it in one small area, and then progressively build it out from there. The other bit I’d just add in is that factories are complicated things. They have a whole range of different technologies and machines, at all levels of the technology stack—from the connectivity all the way up to the AI. More than that, they have a unique mix of machines, so it’s very hard to do any kind of templating. And what that means is that for a larger system we may well need quite a lot of systems-integration work to really get the value out of it. So there is quite a lot there. I think you can start small and start getting value, but it quickly becomes a bigger project as you look to scale it up.
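The timestamp problem Garner mentions is mundane but real: every sensor reports time differently. A small normalization shim might look like the following sketch, with the input formats assumed for illustration:

```python
from datetime import datetime, timezone

# Assumed examples of formats mixed across sensors on a single line
FORMATS = ["%Y-%m-%dT%H:%M:%S%z", "%d/%m/%Y %H:%M:%S", "%Y%m%d%H%M%S"]

def to_utc(raw: str) -> datetime:
    """Normalize a heterogeneous sensor timestamp to timezone-aware UTC."""
    if raw.isdigit() and len(raw) == 10:      # Unix epoch seconds
        return datetime.fromtimestamp(int(raw), tz=timezone.utc)
    for fmt in FORMATS:
        try:
            dt = datetime.strptime(raw, fmt)
            if dt.tzinfo is None:             # assume naive stamps are UTC
                dt = dt.replace(tzinfo=timezone.utc)
            return dt.astimezone(timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized timestamp: {raw!r}")
```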

Ricky Watts: We’ve also got to look at how we create uniformity around the machines. We’ve had what we would call appliances in manufacturing—vertical machines that do certain things. One of the things that we are doing here is working with the industry to create abstractions. So, effectively what we’re trying to do is create uniformity through standards that exist inside the factory—using universal languages such as OPC UA, a universal language for the machines to talk to each other, so that every machine understands every other machine, to some extent.

It’s a big job. We create a data model of that machine that exists so that we can understand and categorize that in the way that they talk together as well. And that’s really about some of the abstractions, and you’re starting to see the industry move forward—particularly in the process industries—where they want to create this uniformity because they recognize some of those challenges.
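For a sense of what that uniformity buys in practice, here is a minimal read of one machine variable over OPC UA using the open-source asyncua Python library; the endpoint URL and node ID are placeholders, not a real server:

```python
import asyncio
from asyncua import Client  # pip install asyncua

async def read_press_temperature():
    # Placeholder endpoint and node ID for an OPC UA server on the line
    async with Client(url="opc.tcp://192.168.0.10:4840") as client:
        node = client.get_node("ns=2;s=Line1.Press.Temperature")
        value = await node.read_value()
        print(f"Press temperature: {value}")

asyncio.run(read_press_temperature())
```

Because every OPC UA server exposes the same addressing and read semantics, the same few lines work against any compliant machine—which is exactly the horizontal uniformity Watts describes.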

Martin Garner: And the great thing about doing that is it turns it from a sort of stove-piped, vertical, proprietary thing into much more of a horizontal platform approach, which is much, much better for building out scale across manufacturers, supply chains, different sectors, and so on. It’s just an all-around better approach, isn’t it? If we can get there.

Christina Cardoza: Absolutely. And you mentioned OPC UA. We’ll be talking to the OPC Foundation on a later podcast on this topic, getting industrial communications and these devices to talk together. But I know Intel has been a huge leader in making some of this possible—democratizing the tools and the technologies for domain users and ensuring that machines and devices can talk to each other. So, Ricky, can you talk a little bit about the Intel efforts on your side: how the company has been working to make digital twins successful along with some of those partners you just mentioned?

Ricky Watts: Yeah, and thanks, Christina, that’s a good question. The first thing is: what Intel does extremely well—in addition to obviously building those wonderful compute platforms that run these things at the edge of the network—is look at scale and create standards. So, one of the things that Intel is doing, particularly within our industrial business unit—you mentioned OPC UA—is we’ve been part of those foundational efforts, working with industry partners to create coalitions of people to identify: how do you create this standard?

We’ve been doing that in the oil and gas industry around what they call OPAF, the Open Process Automation Forum. So, this is creating that abstraction, that horizontalizing that Martin talked about. And then, what are the things that we need to do within OPC UA, that language? A language is only good if it allows you to communicate in the ways the machine wants to communicate.

One of the areas that’s very important in manufacturing is real time. We call that time-sensitive networking, or TSN. Now, we’ve had legacy protocols that do that, but those legacy protocols really are isolated. A machine becomes isolated to create that type of real-time environment; it can’t interwork with some of the other machines because of the nature of that machine. Well, we’ve been working to extend OPC UA to cover real time, so that you can do real-time communications in a single network environment. You’re not going to lose your real-time capabilities. The language supports that.

So, what has Intel been doing? It’s been looking at the compute platforms, making sure that we’re bringing through the technologies that are going to be needed for those manufacturers. So we’ve been adding things such as TSN and time-coordinated compute. Go back to something Martin talked about: time stamping. We need the compute platforms to be synced on atomic clocks so that data on one platform is linked to data on another platform, and those are completely in sync. So that’s time-coordinated compute—plus functional safety and many other things that we’ve been doing in our platforms to bring through those capabilities.
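Once two platforms share a clock, correlating their data reduces to a time-window join. A hedged sketch with pandas—the streams, column names, and tolerance are invented for illustration:

```python
import pandas as pd

# Two telemetry streams whose clocks are assumed already synchronized
# (e.g., by TSN/precision time sync as described above); values invented.
pressure = pd.DataFrame({
    "ts": pd.to_datetime(["2022-10-03 12:00:00.000", "2022-10-03 12:00:00.100"]),
    "pressure_bar": [4.1, 4.3],
})
temperature = pd.DataFrame({
    "ts": pd.to_datetime(["2022-10-03 12:00:00.004", "2022-10-03 12:00:00.098"]),
    "temp_c": [61.2, 61.9],
})

# Pair each pressure sample with the nearest temperature sample within 10 ms
joined = pd.merge_asof(
    pressure.sort_values("ts"), temperature.sort_values("ts"),
    on="ts", direction="nearest", tolerance=pd.Timedelta("10ms"),
)
print(joined)
```

Without synchronized clocks, a join like this silently pairs the wrong samples—which is why the time-coordinated compute Watts describes matters.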

As we bring them through, we’re enabling the software ecosystem to be able to use those capabilities. So, making sure we validate with Linux operating systems, with Windows operating systems, with virtual machines, with Kubernetes—all these wonderful things that you hear about, which are basically software infrastructure layers that allow us to run the applications and services from companies like Siemens: the PLCs, the soft PLCs, the SCADA systems. So that’s the abstraction—the compute at the bottom with all of the functional and security components, and enabling that ecosystem in the middle with the software vendors so that they can pick up that capability.

And then working with what I would call the application or OEM vendors, so that they can run an application and have the ability—through APIs, through the capabilities, through the calls—to run those applications the way they want, and we can take the data off. So we’ve been building compute platforms, scaling with ISVs, creating what we would call working teams, creating industry forums to create scale, building standards, applying those standards—and then, of course, working very much with the end-user community to make sure that we’re not creating Frankenstein’s monster.

Christina Cardoza: Well, I’m excited to see what else Intel comes out with and how you continue to work with your partners. Unfortunately we are running out of time, and I know there are tons of different areas and topics in this one space that we could touch upon that we just unfortunately don’t have enough time to today. But before we go, I just want to throw it back to each of you for any final key thoughts or takeaways you want to leave our listeners with today. So, Martin, I’ll start with you.

Martin Garner: Yeah, sure. And I think Ricky mentioned already the full vision of digital twins, and we have seen from some people sort of planetary-scale weather and geological systems which help us understand global warming and things like that better. Against that, there is a huge number of smaller companies who really don’t know where to start. And I think we’ve already touched on this—we need to make it easier for them and obviously worthwhile to invest in this. And that means really focusing on the shorter term: how to save money using digital twins in the next quarter, and also how to make them easier to buy and set up. Could it be just a tick box on the equipment order form, could it be as easy as that? And then a little bit of configuration when you get the machine, and it all arrives. The vision is one thing, but we need to pull along the mass market of people who might use them as well. And we can’t just do one or the other. We need to do both.

Ricky Watts: And I’m going to add a few thoughts to what Martin said as well. I think he’s absolutely spot on. My final thoughts are: yeah, there are huge amounts of things that you can do going forward, but focus on the things that you need to do in the near term, and really don’t hesitate to reach out and start your own journey. Keep it small, keep it simple. We are working on those things such as predictive maintenance. So, get into this, start to understand where you can get benefits, but keep it small, keep it contained. And we do have solutions that are available to start you on your journey—but be very much focused on what your problem is today.

Christina Cardoza: Great points, and it’s been a pleasure talking to both of you today. Thanks so much for joining me on the podcast, and thanks to our listeners for tuning in. If you like this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Safeguarding Industry 4.0 with Next-Gen OT Security Solutions

Cybersecurity is a high priority for every business nowadays. But despite improvements in IT security, the operational technology (OT) used to monitor and control industrial processes is often dangerously unprotected. Over the past couple of years, the United States Cybersecurity and Infrastructure Security Agency (CISA) has issued multiple public warnings about cyberattacks that put OT systems at risk—with rising ransomware threats to operational technology assets top of mind most recently.

As manufacturing digital transformation efforts accelerate, the problem will only worsen—making Industry 4.0 technology a tempting target for cybercriminals, hacktivists, and even the militaries and intelligence agencies of nation-states. But next-generation industrial security appliances may offer a solution to the unique challenges of OT security.

IT/OT Convergence: Synergy or Cyber Risk?

Digital transformation initiatives have filled the modern factory with AI and IoT technology: a multitude of smart sensors that collect data from the manufacturing process in real time. The result is that historically “unintelligent” OT environments now generate a wealth of useful data—data that can be shared with IT networks for reporting, analysis, and process optimization.

This merging of IT and OT networks is known as IT/OT convergence, and the business case behind it is clear, according to Calvin Ma, Product Manager at NEXCOM International, a manufacturer of network and Industry 4.0 solutions. “Companies gain greater control over their manufacturing process. And customers can see inside the factory, giving them better insight into progress and quality,” he explains.

But in addition to the benefits, IT/OT convergence brings significant risks. After all, a smart factory is a factory that is connected to the internet—and this exposes OT networks to attacks. That’s a serious problem, since OT is notoriously hard to protect because of factors like legacy equipment that simply can’t run security software, as well as the questionable security practices of OT vendors.


In addition, joining a secure IT network to an OT network introduces problems of its own. “When everything is connected,” says Ma, “cybersecurity events that would have been easily contained on the IT network can now spread to the OT network—and OT networks are relatively fragile.”

But an expanding OT attack surface is an unacceptable risk for manufacturers.

The Challenges of OT Security

One of the surprising things about OT security, given the well-known difficulties, is how similar it is to IT security.

The cyber threats to OT networks, for example, mirror those faced by IT networks: ransomware and viruses, hacking and backdoor software, worms, and botnets. And the basic solution to OT security is like the approach used on the IT side: Monitor network traffic for suspicious data packets, segment networks so that malicious packets can be contained when they are detected, and place critical assets behind extra layers of protection.
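As a toy illustration of that monitor-and-segment idea, here is a packet watcher built on the Python scapy library. The subnet, allowlist, and alerting are assumptions, and real deployments use dedicated appliances rather than a script like this:

```python
from scapy.all import IP, sniff  # pip install scapy; sniffing needs root

OT_PREFIX = "10.10.0."             # hypothetical OT network segment
ALLOWED_IT_HOSTS = {"10.20.0.5"}   # hypothetical permitted IT hosts

def inspect(pkt):
    """Flag traffic entering the OT segment from an unexpected source."""
    if IP not in pkt:
        return
    src, dst = pkt[IP].src, pkt[IP].dst
    if (dst.startswith(OT_PREFIX)
            and not src.startswith(OT_PREFIX)
            and src not in ALLOWED_IT_HOSTS):
        print(f"ALERT: unexpected host {src} -> OT asset {dst}")

sniff(filter="ip", prn=inspect, store=False)
```

Segmentation then contains whatever the monitor flags: a suspicious packet seen at one segment boundary never reaches assets behind the next one.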

Why, then, is OT security so challenging?

A big part of the problem has to do with the technical limitations of OT endpoints. “Many of these systems were not designed with security in mind,” says Ma, adding that legacy OT assets in factories often run on nonstandard or archaic operating systems, making it “impossible to install security software on them.”

Another challenge comes from the business culture of industrial facilities themselves. The KPI that matters most to plant managers is productivity. And downtime, however reasonable the justification, is expensive. Convincing leadership to take a network offline to upgrade security measures—or asking them to implement a solution that will require regular network outages for maintenance in the future—is a tough sell.

But this leaves manufacturers with a difficult choice. Should they accept costly downtime to improve OT security, or roll the dice and risk a total shutdown later on?

Obviously, neither option is a good one. But a new breed of industrial security appliances—rugged, flexible, plug-and-protect firewall devices designed to meet the needs of factories—may offer a way out of this conundrum.

OT Security with Less Downtime Eases Risk Management

NEXCOM’s Hwa Ya Plant implementation is a case in point.

Hwa Ya is NEXCOM’s smart manufacturing demo site—and also a working production facility. As such, it has all the usual physical challenges of factories:

  • a large footprint with many different types of equipment in constant operation
  • a harsh environment with high temperatures
  • cramped, hard-to-access spaces that complicate device maintenance

To secure the OT network at Hwa Ya, NEXCOM used its own ISA 140 industrial security appliance. Multiple units were deployed at key points throughout the facility to establish a micro-segmented OT network. The eSAF cybersecurity software package, developed by OT security specialist TMRTEK, was installed on the devices, allowing them to monitor and inspect OT network traffic in the same way that traditional endpoint security software does on an IT network (Video 1).

Video 1. Industry 4.0 uses solutions like ISA 140 for micro-segmentation and packet inspection to overcome OT security challenges. (Source: NEXCOM)

The result was a well-secured OT network with good visibility. But perhaps just as important, the Hwa Ya deployment demonstrated the business benefits of a modern appliance like the ISA 140 in a factory setting.

ISA 140 is compact and easy to install, so implementation doesn’t entail costly shutdowns or extensive infrastructure upgrades. And once in place, an out-of-band (OOB) remote management feature and bypass functionality allow OT security personnel to maintain the devices without disrupting the network.

Ma credits many of these benefits to NEXCOM’s technology partnership with Intel®: “The Intel Atom® processor that we used has built-in OOB functionality, which let us develop features that would minimize downtime without having to enlarge our circuit design.”

In addition, says Ma, the Intel chip was a good fit for the physical challenges of a factory setting: “The CPU is high performance, very reliable, and rated for extreme temperatures: perfect for industrial control system (ICS) security.”

The Future of Industrial Cybersecurity

As manufacturers shift to an Industry 4.0 model, threats to OT networks are likely to increase. Bad actors are as eager as any enterprise to take advantage of a market opportunity. But modern industrial security appliances will provide an effective and affordable way for businesses to defend themselves.

And in the years ahead, as OT networks grow more complex and diverse, manufacturers will also have access to security equipment purpose-built for distinct, real-world use cases. “We’re going to see a trend toward specialization in OT security,” says Ma, whose company is currently expanding its ISA 100 Series product line with appliances specifically designed for wireless (ISA 141) and switch (ISA 142) networking security in OT.

“Sooner or later, everything in the factory is going to be on a single network. But with advances in industrial security technology, businesses will have the tailored solutions, fitted to the various OT scenarios, they need to make that network truly zero-trust—ensuring a secure future for Industry 4.0,” says Ma.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on October 3, 2022.

Reinventing Smart Stores as a Medium: With VSBLTY

Fact or myth? Brick-and-mortar stores are dead and buried. Given the recent trend toward online shopping, it must be true, right? Wrong. Believe it or not, it is a myth.

Brick-and-mortar stores are alive and well. And they are only improving as they undergo a digital evolution and search for new ways to create “intimate” in-store customer engagements in a world that increasingly embraces the convenience of online shopping.

In this podcast, we take a close-up look at how offline and online are merging—and how this developing omnichannel relationship will impact consumer engagement going forward. Plus, we examine how retailers are successfully sifting through massive amounts of customer data to hyper-target messaging and drive in-store sales.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: VSBLTY

Our guest this episode is Jay Hutton, Co-Founder and CEO of VSBLTY, a retail technology solution provider. At VSBLTY, Jay works with retail customers to realize the store as a medium and help brands drive impressions at the point of sale.

Podcast Topics

Jay answers our questions about:

  • (0:00) Intros
  • (2:56) Evolutions in the physical retail space
  • (4:18) The Store as a Medium movement
  • (7:20) Creating a complete omnichannel experience
  • (9:14) Benefits of Store as a Medium from a customer perspective
  • (11:13) Successfully undergoing a retail transformation
  • (13:23) Ongoing Store as a Medium collaborative efforts
  • (15:57) The IT and technology investments for Store as a Medium
  • (18:12) Store as a Medium customer examples
  • (27:05) Final thoughts

Related Content

To learn more about ongoing retail transformations, read Retail Digital Signage Gets an Upgrade with Computer Vision. For the latest innovations from VSBLTY, follow them on Twitter at @vsbltyco and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech, and today we’re talking about retail stores as a medium. Here to tell us more about what this means is Jay Hutton from VSBLTY. Hi Jay, welcome to the show.

Jay Hutton: Thank you, Christina, it’s my pleasure.

Christina Cardoza: Before we dive into the conversation, I want to learn a little bit more about you. So, what can you tell us about your role, and the company VSBLTY?

Jay Hutton: Well, me personally, I’m a 25-to-30-year, long-suffering tech entrepreneur—a serial startup kind of guy. I’ve worked for large companies, but I feel most comfortable when I’m digging the ditches of small companies. That’s my place and my role in this world. I’m the founder and CEO of VSBLTY, going back to 2015. We scanned the horizon of the tech space at that time and tried to figure out a place where we could provide meaningful growth and progress with a suite of software. And we identified at that time that the digital-signage domain was about to explode. Thankfully, we got that one right. It did—and the traditional players had really atrophied, Christina. They hadn’t evolved their product. And so we felt there was an opportunity to come in with best practices and really capable software that was able to perform in a way that satisfies and addresses a pain.

That’s often what you’re looking for as an entrepreneur. You’re looking for a problem to be solved, a piece of pain that can be addressed. And we then got gifted with the idea of computer vision, bringing to digital signage the ability to measure consumer experiences. So that is the creation of the company. We’ve been in business now for about seven years and still consider ourselves to be in the startup domain, although, honestly, we’re probably more in the emerging—if you could discern the difference, I suppose, startup is the initial phase—emerging as the phase that comes after that, however long that takes. Yeah.

Christina Cardoza: I know you were joking maybe just a little bit when you said you’re long suffering in this space, but I can see how it could be a hard job to take on, especially in the retail industry. We’ve seen so many transformations happening little by little with self-checkouts, but now, over the last couple of years, stores are having to transform even more and compete. Customers have new expectations—they want convenience, and with online shopping they can get that without even leaving their homes. So physical stores are having to rethink: “How do we get people into the stores and make things more convenient for them?” So, to start off the conversation, I’d love to hear your thoughts on the recent trends in the retail space, and how physical stores have been able to compete with online shopping.

Jay Hutton: Physical stores are not dead, nor are they dying. There’s a modification of our consumer behavior, for sure, which is resulting in some amount of commerce being fulfilled through an online presence. But that doesn’t mean the store is dying; it’s evolving. And really, frankly, the pandemic has caused retail to really look at their consumer experiences, the customer journey, and modify it in a way that—I don’t want to say it’s more like online—but it’s more like online; it is delivering to the customer immediate response, immediate engagement with the brand, in a way that the brands value and the consumers value. So there’s this merging of online with offline in a way that is requiring the store to be reinvented—more digitally embracive, more engaged with the consumer, more consumer-centric—which I think is a challenge to a lot of traditional retailers. But I’m delighted to report that they’re really stepping up to the challenge in all ways, I think.

Christina Cardoza: And as part of that evolution—we teased this in our intro—stores are becoming more of a medium, and I know this is a mission of the company: to transform retail stores into stores as a medium. So, what do we mean by that, and what goes into this new transformation?

Jay Hutton: The store has always been a medium for messaging. In the past, that has taken the form of poster boards or stickers on the floor in front of the Tide detergent. We’ve all seen it. And it has been meaningful in the way it has redirected brand spend. Brands spend money to drive impressions at the point of sale, at the moment of truth where you are most likely to be influenced by a message. What’s different in the last two or three years is how all that’s becoming digital. And so we’re talking about stores embracing digital surfaces. Could be a digital cooler, a transparent LED in a cooler, it could be an endcap that’s got a digital screen embedded. It could be shelf strips that are interactive, that drive attention, gaze, and engagement at the point of sale.

These are all ways in which stores are embracing and investing in turning the store into an advertising medium. And what does that mean? This is what we mean when we say it’s an advertising medium. We know the internet is an advertising medium. We know that broadcast TV is an advertising medium. We know print is a less meaningful, but still real, advertising medium. We know that billboards on the side of highways are an advertising medium. We’re now at a point where the store itself is an advertising channel.

So when the big brands like Unilever, Coca-Cola, PepsiCo, etc. are making decisions on which channel they invest in, now the store is a legitimate channel to invest in, because it is where the consumer makes decisions, it is where they can be impacted, and it is where the brand can deliver a brand narrative. These are all really valuable. There’s not a brand on the planet that doesn’t want a more intimate engagement with its consumers, which is exactly what “Store as a Medium” is.

And so we’re relieving the stores of the responsibility of investing in the infrastructure. Instead, it gets invested in by parties that are interested in building up the channel, and then it becomes viable as a media channel. So that brand manager who’s responsible for a significant budget makes a decision about whether a specific campaign gets delivered through print, through out-of-home (outdoor), through the store, through the internet—or maybe all of them. And that’s the big sea change.

Christina Cardoza: I love hearing about all of these physical digital transformations, like you mentioned: the digital shelves or digital signage, digital coolers. But I’m curious, as a lot of retailers have started on their digital transformation journeys, omnichannel has been a big focus for them, and that’s sort of blending their online storefronts with their physical storefronts. So how does “Store as a Medium” fit into that retail omnichannel experience?

Jay Hutton: Well, we all knew that the game would change when we were able to measure audiences. So, as you frequently do in the evolution of technology, you’re kind of waiting for the technology, right? You’re waiting for the technology to keep up or catch up to the demands of the marketplace. And Intel®, among others, has proven leadership in delivering powerful, high-capacity processors at the edge. So now we’ve got the ability to draw inferences—computer vision looking at audiences and deriving meaningful data. How many men, how many women, how many 25-year-olds, how many 35-year-olds? Not privacy data, not data that would make any of us feel creepy, but data that is relevant to a brand.

So we all knew that once we cracked the code on that, it would open up the store as a valuable medium and realistically place it among the channels represented by this phrase, “omnichannel.” It wasn’t before, and now it is. And now we’ve got this opportunity to drive really meaningful insights—what brands would call the data dividend. Not only are they interested in delivering advertising at the point of sale, but they’re interested in lift. They want to sell more stuff, and they’re interested in this unbelievably complex and robust data set that they’ve never had before. And they’ve really evolved. All brands on the planet have evolved to a point where data—knowing more about their customer—allows them to segment, laser focus, and understand their customer engagement much more acutely than ever before.
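The “data dividend” Hutton describes is, at its core, aggregation of anonymous per-detection inferences. A minimal sketch—the detections and age brackets are invented, and no particular computer-vision pipeline is implied:

```python
from collections import Counter

# Hypothetical per-frame inferences from an edge computer-vision model:
# (gender, age_bracket) tuples only—no identities, no images retained.
detections = [
    ("male", "25-34"), ("female", "25-34"), ("female", "35-44"),
    ("male", "25-34"), ("female", "18-24"),
]

audience = Counter(detections)
for (gender, bracket), count in audience.most_common():
    print(f"{gender:6s} {bracket}: {count}")
# A brand sees segment counts and trends, never individuals.
```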

Christina Cardoza: That’s a great point. Being able to get those customer behaviors in real time can allow brands to change messaging on the fly. But I can imagine it can also provide personalized services for the consumer looking at that signage or at that digital shelf or cooler. So, can you talk about some of the benefits that customers get out of this too?

Jay Hutton: Sure. So we talked about brands getting lift, getting more data, and getting consumer engagement. The brands begin to have a direct and meaningful dialogue with the customer. In a world where no consent is secured from the customer, we’ve got a bunch of very focused marketing that can be delivered to the customer as if that customer is a member of a group—a gender group, an age group, whatever.

But in a world where we’re getting consent, maybe we’re aligning a loyalty app with what we’re doing on the digital display. Now we’ve got a customer that’s consenting to get personalized advertising, and that’s meaningful to a customer. That’s what’s in it for the customer. Now it’s not just general broadcast, shotgun advertising; now it’s laser focused. Jay likes Coca-Cola more than he likes Pepsi, so I’m going to drive coupons, digital coupons, or I’m going to drive campaign promotion to him specifically because of his brand affiliation and because of his brand interests.

This is really the first time we’ve begun to drive consent-based advertising in a way that consumers value. I don’t care about stuff I don’t ever buy; I’m not influenceable necessarily at the point of sale. But if I get choices on brands that I’ve already made, have a predilection or a preference for, then that’s more meaningful to me as a consumer.

Christina Cardoza: So how does the company VSBLTY and retail stores and brands—how do you actually make this happen? What are the components? You mentioned digital cooler, shelves, signage. What are the technology components that these stores and brands really need to have to make this all possible?

Jay Hutton: That is perhaps the most significantly complex part of the business model—it took two or three years to figure out—and this is why we work with WPP, Intel, and others that are stakeholders in this overall problem and have figured out some of the components.

So let’s talk about what’s meaningful to a retailer. Retailers run their business on a 3% to 4% gross margin—a very, very thin margin. So, what is the probability that a retailer is interested in a multimillion-dollar capital infrastructure for a digital overlay? Almost zero, unless you’re Target, Walmart—some of the big players who really understand media and take a very sophisticated approach to it. Unless you’re in that 1% of 1% of retailers, you’re not interested in doing all the heavy lifting associated with the digital transformation.

So then the hypothesis was: if a group of us called the “Store as a Medium” consortium could get together and solve those problems on behalf of retailers—creating a media infrastructure, capitalizing it, deploying it, managing it, even doing brand-demand creation for the media network—that would satisfy all the requirements. It simplifies the value proposition to a retailer by saying, “You don’t have to do anything. We’ll open up the doors. Let’s have an agreement to do this together over three, four, five years. Let’s do it at scale—5,000 stores, 10,000 stores—and together we’ll create this channel.”

That is the evolution of the value proposition that we’ve created over the last several years. And, of course, it’s based upon a foundation of mistakes and learnings and evolution of thought. And we’re really at a point now where we’ve got a really unique offering amongst the group of companies, and an opportunity to really lead this category—not only with a practical application, but with the thought leadership that this requires at the moment.

Christina Cardoza: So does that “Store as a Medium” collaborative effort exist today beyond VSBLTY?

Jay Hutton: Sure. I don’t know beyond VSBLTY—we’re certainly among the players driving it. And do others understand this? Of course. Boston Consulting Group says this is a $100 billion market by 2025, and it’s under $5 billion today. Even if that statement is hyperbolic, we know it’s exploding. There’s every indication that it’s exploding. So this is no longer a whiteboard exercise. It was once—we’re doing this now.

Our largest deployment with Intel is in Latin America, together with Anheuser-Busch, who are, interestingly, Christina, both a CPG and a bricks-and-mortar retailer in Latin America. So they own physical bricks and mortar. We really couldn’t find a better partner, because they speak both glossaries: they have the vernacular of a retailer, and they have the vernacular of a consumer packaged goods brand. So, together with them, we’re building a network of stores that will reach 50,000 stores by the end of year four. And were we to reach that objective, and I firmly believe we will, it’ll be the largest deployment of a retail-media network on the planet. And I think we’ll represent a leadership position with respect to growing this. And remember, we’re doing this in Latin America, where it’s not modern trade; this is traditional trade. This is a 10-square-meter convenience store on the side of a dirt road in Guadalajara. If we can do it there, it gives us a leg up on doing it in places with a less challenging environment.

So we are leading, and here in the US the consortium has signed on together to deploy a 2,800-location fuel-and-convenience network. We’re also working on traditional c-stores, which, by the way, will probably be among the early adopters of this category, because they don’t have the complexity of the 110,000 SKUs that a large grocery might have. They might have 6,000 SKUs, where it’s just manageable; it’s more bite-size. And so there’s an opportunity to do it there, and we’re delighted by the leadership we’re getting from Intel and others as we drive this idea and mandate.

Christina Cardoza: I’m very familiar with Anheuser-Busch’s beer brands, but I didn’t know they were a physical retailer as well, so I’m very interested. I want to come back to that and hear more about how you worked with them. But first, going back to the retail stores where you have brands making these technology investments and bringing them into the store: are they leveraging any of the retailer’s existing technical infrastructure? Or when they bring in these components, is it brand new?

Jay Hutton: Everyone has the fantasy that existing infrastructure can be leveraged, but generally speaking we discover that is not the case. The Wi-Fi supplied in a Target or a Kohl’s or Walmart usually sucks. And we would have to deploy on top of that in order to get the dedicated access to bandwidth we would need. Now we’re edge, so we don’t have a disaster-recovery problem if the network goes away, but if you’re driving new content—to your point made earlier, adjusting content and creative on the fly—well then you need internet access to do that. And if we’re functioning over an in-store Wi-Fi that’s got consumers on it and the point of sale on it, it’s not workable.

So we had that fantasy that we’d be able to do that and thereby lower the total capital expenditure. But we no longer have that fantasy. Cameras and networks obviously exist for the purposes of loss prevention in retail, but, generally speaking, Christina, they’re up in a 30-meter ceiling or a 25-foot ceiling, and they’re looking down on heads, not directly at faces. And so when we deploy our technology, we generally deploy it with cameras. And, again, we’re not picking up privacy data. We’re only picking up demographic data, which of course is useful to the brand for understanding the overall, macro buying behavior: that’s the yield, that’s the data dividend we spoke about earlier.

So for the most part this is new build, but new build, I should hasten to add, where we’re removing the capital-expense responsibility from the retailers. So if they deliver us a number of stores that is large enough, we’ll go and assemble the capital necessary to make it happen. And I think that’s probably the most important part of building this consortium: having a legitimate group, with everybody playing their part, that has the ability to deliver these kinds of networks at scale.

Christina Cardoza: So let’s go back to the Anheuser-Busch example, or any other customer use cases that you have. What were the challenges that they were facing? Why did they come to VSBLTY? And what was the result of your partnership with them?

Jay Hutton: Well, if you scan the globe, from the American perspective this is difficult to understand, but let’s start with the problem we’re solving. In America, 65% to 75% of overall retail fulfillment (the entire commerce landscape) is fulfilled by big box: Walmart, Target, Kroger. In the rest of the world, with the exception of Western Europe, it is fulfilled by traditional trade—what you might call mom-and-pops. So it’s completely reversed in the rest of the world. We know modern trade has good capacity and very good sophistication as it relates to the deployment of technology. But for mom-and-pops, we knew that if we could solve that problem with the assistance of folks like AB InBev—Anheuser-Busch—we would have a global runway; we’d have green fields extending to a global landscape. And really, that was the challenge.

So what’s the problem? The problem is there’s virtually no technology adoption in mom and pops in traditional trade. There’s not even point of sale in traditional trade, Christina. So there’s very little visibility, if you’ll allow me the pun. There’s very little visibility to what’s happening in traditional trade. So the deployment of camera technology initially satisfies the requirement of doing screen-based advertising—Corona, Heineken, you can imagine. But now I’ve also got a virtual window into the retail, which means that I can layer on other capabilities—planogram compliance, fraud compliance, POS.

So we’re just at the beginning of this technology adoption, which started with a revenue-generating platform called “Store as a Medium.” There are other things we can do, all part of this remote-execution mandate, which is really critical for traditional trade. But we’re excited by the fact that we start with a revenue-generating model. And having AB InBev—Anheuser-Busch—as a side-by-side partner allows us to tell that story with considerably more legitimacy. We’ve got the ability to do this and deliver it. Right now we’re in just over 2,000 stores at about eight months in. So, pretty good deployment so far. We’re going to accelerate it, but we’re happy with where we are at the moment.

Christina Cardoza: Great. And we’ve been talking about beverage stores and grocery stores, but I can also see a use case for this in other retail stores. I’m thinking of a cosmetics store, for example, helping workers. When I go into a store, I want to know more about a product, or to learn which one is going to be best for my particular features. The workers are sometimes caught up, or I don’t always want to go up to a person and ask. So I can see digital signage, or some of these solutions, giving you a lot more information and freeing up employees for other important tasks.

Jay Hutton: If there’s one brand category that can afford the investment in the digital infrastructure, it’s health and beauty. The margins are out of this world. There’s a technical sell, right? Because it’s not just pasta. And there’s a labor problem right now in getting skilled labor to perform that role at the point of sale. So there’s an adoption happening in health and beauty that’s outpacing everything else, because it does have that ROI. And if there’s a place where the brands themselves will underwrite the cost of the digital infrastructure, it’s health and beauty, because there’s an opportunity in a marketplace that has really kind of ridiculous gross margins; where they can invest and the ROI is almost immediate; and there’s an education issue right at the point of sale. You want to educate at the point of sale.

So, look for health and beauty to probably be a brand leader in the category. This doesn’t necessarily run in contrast to a grocery or big-box deployment, because health and beauty can be co-resident; they can do it together. But one of the brand categories leading the way is going to be health and beauty.

Christina Cardoza: I can definitely see that. You’ve mentioned the computer vision models running that make all of this possible, gathering that customer analytics and behavior, and analyzing that data at the edge to make it fast and real time. And I know these are big areas for Intel—and I should mention that the IoT Chat and insight.tech as a whole are sponsored by Intel. So I’m curious how your work with Intel has made “Store as a Medium,” and all these initiatives you’re bringing to customers, possible.

Jay Hutton: Intel has enormous global reach. If we’re having a particularly difficult time reaching the C-suite of a retailer, Intel can get there, because they have a team dedicated to ensuring thought leadership. Of course, at the end of the day, Intel wants to move silicon. But one may be surprised at the level of expertise—subject matter; narrow, specific vertical expertise—that Intel develops, and we lean on them all the time.

And of course the legitimacy they give to us—not only in domestic engagements but internationally as well—helps us enormously. When we can say we’re a side-by-side partner with Intel, and proud to be the 2022 Intel Channel Partner of the Year, it gives VSBLTY a degree of legitimacy that gets us into the conversation. And Intel has a track record of putting their money where their mouth is: when it comes time to really manifestly drive that thought leadership in a trade event or a speaking event or a published document, Intel will always be there with us, assisting us wherever we need that assistance, and we’re enormously gratified to be in that position.

The last thing I should mention (it just occurred to me as I was making my last remarks) is on the technical side: the silicon is evolving, right? Today we’re in a certain family of silicon that drives our solutions, and we can already see the next layer of silicon coming, and we get early access to that. So by the time it’s available in production, we have a product that can run on it, which to us is an enormous advantage from a competitive point of view. We all have competitors, and being able to run a production variant on a chipset that was released, like, last month is a huge advantage for us.

Christina Cardoza: I’m glad you brought up those technology advancements, because it sounds like we may be in the early phases or at the beginning of bringing these technologies and these transformations in the physical store, and technology is only going to get better. So, what do you think we can expect from this space, or what will happen to make “Store as a Medium” truly a reality?

Jay Hutton: So, it’s no longer conjecture, it’s no longer guesswork. We were in the guesswork-and-conjecture category in 2015, 2016, but there’s been enough evidence that this model works that we’re now looking at large-scale deployments. Just look at Amazon and Walmart: between the two of them—and I may get my numbers slightly wrong—that’s $2 billion each in advertising revenue that was not there in the previous year. So, if you’re ever doubting the veracity of this category, just look to that. And others are going to follow, because if you’re not afraid of what Amazon and Walmart are doing and you’re in retail, then you’re just not paying attention, right? So they’re leading the charge with respect to that.

And, as I said earlier, there’s no longer any guesswork about whether or not this category will take off. The challenge now is speed. And you’ve probably heard this a hundred times in your career in technology, but it’s a land grab at the moment. It’s getting retailers contracted, getting them to do the dance with you and commit to you long term. And that’s going to be the difference between the leaders and the also-rans in this category: the speed with which adoption can be secured, deployment can be secured, and revenue can start to happen.

Christina Cardoza: Well this has been a great conversation, Jay. I’m excited to see some of the technology come to my own local stores. But, before we go, are there any key takeaways or final thoughts you want to leave our listeners with today?

Jay Hutton: Well, just strap in, because your retail experience is about to change. It’s going to become more experiential; there’s going to be more to the customer journey for you. And if you decide to opt in to some kind of loyalty program, it will become profoundly more personalized to you. And that experience will extend to your home, where you’ll be able to engage with brands from the comfort of your home, if you wish to. That whole customer journey, that whole engagement modality, begins at bricks and mortar; it cannot begin in an online experience. So, what we’re now able to offer you in an offline world is the thing we only fantasized about offering just three or four years ago. The entire experience will change, and retail is not going anywhere. Bricks and mortar are not going anywhere.

Christina Cardoza: Great final point. And with that, Jay, I just want to thank you for joining us today on the podcast.

Jay Hutton: Thank you, Christina. My pleasure.

Christina Cardoza: And thanks to our listeners for tuning in. If you liked this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

The Key to Successful IoT Projects: Edge Computing

IoT can be groundbreaking technology. By harnessing data from machines and acting on the resulting insights, companies can fine-tune business operations. And with machines no longer a black box, they can issue alerts and warnings when processes go awry.

But the devil’s in the details. IoT might promise radical operational efficiencies, but too often companies don’t realize that they need a robust infrastructure framework for the technology to really do its job. When built on a shaky foundation, IoT projects collapse—or stall.

IoT Challenges: Why Projects Flounder

Failed IoT projects come in all shapes and sizes. Sometimes companies neglect to factor in the cost of data transfer and computing in the cloud over a product’s entire lifespan, according to Rodney Hess, Technical Architect and Development Lead at Beechwoods Software, an embedded services and solutions provider. “By the time they realize that they need to change their service model and find a way to pay for it, they get backlash from their customer base,” he explains.

Being hamstrung by decisions that weren’t fully thought out is not the only challenge enterprises face. Data from machines is valuable currency, since it is an indicator of machine health. But the dizzying number of formats in which data travels also complicates matters, as data that can’t be read and mined for value is just plain useless.

In addition, Hess points out, “we’re in a world where every week we have a shiny new security patch that needs to be applied to systems.” When the lifespan of a project can run up to 20 years or more, the costs of such security firefighting add up quickly. Companies are justifiably terrified of leaving legacy systems and protocols vulnerable to security challenges.

Last, machine learning programs are sometimes a one-trick pony and “start conflicting with evolving requirements and needs,” Hess says. “If solutions can’t be easily updated or changed, suddenly you have hardware that becomes obsolete really fast,” he adds.

As a result of these many challenges, companies opt for the “safe” option and drop or stall IoT projects altogether.

But it needn’t be this way, says Mike Daulerio, Vice President of Marketing and Business Development at Beechwoods. Edge computing is fast emerging as a potent solution to these various IoT-related data challenges.

The Benefits of Edge Computing to IoT Projects

As enterprises grapple with the high costs and latency of transferring IoT-generated data to the cloud, they are giving edge computing a closer look. “There’s a dissonance where companies have a lot of data and want to get it to the cloud, but it’s too expensive. It’s just not feasible,” Hess says. Relying too heavily on the cloud also endangers business continuity, he says. “What do you do when your Internet access goes down? Suddenly business logic stops working, you’re sitting on data and not getting anything out of it. That’s a big problem,” he explains.

Edge computing solves this IoT challenge by bringing the computing closer to the source of data—the edge. Doing so “helps reduce the messaging costs of getting data to the cloud,” Hess says, and in doing so, makes IoT computing scalable. Instead of spending time and money ferrying data back and forth, computing and insights happen closer to the source of action.


While edge computing is not a new concept, advances in microprocessors have improved its utility and accelerated its adoption, Hess says. “Embedded processors have crossed the threshold where they’re now capable of running machine learning algorithms, so you don’t need a room full of servers to crunch these algorithms,” he says.

Another advance is that the machine learning algorithms “have been refining themselves to the point where they’re more effective at getting the answers to the problems we’re looking to solve,” Hess explains. 

Overcoming IoT Project Challenges

Beechwoods offers the edge computing platform EOS, which is based on EdgeX Foundry, an open-source framework that enables interoperability between IoT devices and applications. EOS aims to address several IoT-related challenges that customers face, according to Hess. For one thing, it provides a protocol gateway so different types of data from legacy and modern machines can talk to one another.

The platform also verifies identity through secure APIs so only authorized devices and people can access the data. Enterprises can run different sets of machine learning analytics programs to meet evolving needs.
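To make the EdgeX Foundry connection concrete, here is a minimal sketch of how an application might pull recent device readings from an EdgeX core-data service over its REST API. The host, port, and device name are assumptions for illustration, not details of Beechwoods’ EOS platform.

```python
# Minimal sketch, assuming a local EdgeX Foundry v2 deployment.
# The port, endpoint, and device name below are assumptions for
# illustration; they are not specifics of Beechwoods' EOS platform.
import requests

EDGEX_CORE_DATA = "http://localhost:59880"  # assumed default core-data port
DEVICE_NAME = "legacy-plc-01"               # hypothetical device name

def latest_readings(device: str, limit: int = 10) -> list:
    """Fetch the most recent readings for one device from core-data."""
    url = f"{EDGEX_CORE_DATA}/api/v2/reading/device/name/{device}"
    resp = requests.get(url, params={"limit": limit}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("readings", [])

for reading in latest_readings(DEVICE_NAME):
    print(reading.get("resourceName"), reading.get("value"))
```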

In addition to providing software, Beechwoods provides system integration services so IoT projects can reboot after lurching stops-and-starts.

For example, Beechwoods delivered its EOS edge IoT solution to a startup that was developing smart locker appliances installed in exterior walls of homes and offices. The company needed connectivity components, camera sensors, and other control systems to truly make the product smart.

“We helped them take their idea for a smart locker and turned it into a proof of concept. With EOS as the tech platform, it was a straightforward path from product concept to demonstrable prototype,” Daulerio says.

Beechwoods leverages the Intel® Distribution of OpenVINO Toolkit for its EOS platforms and learns from new developments on the open-standards front. “Intel provides us with some of the best-performing code for video analytics and helps us build the best models for machine learning,” Hess says. “We can achieve the best results on our embedded processors because of the work Intel has done in this area.”

In addition, Hess is grateful that Intel is an active proponent of open standards within EdgeX Foundry, which Beechwoods has folded into its EOS offering. 

A Truly Smart Future

With IoT and edge computing rapidly gaining ground, expect the future to be truly smart, Hess says. “Because of the ability to have a lot of embedded devices all around us running algorithms, we will have an environment that is truly smart and responsive and intuitive, that addresses our needs and concerns right away,” Hess says.

These environments could be a smart home, or a factory floor tuned to occupational safety standards and continually routing guidance to workers about unsafe areas. Dynamic operational changes need dynamic smarts. With the help of IoT and edge computing, that’s where the future is headed, Hess explains.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Moving Machine Vision from Inspection to Prevention

Fifty percent of a modern car’s material volume is plastic. And the vast majority of that—from oil pans to bumpers to dashboards—is fabricated through a process called plastic injection molding.

Just as it sounds, plastic injection molding machinery injects molten plastic into a rigid mold, where it is allowed to set. The setting process can take anywhere from hours to days. Quality checks usually happen at the end of the production line, where inspectors manually deconstruct samples from each batch to look for defects.

“They’re taking two or more parts per shift off the line, destructively testing them, and making a call on whether the parts that were produced that shift were good or bad,” explains Scott Everett, Co-Founder and CEO of machine vision solutions company Eigen Innovations. “It takes basically an entire day just to get through a couple of tests because they’re so labor-intensive and then you only end up with measurements for two out of thousands of products.”

At first glance, this seems like an application where machine vision cameras could make quick work of an outdated practice. But while the concepts behind plastic injection molding are relatively simple, it’s a complex process. For example, injection molds are susceptible to physical variations in raw materials, temperature and humidity changes in the production environment, and slight operational inconsistencies in the manufacturing equipment itself.

The goal isn’t just to identify that a part is defective, but to provide useful quality analytics about the root cause of defects before hours of bad parts are produced. By monitoring every fabricated part, you can start to predict when the process is at risk of producing defective batches. But the number of variables in play makes this difficult for machine vision cameras unless the information they produce can be contextualized and then analyzed in real time using visual AI.

Beyond the Lens of Machine Vision Quality Inspection

Like all applications of visual AI, developing an ML video analysis algorithm starts with capturing data, labeling it, and training a model. On the plus side, there’s no shortage of vision and process data available during the production of complex parts. On the downside, the mountain of data that’s generated can contribute to the problem of identifying what exactly is causing a manufacturing defect in the first place.

Therefore, an ML video analysis solution used for predictive analytics in complex manufacturing environments must normalize variables as much as possible. This means visual AI algorithms need information about the desired product outcome as well as the operating characteristics of manufacturing equipment, which would provide a reference from which to analyze parts for defects and anomalies.

Eigen Innovations’ industrial software platform captures both raw image data from thermal and optical cameras and processes data from PLCs connected to fabrication machines. This data is combined to create traceable, virtual profiles of the part being fabricated.


Then, during the manufacturing process, AI models are generated based on these profiles and used to inspect parts for defects caused by certain conditions. But because the platform is connected to the manufacturing equipment’s control system, visual inferences can be correlated with operating conditions like the speed or temperature of machinery that may be causing the defects in the first place.

“We can correlate the variations we see in the image around quality to the processing conditions that are happening on the machine,” Everett says. “That gives us the predictive capacity to say, ‘Hey, we’re starting to see a trend that is going to lead to warp, so you need to adjust your coolant temperature or the temperature of your material.’”
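As a rough illustration of the correlation Everett describes, the sketch below relates a vision-derived warp score to logged process conditions on a per-part basis. The file name and column names are hypothetical, not Eigen’s actual schema.

```python
# A minimal sketch, assuming a per-part table that pairs a
# vision-derived "warp_score" with logged process conditions.
# File and column names are invented for illustration.
import pandas as pd

parts = pd.read_csv("part_profiles.csv")  # one row per fabricated part

# Pearson correlation between the warp score and each process variable.
process_cols = ["coolant_temp_c", "melt_temp_c", "injection_speed"]
correlations = parts[process_cols].corrwith(parts["warp_score"])
print(correlations.sort_values(ascending=False))

# A strong, rising correlation with coolant temperature would back an
# alert like: "adjust your coolant temperature before warp appears."
```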

Inside the Eye of Predictive Machine Vision

While Eigen’s industrial software platform is an edge-to-cloud solution, it relies heavily on endpoint data, so most of the initial inferencing and analysis occurs in an industrial gateway computing device on the factory floor.

The industrial gateway aggregates image and process data before pushing it to an interactive edge human-machine interface, which issues real-time alerts and lets quality engineers label data and events so algorithms can be optimized over time. The gateway also routes data to the cloud environment for further monitoring, analysis, and model training (Figure 1).

Figure 1. The Eigen Innovations machine vision platform is an edge-to-cloud predictive analytics solution for complex manufacturing environments. (Source: Eigen Innovations)

Eigen’s machine vision software platform integrates these components and ties in industry-standard cameras and PLCs using open APIs. But the key to allowing AI algorithms and their data to flow across all this infrastructure is the Intel® Distribution of OpenVINO toolkit, a software suite that optimizes models created in various development frameworks for execution on a variety of hardware types in edge, fog, or cloud environments.
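The basic OpenVINO pattern is to read a model once and compile it for whatever device is present, so the same code runs across chipsets. Below is a minimal sketch of that flow using the toolkit’s Python API; the model file, device choice, and input shape are placeholders, not Eigen’s production pipeline.

```python
# Minimal sketch of the OpenVINO read-then-compile pattern. The model
# path, device choice, and input shape are placeholders, not details
# of Eigen's production pipeline.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_classifier.xml")           # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")    # or "GPU", "AUTO"

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in camera frame
results = compiled([frame])
scores = results[compiled.output(0)]
print("defect score:", float(scores.ravel()[0]))
```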

“From day one we’ve deployed edge devices using Intel chipsets and that’s where we leverage OpenVINO for performance boosts and flexibility. That’s the workhorse of capturing the data, running the models, and pushing everything up to our cloud platform,” Everett says. “We don’t have to worry about performance anymore because OpenVINO handles the portability of models across chipsets.”

“That gives us the capacity to do really long-range analysis on hundreds of thousands of parts and create models off of those types of trends,” he adds.

The Good, the Bad, and the Scrapped

Eigen Innovations’ machine vision software platform is already paying dividends at major automotive manufacturers and suppliers worldwide, where it’s saving time and money and reducing waste.

Rather than producing batches of injection-molded car parts only to discover later that they don’t meet quality standards, Eigen customers are alerted to anomalies during the fabrication process and can take action to prevent defective parts from being created. And the platform eliminates the time and material scrapped during destructive quality testing.

“Our typical payback per machine can be hundreds of thousands, if not millions, of dollars on really large machines where downtime and the cost of quality stacks up very quickly,” Everett says. “And it’s as much about providing certainty of every good part as it is detecting the bad parts.”

“We’re approaching a world where shipping a part with insufficient data to prove that it’s good is really just as bad as shipping a bad part because of the risk factor,” he adds.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Smart and Sustainable Buildings: With Johnson Controls

Did you know that commercial and industrial buildings contribute to almost half of the world’s carbon footprint? When you think about it like that, you can see why more and more businesses are committing to aggressive sustainability goals. And the best way to achieve net zero or carbon neutrality is by creating smart, connected, and sustainable buildings—especially in today’s hybrid work environment.

For example, if only half your workforce comes to the office at any given time, it doesn’t make economic sense to keep all the lights on all the time. And it’s more than just lights. It’s HVAC systems, air quality, security systems, and power supply that all play a role—from office spaces to bathrooms, cafeterias, and even parking lots.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

In this podcast we look at what a smart building means, technologies that make a building “smart,” and the role buildings play in larger sustainability efforts.

Our Guests: Johnson Controls and Intel®

Our guests this episode are Graeme Jarvis, Vice President of Digital Solutions at Johnson Controls, a smart building solutions provider; and Sunita Shenoy, Senior Director of Technology Products within the Intel® Network and Edge Computing Group.

Graeme has been with Johnson Controls for more than eight years, focused on helping customers transform building spaces with sustainable, safe, and secure experiences.

Sunita has been with Intel for more than 16 years and has deep experience in delivering technology products that help create innovation for an ecosystem of partners in multiple verticals such as mobile, education, enterprise, automotive, and industrial.

Podcast Topics

Graeme and Sunita answer our questions about:

  • (3:19) The challenge of today’s hybrid work environment
  • (7:25) Energy use and sustainability challenges in buildings
  • (9:31) Efforts that lead to buildings becoming more energy efficient
  • (13:42) How technology drives smart and sustainable buildings
  • (16:59) How to leverage existing IT and connect disparate systems
  • (22:24) Smart and sustainable building use cases and examples
  • (24:45) Intel-led sustainability efforts and goals
  • (26:55) The power of partnerships

Related Content

To learn more about smart buildings, read The Future of Smart Buildings? Net Zero. For the latest innovations from Johnson Controls, follow them on Twitter at @johnsoncontrols and LinkedIn.

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about smart and sustainable buildings with Graeme Jarvis from Johnson Controls and Sunita Shenoy from Intel®. But before we jump into the conversation, let’s get to know our guests a little bit. Graeme, I’ll start with you. Welcome to the show.

Graeme Jarvis: Thank you, Christina. And pleased to be with you again, Sunita. My name is Graeme Jarvis and I’m Vice President within Johnson Controls’ global digital solutions business, where we are all about helping our customers realize their smart, sustainable business objectives within their built environment. My role is commercial-leadership focused. And so I engage with our large-enterprise customers globally and key-partner enablement programs, with Intel being a great example.

Christina Cardoza: Great to have you, Graeme. And I should mention before we get started that the IoT Chat and insight.tech as a whole are sponsored by Intel, so it’s always great when we can have somebody from the company represent and contribute to the conversation at hand. So, Sunita, welcome to the show. What can you tell us about yourself and your role at Intel?

Sunita Shenoy: Yeah. Thank you, Christina. And it’s my honor to be on this chat with Graeme and yourself. I’m Sunita Shenoy, Senior Director of Technology Products in Intel’s Network and Edge Computing Group. My organization in particular is responsible for technology products, requirements, and roadmaps as they relate to the industrial sector. Industrial sectors include manufacturing and utilities, and buildings are an infrastructure within that sector, both from an energy-consumption perspective and as housing for critical infrastructure. So my organization is responsible for bringing the silicon products as well as the software technologies required to enable this transformation into smart infrastructure.

Christina Cardoza: Great to have you, Sunita, and I love how you both mentioned this idea of meeting business objectives within the environment, talking about buildings, which is the topic we’re talking about today: smart buildings. Obviously remote work has taken hold in the last couple of years and it’s here to stay, but people have started to return to the office in this hybrid work environment, where not everybody is in the office all the time and not as many people are coming back. That creates a bit of a challenge for businesses trying to figure out how to best utilize the space. And as you mentioned, it has big energy consequences too. Do we need to have all the lights on and all of these systems operating in a building that’s empty at times, half full, and not at capacity? So, Graeme, I’ll start with you. What are the challenges businesses have to think about now in regard to their physical office space as people return to work in this hybrid environment?

Graeme Jarvis: Sure. It’s a great question, Christina, and it’s so relevant right now. COVID actually served as a catalyst for what we’re now going through, which is the new normal. So, what does a hybrid work environment mean? I think there are two key components to it. One is around the people, be they employees, guests, or building owners. And the other is around the built environment itself and how it needs to adapt to the new normal, which is really, as we see it, around sustainability, workplace experience, and then safety and security within the built environment.

So, before COVID-19, I think we’re all familiar with the fact that most of us worked at an office almost every day, and the pandemic proved that we can actually be productive from our home office or on the road, whether our home office is nearby, within the same country, or even abroad. That has been proven. So now the challenge on the employee side for a hybrid work environment is: what would that mean to me? I would like it to be appealing. I’d like it to be easy to go in and out of work. And so how might one do that? It gets into key enabling technologies, touchless technologies, and having a sense of control over that.

We happen to have a solution called OpenBlue Companion, an app that allows employees and guests to do hot desking, book conference rooms, and pretreat those conference rooms based upon the number of people in a particular meeting. There’s cafeteria integration, parking and transportation integration, so that going to the office is actually a pleasant experience. On the building side, the hybrid work environment is really financial: How do I optimize the footprint I have, and what am I going to need moving forward to support my employee team? And that’s where we are right now: companies are trying to rationalize what they have and what they will need.

So, with some of our solutions enabled by compute and AI from Intel, for example, we are able to understand what is in motion today and give an assessment for a client around what they have and the efficiency of those solutions, based upon the outcomes they’re trying to realize. Then they have an objective: they would like to be more productive, reduce expenses, and have a safe, sustainable workplace. So now you’ve got interdependencies around the heating, ventilation, and air conditioning system; the number of people that happen to be in a building, through access-control information; the time the cafeteria should actually start preparing food based upon the number of people in the building. All of this is interconnected now. And so an optimization routine starts to present itself for management: What should my environment look like? How should it look in the future? What we’re seeing today is a template for the building of the future. People are rationalizing and optimizing what they have, taking the lessons learned, and starting to apply them to their “building of the future”—be it stadiums, be it ports and airports, be it traditional office space.

Christina Cardoza: Yeah. You bring up a lot of things to consider when going back to work. And I want to come back to OpenBlue and how you actually make these buildings more energy efficient. But, Sunita, I’m wondering, from an Intel perspective, what are the implications you’re seeing of a hybrid work environment as it relates to the business and energy usage?

Sunita Shenoy: Yeah. So, as Graeme was stating, working from home became the new norm in the last three years. But as all companies ease their workers back to work—be it hybrid work or remote work or on site—they have to make it comfortable for the workers coming in by having frictionless access, right? You don’t walk in and open the doors, because now you need it to be safe from bugs, right? So you make it frictionless. You use advanced technologies like wireless connectivity, and advanced technology like AI, to improve the quality of your workspace—whether it’s your hybrid desk, or how you find your rooms, or, if the building is retail, for example, how you find your way around without being in a crowded environment, right?

So, make it easy, using data and AI and technology such as wireless and mobile, for workers to ease into the workplace, because they’ve gotten comfortable being in their own spaces, right? In fact, in a lot of the stories I’ve heard it’s, okay, my office at home is more comfortable than my office at work. So how do I make the environment at work as comfortable and safe as it is at home? That’s really the implication, and technology can play a big role in implementing these solutions. But deployment is one of the key areas we need to focus on: how do we make it easily deployable, using solutions like Johnson Controls’ with our technologies?

Christina Cardoza: Absolutely. And it occurs to me that there may be even larger implications. Say you have a building with multiple different businesses in it, and it’s not your business that owns the building. I think that brings up the question of who’s in charge of making a building smart or reducing its energy consumption. Is it the building owner, or the multiple businesses within the building? So, Graeme, can you talk a little bit about how buildings can become more energy efficient, and who’s really in control: businesses or building owners?

Graeme Jarvis: I would start off by saying most businesses have an ESG—or environmental, social, and governance—plan or set of objectives. Johnson Controls does; I know Intel does. And these are used as a means to communicate value-based management practices and social engagement to key stakeholders: employees, investors, customers, and potential employees too. At Johnson Controls we’ve adopted a science-based target and a net zero carbon pledge to support a healthy, more sustainable planet over the next two decades. Our efforts align with the UN sustainable development goals, and to date, since 2017, our index year, we’ve reduced our energy intensity by about 5.5% and our greenhouse gas emissions by just over 26%. And we have a plan to get to carbon neutral as part of our 2025 objectives, realizing that that carbon-neutral state will take longer, but that is part of our ESG plan.

So the reason I mention that is once you get into the built environment, somebody owns that building and they’re going to have something to do with a sustainability footprint objective because, one, it’s the right thing to do. But, two, the economics are motivating businesses to act because you can be more efficient, thereby saving money. So how would one do that? We help in that regard because buildings account for about 40% of the planet’s carbon footprint. So if we want to go and start talking about how to solve sustainability challenges, the building, the built environment is top of mind. It’s close to the top in every study.

So, once you’re in, you’ve got certain equipment that’s running: heating, ventilation, and air conditioning systems. You have multiple tenants within that building. They all typically pay a fee for the energy consumption of the space they use, but it’s relatively binary, or it’s a step function based upon historical patterns. But what if you could give them insight into what their real usage could be, based upon seasonality factors, how many people are in the building, when they’re in the building, when the air should be treated because a meeting room is booked, and you give them control?

And some of our solutions through OpenBlue help clients understand what is actually going on in their environment and where the areas are that they can improve. As soon as that data becomes available and there’s a financial consequence or a financial reward, behavior starts to change. And that’s where it comes back to: how do you enable the behavior that you want? Then you get into the hardware, the software, the compute, and the AI that Johnson Controls can help with and Intel can help with. But it really starts with that ESG charge, and the fact that buildings are a large opportunity from a sustainability-improvement standpoint.

Christina Cardoza: When you think about how much energy and carbon emissions buildings give off, like you just mentioned Graeme, about 40% of the carbon emissions, I can see why businesses are setting such aggressive sustainability goals to reduce that. And, Sunita, you mentioned to be able to tackle this problem and make a dent, you really need to deploy the right technology to get the data points from all of those different systems that Graeme was talking about. So, can you talk a little bit more about the technology that goes into these businesses, making a dent towards those efforts?

Sunita Shenoy: Yeah. Yeah. I think Graeme touched upon some of these, right? It’s not just now, because of hybrid work and the pandemic, that we are realizing this; this is a known fact, right? Emissions are 40% to 50%, I don’t know what the actual numbers are, but commercial and industrial buildings contribute a vast majority of the carbon emissions, right? So it is our corporate social responsibility, whether you’re a building owner or a business owner, to reduce that carbon footprint, right? And the technologies that you can use, and that we have used: one, AI is becoming more advanced through the advancement of sensors, right? From how you collect data, to how you bring that data into a compute environment where you apply AI to learn from and analyze it and infer information from it.

So we can automate the whole process. For example, in the past, building managers in multiple buildings—I mean, I’ve interviewed several across the industry, facilities managers and building managers—what they would do is use manual processes where, 8:00 to 5:00, you keep the HVAC running or keep the lights running, regardless of how the building is utilized, right? And that generated X amount of energy consumption in the buildings, right?

But once IoT became a reality over the last seven, eight years or so, we started to put sensors in there; to use daylight savings; we automated the process of using AI to see the utilization of the building. And based on the utilization, you would turn the lights on or off or HVAC on or off or water consumption—whatever it is, right? And that reduced the amount of energy used in the buildings, right?

So, small steps first, right? First, connect the unconnected and assess the data in the buildings—which is a treasure trove of data—and analyze where you can drive energy-consumption optimization. The first place to start is lighting or HVAC. Then you go on to the other consumers, such as the computers that are plugged in, or your water utilities: collect all the data, start analyzing it, and start optimizing where you want to reduce energy consumption. So it’s not just about today and the pandemic and hybrid work. This has always been the process ever since IoT became a reality, and AI and advanced technologies became a reality. It is very feasible. And we at Intel Corporate Services have already accomplished a huge task in reducing our carbon emissions.
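To make that “turn things on or off based on utilization” loop concrete, here is an illustrative sketch of occupancy-driven control. The zone names, thresholds, and controller interface are invented for illustration; a real building-management system exposes far richer controls.

```python
# Illustrative sketch only: occupancy-driven control of lights and
# HVAC per zone. Zone names and thresholds are invented; a real BMS
# integration would replace the print with actuator commands.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    occupancy: int       # from presence sensors
    hvac_on: bool = False
    lights_on: bool = False

def update_zone(zone: Zone, min_occupancy: int = 1) -> Zone:
    """Run HVAC and lighting only where people actually are."""
    active = zone.occupancy >= min_occupancy
    zone.hvac_on = active
    zone.lights_on = active
    return zone

zones = [Zone("floor-3-east", occupancy=12), Zone("floor-3-west", occupancy=0)]
for z in map(update_zone, zones):
    print(z.name, "HVAC:", z.hvac_on, "lights:", z.lights_on)
```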

Christina Cardoza: I can definitely see, with all the different systems and data coming in, the importance of AI to be able to manage and analyze all that data quickly to make business decisions. There’s also a lot of different systems outside of the buildings. You know, there’s the parking lot, parking lot lights, there’s everything inside. There may be a cafeteria. So there’s all these different systems that we want to collect data from. How do we connect all of those different systems that may not have touched each other before? Graeme, do you want to answer that one?

Graeme Jarvis: Sure, I’ll start, Christina, and I’m sure Sunita has some great insight also. You hit upon a great word: “system.” I like to use a swimming pool analogy, where historically the security manager was in a lane, the facility manager was in a lane, and the building manager was in another lane. And products were sold to those owners, if you will, who each had a certain part of the building under their responsibility. The way to look at this problem is really as an integrated system. That’s why, when we talk about smart, connected, sustainable buildings, you’ve got to get the building connected, which is now happening.

And now you’ve got all of this data from these edge devices that are doing their core function—security; heating, ventilation, and air conditioning; the building-management system; smart parking; smart elevators; etc. When you pull all of this together, the benefit is you can start to figure out patterns and optimize around the heartbeat of what that building should be, given what it’s capable of doing with the equipment and systems in place. So this is a journey. This is not something that can be done overnight, but the beginning is to assess what you have. And that’s one end of the spectrum.

And then look at where you would like to be three or four years from now from an ESG perspective. Then you have to build a plan to get from where you are to where you would like to be. That’s most of our customers’ journey today. When we do that, the assess phase is really eye-opening, because the data is objective; it’s no longer subjective, “I think this might”; it’s pretty crystal clear. And then you can use AI and modeling with building twins, such as our OpenBlue Twin, to do “what if” scenarios: If I change this parameter, what might that do to the overall efficiency of the building? So now you can start to harness information that was latent but is now at your fingertips. That’s some of how we help our clients in that journey.
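A digital twin’s “what if” question can be as simple as sweeping one parameter through a model and comparing outcomes. The toy sketch below does exactly that; the energy model is a made-up linear stand-in, not OpenBlue Twin’s actual physics.

```python
# Toy "what if" sweep in the spirit of the digital-twin scenarios
# described above. The energy model is a made-up linear stand-in,
# not OpenBlue Twin's actual physics.
def daily_energy_kwh(setpoint_c: float, occupancy: int) -> float:
    # Invented model: cooling load grows as the setpoint drops.
    return 500 + 35 * max(0.0, 24.0 - setpoint_c) + 0.8 * occupancy

baseline = daily_energy_kwh(setpoint_c=21.0, occupancy=300)
for setpoint in (21.0, 22.0, 23.0):
    delta = daily_energy_kwh(setpoint, 300) - baseline
    print(f"setpoint {setpoint:.0f} C: {delta:+.0f} kWh/day vs baseline")
```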

Sunita Shenoy: Yeah, if I can build on that, Christina, from a technology standpoint: in any given building there’s a disparate number of systems, right? It could be an elevator system, a water system, an HVAC system, a lighting system, how your computers are connected together—all of it, right? And they all come from different solutions, different companies. Our advocacy—and we try this with multiple industries and transformations—is to focus on using open standards. If everybody’s building on open-standard protocols, whether it’s connectivity or networking, then you are working off the same standards. So when you plug and play these different systems, you are able to collaborate across them, however disparate they are: share the data, bring it to a common place, share information on common protocols. Networking is super critical in bringing all these disparate systems together.

Graeme Jarvis: Absolutely right. For example, OpenBlue: part of the name “OpenBlue” is “open.” We are open because no one company can do this alone; hence, we have such a great partnership with Intel. So, open standards: we can push information to third-party systems and ingest information from third-party systems, all to the advantage of the customer, for the applications that give them the outcomes they’re looking to realize. This is actually a critical point in industry. If people listening to this podcast are talking to folks who have a closed architecture or a closed approach, I would just counsel some pause, and suggest thinking more about the open, scalable, partnership-oriented approach, because that’s where things are going. And it’s extensible to firms that have yet to emerge but that we would love to partner with, because they’ll have some novel capability that will advantage our customers.
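One common open-standard path for the data sharing both guests describe is MQTT, a lightweight publish/subscribe protocol widely used in buildings and IoT. The sketch below publishes a single telemetry reading; the broker address, topic, and payload schema are assumptions, not details of OpenBlue’s interfaces.

```python
# Hedged sketch: publishing one building telemetry reading over MQTT
# using the paho-mqtt helper. Broker, topic, and payload schema are
# assumptions, not OpenBlue specifics.
import json
import paho.mqtt.publish as publish

reading = {"zone": "lobby", "metric": "kwh", "value": 4.2}

publish.single(
    "building/energy/lobby",           # hypothetical topic hierarchy
    payload=json.dumps(reading),
    hostname="broker.example.local",   # hypothetical broker address
)
```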

Christina Cardoza: I love the point that you both made about being able to leverage some of the existing technology or systems you have within a building. I think sometimes we get a little bit too quick to jump on the latest and greatest new technologies, or to replace the systems that we do have. So it’s important to know that there are systems out there that can connect some of these disconnected systems that we have, and you don’t have to rip and replace everything. It’s still working. There are ways that you, like you mentioned Graeme, that you can work together. I want to paint the picture a little bit more for our listeners. We’ve been—there’s been a lot of great information, but I’m wondering how this actually looks in practice, especially with OpenBlue. So, Graeme, do you have any customer examples or use cases that you can provide of how OpenBlue helped a building become smarter, connected, and sustainable?

Graeme Jarvis: Sure, I have a few; I’ll share a couple. One is a company called Humber River Hospital, out of Toronto, Canada, with whom we’ve entered into a 30-year agreement to help improve their energy consumption by approximately 20 million kilowatt hours per year. How we’re doing that is by understanding their environment, layering our OpenBlue Solution Suite on top, and leveraging the built environment’s cadence to refine and then optimize over a multiyear engagement. So this is about a 20-year engagement.

The benefit to the client is they have a predictable financial roadmap, and they’ve got leading technology that’s going to help them realize that predictable financial outcome. And we also then help certify that they are indeed attaining those targets from a LEED standpoint and a corporate-stewardship standpoint. So that is one example.

There’s another example with Colorado State University, out of Pueblo. This is around renewable energy supplies for 100% of their energy demand on campus. And it’s a 22-acre solar array that is being completed. And then we’re overlaying our capabilities, hardware and software, and our professional services, including OpenBlue, to help them realize that 100%-green objective.

Christina Cardoza: Thanks for those examples. And I want to go back to a point you made earlier about how not one company can do this alone. Partnerships are essential to meeting our sustainability goals. So, Sunita, Graeme mentioned a couple times the importance of their partnership with Intel. So, I’m wondering what are the sustainability efforts at Intel, and how have you been working with partners like Johnson Controls to meet those goals?

Sunita Shenoy: Yeah, so there is an initiative that Intel calls RISE, which stands for responsible, inclusive, sustainable, and enabling, right? Responsible, meaning that we employ responsible business practices across our global manufacturing operations, as well as in how we partner with our value chain and beyond. Inclusive is about advocacy for diversity and inclusion in the workforce, as well as advocacy for social-equity programs, making sure that, for example, food is equitable in the community. Sustainability, which is the focus of smart buildings, is not just a corporate-social-responsibility matter: our buildings, our operations, and our corporate services have taken a commitment to achieve, by 2040, 100% renewable energy across global operations, as well as net zero greenhouse gas emissions. And from a product standpoint, the goal for the products that Intel brings to the marketplace—our microprocessors, edge devices, silicon, and software—is to increase our product energy efficiency 10x from what it is today, as well as to enable our value chain to employ these energy-efficient processes so that electronic waste doesn’t contribute to greenhouse emissions. So those are some of the things we are doing as a corporation to address sustainability goals.

Christina Cardoza: Great. Thanks for that, Sunita. And you mentioned a couple of Intel technologies in there. Graeme, I’m wondering, you talked about the value of the partnership already with Intel, but what about the Intel technology? What are you leveraging in OpenBlue, and how has that been important to the solution and to businesses?

Graeme Jarvis: First of all, before I get into the technology, I’d be remiss if I didn’t mention the value Intel brings to our relationship. It’s all about the people. Intel has a great employee base and a great culture. They’re a pleasure to work with, from their executive leaders to their field teams. So it starts with the people, and I want to make mention of that because it’s critical. Next would be the depth of expertise they bring to a client’s environment, especially on the IT side. This complements our world at Johnson Controls, because we’re more on the OT side, but the IT and OT worlds are converging because of this connected, sustainable model we’ve been talking about, which is now a business reality.

And so between the two of us we can solve a lot of customer challenges, and deliver outcomes they’re looking to realize, that neither of us could independently. Intel silicon hardware, their compute, their edge and AI capabilities—they really help us bring relevant solutions, either from a product standpoint, because it’s embedded with Intel compute and capability, or because they enable some of the edge capability that we bring to our clients’ environments through OpenBlue. I also want to mention, going back to the IT and OT sides, that Intel has great capability on the IT cyber side. We’ve got great capability on the OT cyber side. When you talk to a client, they’re looking for an end-to-end solution. So that’s another area where we’re better together, and we’re better for our clients together.

Christina Cardoza: I always love that, that saying of “better together.” It is a big theme over here at insight.tech, especially working with partners like Intel. I think you gave our listeners and business owners and building owners a lot to think about as they try to meet the sustainability goals that they have. Unfortunately, we are running out of time, but before we go, I want to throw it back to each of you quickly to give any final takeaways or thoughts you want to leave our listeners with today. So, Sunita, I’ll start with you.

Sunita Shenoy: Yeah, what I want to say is that the barrier to adopting or deploying a smart building is generally not the technology, because the technologies exist, right? The solutions exist. The barrier is the people, and the decision to employ smart building solutions, right? We’ve learned a lot of things along the way over the last several years, since the conception of IoT and now edge computing. So it is very feasible to deploy. I think the mindset of people needs to shift and, as Graeme was saying, the IT and OT worlds need to collaborate by bringing the best practices of both together to solve these deployment challenges. Look at those challenges as opportunities.

Christina Cardoza: Absolutely. And, Graeme, any key thoughts or final takeaways from you?

Graeme Jarvis: Yes. Just one, a macro one. There’s a tremendous opportunity before us as we look to address the sustainability challenges that we discussed on this program. It’s global in nature, and that’s going to require global leadership at all levels to be successful. It is hard to find work that is meaningful because it provides a good economic benefit while doing good for our planet, and I think this call to action around buildings is one of those opportunities. So, if there are any people looking for work, I would encourage them to take a look at the smart, sustainable building sector. It is part of the new frontier, and it requires a lot of different, complementary skill sets. And if any customers are listening to this podcast, I would encourage them to take a look at Intel’s website for the solutions on offer, and at the Johnson Controls website, and we would love to come and help you. So, thank you.

Christina Cardoza: Absolutely. Great final takeaway to leave us with today. And with that, I just want to thank both of you for joining the podcast. It’s been a pleasure talking to you, and thanks to our listeners for tuning in. If you like this episode, please like, subscribe, rate, review—all of the above on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Interactive Made Easy: Touchless Self-Service Kiosks

Self-service kiosks are on the rise in every industry, driven by demand from consumers and businesses alike. It’s easy to understand why. Customers like the convenience and interactivity of self-service kiosks. And businesses appreciate the way they streamline operations and relieve overstretched workforces.

Despite this momentum, one significant barrier to implementation remains: Most self-service kiosks rely on touchscreen technology, and that can be a deal-breaker for several reasons. First, there’s the issue of cost. A fleet of touchscreen kiosks, or a lineup of attractive large-format models, represents a major investment for most businesses.

Maintenance is another concern. Post-pandemic, people are wary about touching something that so many other people have touched. For customers to feel safe, touchscreens must be cleaned frequently, which takes time and effort. In addition, a damaged all-in-one touchscreen kiosk can be costly to repair. A single dead pixel may require the replacement of an entire screen.

These are serious challenges. But advancements in edge AI and 3D computer vision are now enabling touchless self-service kiosks. This technology solves many of the issues of traditional touchscreens, which will drive adoption across multiple industries and may even usher in a new era of human-computer interaction.

Edge AI + 3D Vision = Touchless Self-Service Kiosks

At first, a “touchless” touchscreen might sound like a contradiction in terms. But the basic concept is straightforward. Touchless self-service kiosks use a deep learning-based technology known as skeleton tracking that treats the user’s hand as a mouse pointer.

Benson Lee, Chief Marketing Officer at LIPS Corporation, a manufacturer of touchless technology solutions, explains how this works:

“We create a virtual pane between the user and the display screen. An AI-enabled 3D camera tracks the user’s hand to emulate scrolling, and when their fingertip crosses the pane, it’s interpreted as a mouse click” (Video 1).

Video 1. A touchless self-service kiosk in action. (Source: LIPS)
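
To make that interaction model concrete, here is a minimal sketch of the virtual-pane idea in Python. It assumes a 3D camera SDK that reports the tracked fingertip position in millimeters relative to the display; the names, thresholds, and sample values are illustrative, not the LIPS API.

```python
# Conceptual sketch of the "virtual pane" interaction described above. Assumes
# a 3D camera SDK that reports fingertip position in millimeters relative to
# the display (x, y in the screen plane; z = distance from the screen).
# Names and thresholds are illustrative, not the LIPS API.

import numpy as np

PANE_DISTANCE_MM = 150.0          # virtual pane floats 15 cm in front of the screen
SCREEN_W, SCREEN_H = 1920, 1080   # display resolution in pixels

def to_cursor(x_mm, y_mm, fov_w_mm=600.0, fov_h_mm=340.0):
    """Map a fingertip position (mm) to screen coordinates, like a mouse pointer."""
    px = int(np.clip((x_mm / fov_w_mm + 0.5) * SCREEN_W, 0, SCREEN_W - 1))
    py = int(np.clip((y_mm / fov_h_mm + 0.5) * SCREEN_H, 0, SCREEN_H - 1))
    return px, py

def crossed_pane(prev_z_mm, curr_z_mm):
    """A 'click' fires when the fingertip crosses the pane toward the screen."""
    return prev_z_mm >= PANE_DISTANCE_MM > curr_z_mm

# Two consecutive fingertip samples from the depth camera (fabricated values)
prev_z = 160.0
curr_x, curr_y, curr_z = 12.0, -18.0, 140.0

print(to_cursor(curr_x, curr_y), "click" if crossed_pane(prev_z, curr_z) else "hover")
```

A production system would smooth the skeleton-tracking output and debounce pane crossings, but the core logic, mapping a 3D point to 2D screen coordinates and treating a crossing as a click, stays this simple.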

Of course, any system based on high-resolution 3D imagery will need to process a large amount of data—far too much to send to the cloud if you want real-time interaction. That’s why touchless displays use AI to perform their visual processing on the edge. Lee says LIPS’ technology partnership with Intel® helps make this possible:

“Intel CPUs are powerful enough to handle heavy computational workloads—and are particularly good for computer vision and edge AI applications. In addition, the Intel® oneAPI toolkit simplified the development process, allowing our engineers to write a solution driver that works on many different platforms.”

Touchless #technology makes implementing self-service #kiosks—including ones with large displays—easier and more cost effective. @LIPS_Corp via @insightdottech

Lowering the Barrier to Adoption

Significantly, touchless self-service kiosks powered by the LIPS solution are more modular than their touchscreen-based counterparts. The camera and touchless interface driver are separate from the display screen.

This means that existing touchscreen-based kiosks can easily be retrofitted by plugging a 3D AI camera into a kiosk’s USB port and installing a driver. Perhaps even more importantly, any display screen—even a non-touch display—can be made interactive with the addition of these components.

For businesses and systems integrators alike, these are attractive benefits. Touchless technology makes implementing self-service kiosks, including ones with large displays, easier and more cost effective.

In addition, maintenance is greatly simplified. “You don’t have the same burden of cleaning that you do with touch displays,” says Lee, “and if you’ve retrofitted a non-touch display screen and that screen breaks, you can replace it cost effectively—without having to replace everything else.”

From Healthcare to Hospitality

Case in point: LIPS’ experience during the COVID-19 pandemic. The company was approached by two organizations that relied on touch-based kiosks for daily operations: a quick-service restaurant chain and a local hospital.

Despite the obvious differences, the restaurant chain and the hospital had a number of things in common. They were concerned about the health and safety of the people they served. They couldn’t just stop using self-service kiosks, as these were deeply integrated into their workflows. And their staff members were already stretched thin, making it impossible to spend extra time and effort to sanitize each touchscreen after use.

To address these challenges in the restaurant, LIPS retrofitted the in-store self-ordering kiosks with its 3D camera systems, creating a touchless equivalent that performed the same operational role. At the hospital, LIPS used the same technology to make the patient reception area’s touchscreen queuing system completely touchless.

At both locations, leadership was pleased with the speed of the retrofit, the ease of maintenance, and naturally, the lowered risk of transmission.

The Future of Interactivity

The LIPS restaurant and hospital deployments demonstrate why touchless technology has such a bright future. In the years ahead, expect touchless self-service kiosks to gain traction as more and more businesses wake up to their potential.

That potential is about much more than ease of implementation. Unlike many touchscreens, touchless displays are not based on capacitive sensing, so they don’t require a bare hand to operate. This means they can be used by people who wear prosthetics—a huge step forward for accessibility. It also means that people working with gloves or protective gear can use them, making the technology useful in surgical settings or industrial environments like microfabrication cleanrooms.

The range of possible use cases across many industries holds the potential to bring about a real change in the way human beings benefit from self-service systems. “Touchless technology could be as big as touchscreens were for smartphones and tablets,” says Lee. “It will make our world a better, safer, and more interactive place.”

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech

Intel® Innovation Features 12th Gen Intel® Core™ Processors

As the IoT edge matures, technical requirements mature with it. Pervasive connectivity warrants ubiquitous security. Growing data volumes and shrinking latency tolerances are making edge AI inferencing essential. And the rise of hyperconverged infrastructure means engineers must design solutions that accommodate both the edge and the cloud.

For IoT technologists, each achievement demands another. And to ensure their winning streak continues, Intel® Innovation 2022 returns in person on September 27 and 28.

Created by developers, for developers, Intel® Innovation combines strategic session tracks with hands-on technical labs, plus a partner technology showcase demonstrating how hardware, software, and systems engineers can tackle the next phase of IoT design requirements. Content spans from AI/machine learning to open accelerated computing to security and network/edge compute, promising valuable insights for conference participants regardless of their end use case or where they are in the development lifecycle.

It all kicks off with a live keynote from CEO Pat Gelsinger, who will make announcements including the official reveal of the latest 12th Gen Intel® Core processor (formerly known as Alder Lake PS).

Desktop Performance, Mobile Efficiency for IoT Apps

At Intel® Innovation, attendees will learn how the latest 12th Gen Intel® Core devices build on the processor family’s new hybrid-core microarchitecture, blending the performance and power efficiency of Intel® Core mobile processors with the LGA-socket flexibility of its desktop SKUs. The result is a small-footprint, multi-chip SoC that packs in as many as six Performance cores, eight Efficient cores, and 96 execution units of Intel® Iris® Xe graphics to plow through IoT edge workloads.

The payoff can be observed in up to 6.6x faster edge AI inference and 4x faster graphics performance over prior-generation processors, with configurations that consume as little as 12W. The 12th Gen Intel® Core processors also contain all the I/O needed to support hyperconverged, workload-consolidated embedded edge systems, and they are backed by long-life availability.

Early-access partners are already taking advantage of the benefits that come with these processors. Expect to see 12th Gen Intel® Core processor-based solutions such as the SMS-R680E Mini-ITX board from ASUS (Figure 1), the MBB-1000 ATX motherboard from IBASE Technology, and the X13SAV-LVDS server board from SuperMicro, among other products from Axiomtek, IEI Integration, OnLogic, and more.

Figure 1. The ASUS SMS-R680E Mini-ITX board provides optimal power, performance, and flexibility for IoT use cases. (Source: ASUS)

Edge AI Inferencing in Action

Because boards alone aren’t enough, many exhibitors have built the new 12th Gen Intel® Core processors into power-efficient AI inferencing systems that serve a range of IoT markets and applications:

  • Smart Retail: All-in-one POS systems based on the ASUS SMS-R680E Mini-ITX board can run multimedia and AI inferencing tasks simultaneously. And you can accelerate them even further using software like Microsoft EFLOW and the Intel® Distribution of OpenVINO Toolkit (see the inference sketch after this list).
  • Smart Healthcare: Ultrasound imaging devices can use the improved graphics, DDR5 memory support, and PCIe 4.0 lanes on the IBASE MBB-1000, and double down with features like Intel® DL Boost to conduct AI diagnostics and run smart assistants.
  • Computer Vision: More cores, higher thread counts, and better graphics join four display pipes on these processors to form the makings of advanced CV systems. For example, the SuperMicro X13SAV-LVDS server boards can decode dozens of 4K30 video streams and efficiently conduct object detection or classification using DL Boost.
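
As a rough illustration of the kind of edge inferencing workload these systems consolidate, here is a minimal OpenVINO Runtime sketch in Python. The model file and camera frame are placeholders; any OpenVINO IR detection model with a static NCHW input would follow the same pattern.

```python
# Minimal OpenVINO Runtime inference sketch. The IR model and image files are
# hypothetical placeholders, not artifacts from the products named above.

import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")  # hypothetical IR model file
compiled = core.compile_model(model, "CPU")      # CPU keeps POS hardware simple
output_layer = compiled.output(0)

frame = cv2.imread("checkout.jpg")               # placeholder camera frame
n, c, h, w = compiled.input(0).shape             # assumes a static NCHW input
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

detections = compiled([blob])[output_layer]      # inference runs at the edge
print("raw detections shape:", detections.shape)
```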

Technologies and use cases like these are highlighted not only in the product showcase but also in the event’s multiple technical session tracks.

For instance, the AI/ML track features a session from Intel and Red Hat that demos a deterministic AI appliance that leverages solutions from cnvrg.io, Habana, and the Red Hat OpenShift-based Intel® AI Analytics Toolkit. Meanwhile, in the Network and Edge Compute track, representatives from Intel and Google will discuss “Capturing, Managing, and Analyzing Real Time Data Where It Matters.”

Elsewhere, a panel from Intel, Codeplay, Julia Computing, KTH Royal Institute of Technology, and the University of Illinois at Urbana-Champaign explores “Accelerating Developer Innovation Through an Open Ecosystem” in the Open Accelerated Computing session group.

Go to the session agenda to find tracks that are best for you.

If that weren’t enough, after the sessions you can start putting these IoT skills into practice by heading to the Intel® Innovation Dev Tool Shed, earning Edge AI Certifications, exploring the AI Innovation Zone, and more. With all that at your disposal, you should come away from the show with everything you need to overcome the next set of challenges IoT throws at you.

It is all right there at the San Jose Convention Center on September 27 and 28. Will you keep your momentum at full steam?

Register for the 2022 Intel® Innovation Summit today.

AI in Healthcare Advances Cancer Diagnosis

While studying advanced 3D imaging and AI in healthcare applications at Taiwan’s National Tsing Hua University, researchers hit on an exciting potential application: helping pathologists diagnose cancer tumors with greater speed and precision. They obtained licenses from the university and formed a digital imaging startup, JelloX Biotech Inc., but soon discovered hospitals were far from ready to adopt the technology.

In the age of precision medicine for cancer treatment, the number of well-trained pathologists is growing far too slowly to meet the demand for diagnoses. Most pathologists still examine tissue samples by eye and take manual notes, a painstaking, hours-long process. Few have made the switch to digital 2D or 3D image analysis, in part because it has traditionally required installing costly and complicated graphics equipment.

Computer Vision in Healthcare

Despite their highly trained eyes, doctors don’t always get important details right. Tumor samples are complex—each one contains 10 to 30 parameters that must be analyzed to determine whether the cells are cancerous, how fast they are dividing, and how healthy or unhealthy they look as compared with normal tissue, among other factors.

“Studies asking multiple pathologists to analyze the same tissue sample have found 20% to 30% disagreement among the diagnoses,” says Yen-Yin Lin, Chief Executive Officer at JelloX. “This means that there is a chance that patients might receive incomplete information about their disease status, thus delaying proper treatment.”

Misdiagnosis can be very painful for patients: They might miss a good chance to start the drug best suited to fighting their cancer early, or undergo chemotherapy they may not need.

To improve diagnostic capabilities without breaking the bank, Lin and his colleagues set out to create an edge solution that could quickly uncover and digest far more information than pathologists can see—without the need for installing expensive graphics equipment.

“#AI insights could help doctors improve diagnostic accuracy and develop better #treatments.” – Yen-Yin Lin, JelloX Biotech Inc. via @insightdottech

Using AI 3D Imagery in Pathology

The company found recent advancements in computer vision and AI could be used to help doctors better detect anomalies from medical images with higher accuracy. “It can assist healthcare professionals in diagnosing diseases like cancer, identifying disease progression, and predicting patient outcomes,” says Lin.

As a result, JelloX set out on a three-year development journey to create MetaLite Digital Pathology Edge Solution, which can analyze each tissue sample parameter in one to two minutes, compared with an hour using a standard computer.

To do this, JelloX needed to leverage powerful deep-learning models and annotation tools, which required equally powerful hardware capable of deploying those models at the edge and reducing inferencing time for quick, efficient, and accurate results.

Lin explains they turned to an edge computing device powered by Intel® processors, with custom AI algorithms deployed through the Intel® Distribution of OpenVINO Toolkit. This made the solution light enough to be highly suitable for deployment even on notebook-class machines.

Intel CPUs accelerated training and inferencing significantly, provided an end-to-end deep-learning pipeline that helped JelloX apply its solution to real use cases, and let the company deploy its models across different hardware.

That’s because OpenVINO is designed to first optimize deep-learning models, then deploy them across multiple hardware devices and accelerate inference performance on those platforms, explains Zhuo Wu, a Software Architect at Intel who works closely on OpenVINO.
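
A hedged sketch of that optimize-once, deploy-anywhere flow, using the OpenVINO Runtime Python API: The model file name is hypothetical, and the point is simply that one optimized IR model compiles unchanged for whichever device plugins a given workstation offers.

```python
# Sketch of OpenVINO's optimize-once, deploy-anywhere flow. The IR model file
# is hypothetical; a real one would come from converting a trained model.

from openvino.runtime import Core

core = Core()
model = core.read_model("tissue-analysis.xml")   # hypothetical optimized IR model

# One model object, many targets: the runtime discovers whatever device
# plugins the machine offers (CPU, integrated GPU, etc.); "AUTO" lets it pick.
print("devices found:", core.available_devices)
for device in ["CPU", "AUTO"]:
    compiled = core.compile_model(model, device)
    print(f"compiled for {device}: "
          f"{len(compiled.inputs)} input(s), {len(compiled.outputs)} output(s)")
```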

As a result, JelloX can now help configure most hospital scanners to work with the software, which also allows doctors to add notes as they work (Video 1).

Video 1. JelloX MetaLite Digital Pathology Edge Solution uses AI algorithms and edge processing to rapidly analyze 3D tissue samples in near-real time and allows physicians to annotate results. (Source: JelloX Biotech)

In addition to Intel CPUs, JelloX also leverages the Intel® NUC, based on 11th Gen Intel® Core processors, which enables engineers to scale their solutions easily.

Pathologists can choose to review some parameters in real time and save others for later. Data from the scanner and edge device is sent to hospital servers, where hundreds of parameters can be analyzed with AI in detail overnight, with results ready to view the next morning.

AI models are trained on massive data sets accumulated from many sources. The amount of information they work with is too vast for humans to assimilate, but algorithms can quickly process it and use it to classify tissue samples and make inferences and predictions about the course of the disease.

“The interpretation of immunohistochemistry staining is a time-consuming and expensive process in pathological examinations, requiring significant time from physicians. If auxiliary tools can be utilized to improve efficiency, it can bring about the greatest economic benefits. Some parameters are difficult for doctors to categorize conclusively. When AI does calculations, it gives doctors a scale or digital ruler to use as they judge the images,” says Lin.

AI insights could also help doctors improve diagnostic accuracy and develop better treatments, Lin believes, saying, “If we have good AI-assisted tools, maybe patient survival rates and survival duration will be enhanced.”

AI analysis is also valuable to medical researchers, allowing them to discover new features of cancer cells and better understand how they operate. “Algorithms can dig out more information from images and do the tough analysis, providing more information about morphology and protein biomarker features,” Lin says.

Being able to gain more efficient and accurate results with AI not only helps doctors improve patient care and service but also reduces the time and effort they need to spend on each case—which in turn allows them to take better care of more people, according to Wu.

Currently, researchers at Taipei Veterans General Hospital and MacKay Memorial Hospital in Taiwan are using MetaLite to identify new biomarkers of cancerous tissue and calculate the area of tumors with greater precision. Once the platform receives approval from Taiwanese health authorities, the hospitals may use it as a diagnostic tool.

Pharmaceutical companies may also benefit from AI tissue analysis, using it to identify which patients stand the best chance of benefitting from medications set to undergo clinical trials, especially for those requiring biomarker-guided patient screening.

Expanding AI in Healthcare with Federated Learning

As hospitals expand the use of AI in pathology, data they obtain will be used to train future AI models, increasing accuracy. And through a process known as federated learning, hospitals can now securely share image data with others while confining sensitive patient information to their own servers—a capability once considered an impossible dream. JelloX is developing a new version of its software that enables federation.

“With federated learning, data will accumulate much faster, improving the AI and increasing speed and data uniformity,” Lin says. “Using AI in pathology will drive precision medicine, helping doctors improve diagnosis and treatment, and allowing pharmaceutical companies to develop new drugs much faster.”

In fact, in its immunohistochemistry imaging solution, the company is already leveraging Intel’s open-source framework Open Federated Learning (OpenFL) to enable seamless cross-institutional analysis of images with many of its customers.
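
To illustrate the mechanics, here is a conceptual federated-averaging loop in plain Python. This is a toy, not the OpenFL API: OpenFL supplies the secure aggregation, plan distribution, and communication that this sketch omits, and the local “training” step here is a fabricated stand-in.

```python
# Conceptual federated averaging: each hospital trains on its own images, and
# only model weights travel. Plain NumPy toy, not the OpenFL API; the gradient
# below is a fabricated stand-in for a real local training step.

import numpy as np

rng = np.random.default_rng(42)

def local_update(global_weights, lr=0.01):
    """One round of on-premises training at a single hospital. Patient images
    never leave the hospital; only the updated weights are returned."""
    fake_gradient = rng.normal(size=global_weights.shape)  # placeholder gradient
    return global_weights - lr * fake_gradient

def federated_average(updates):
    """The aggregator averages participants' weights into a new global model."""
    return np.mean(np.stack(updates), axis=0)

weights = np.zeros(8)                                    # shared global model
for _ in range(5):                                       # five federation rounds
    updates = [local_update(weights) for _ in range(3)]  # three hospitals
    weights = federated_average(updates)

print("global weights after federation:", weights)
```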

“AI is becoming more prevalent in the healthcare space due to its immense potential to revolutionize healthcare delivery, improve patient outcomes, and enhance operational efficiency,” Lin adds.

Beyond federated learning, AI is also coming to healthcare in the form of chatbots and virtual assistants—enhancing patient engagement and support. Using natural language processing, conversational AI chatbots can help collect accurate patient information so doctors and nurses can focus better on patient care, according to Wu.

To learn more about developing healthcare AI solutions, check out these notebooks: Quantize a Segmentation Model and Show Live Inference and Part Segmentation of 3D Point Clouds with OpenVINO.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on September 22, 2022

Bringing Intelligent AI into the Physical World

The ongoing supply chain crisis has brought the workings of the nation’s cargo transportation system out of the shadows and into the mainstream news. It’s pretty apparent that a system that had functioned well for decades was not exactly up to the challenge the past couple of years threw at it. Modernization, digital transformation, and AI innovation were, and still are, desperately needed; without them, the whole country feels the consequences.

That’s where Scott Chmiel, Business Development Manager for Cloud Solutions at Arrow, a technology solutions provider; and Steen Graham, Co-Founder and CEO of Scalers.ai, come in. Between their two companies, they help customers navigate the intelligent IoT partner ecosystem—and not only in the environment of smart ports. Deploying AI in the physical world is applicable to all kinds of industries. And the benefits go beyond business to have a ripple effect on society at large.

What challenges do businesses currently face in their digital transformation efforts?

Scott Chmiel: The challenges have changed because the complexity of solutions has increased so much. In the past, everything was contained in a single piece of hardware or software, but now we’re adding cloud, we’re adding complexity, we’re adding technologies that not only require more from a tech standpoint but different skill sets from a development standpoint. Solutions now have to be integrated and deployed into existing customer environments that differ from one to another. Connected devices now require additional operational security. And, obviously, we can do things that weren’t possible before, such as machine learning and AI. It’s possible to solve business problems that we couldn’t even address in the past.

Steen, what is your perspective on these efforts?

Steen Graham: The challenge is deploying artificial intelligence and IoT in the physical world. Take the situation of a port. Obviously ports, and the infrastructure for ports, have been around for decades, and there are various existing applications that are working just fine there, but then you want to implement new technology. So how do you actually deploy these cloud-native methodologies—including artificial intelligence—on the existing infrastructure to do things like analyze efficiency and monitor CO2 emissions? Combining existing infrastructure with new infrastructure, from both a hardware and a software perspective, is critical to driving industry transformation and addressing the challenges in our supply chain.

The current federal government administration has been fantastic in supporting port modernization. But, interestingly, ports are actually managed by their local municipalities, so what those local leaders do has impacts on a national scale. Unions are also critically important to the situation. For example, one of the port jobs that has been sustained in the United States is crane operations. What we’ve automated is the front-end part—removing the containers from the ship—but we still have heavy investment in these human-performed, union-based roles in loading and unloading the trucks. So those three parties, the federal government, the local municipalities, and the unions, are all incredibly important in this current crisis.

How do businesses go about making impactful technology changes?

Scott Chmiel: The first step is understanding what business outcome they’re seeking. What are they trying to accomplish, and who are the stakeholders? In the example of the Port of Los Angeles, there’s not just one company; there’s the municipality, the people handling the containers, the truck drivers, dozens if not hundreds of subcontractors—who all have to dance around each other to run the port. Our solution focuses on their challenges around safety, as well as just tracking in and out.

Combining existing #infrastructure with new infrastructure, from both a hardware and a software perspective, is critical to driving #IndustryTransformation and addressing the challenges in our #SupplyChain. @Arrow_dot_com and Scalers.ai via @insightdottech

Steen Graham: To answer the second part of the question, what Scott and I looked at was a no-compromise solution. From a simplistic, operating-system perspective, there are two pervasive operating systems in the world: Windows and Linux. Cloud-native workloads in modern AI applications are written in Linux, whereas a lot of existing workloads and applications have been written in Windows. By adding cross-platform capabilities to some of these technologies we’ve been able to retrofit the AI applications on existing infrastructure to make sure they work better together. Layering on modern cloud-native attributes and AI capabilities was really the approach we used in this particular solution.

What’s driving this cross-platform interoperability?

Scott Chmiel: Often it’s the existing hardware. And the technology, the infrastructure, can be applied to many different solutions—whether it’s a retail application, within a smart port, or in a warehouse—all the same types of challenges are there, and the same technology can be used and customized or repackaged. It brings additional value to the existing hardware they have, and adds value to it with things they couldn’t do before. In the example with the smart port, it was adding safety, and that’s applicable to retail, too: Before a crane moves through a warehouse, you want to make sure wherever it’s going is clear of people who might be in its way.

Steen Graham: From a technical point of view, we were given a gift, notably by Microsoft and Intel®, with the underlying technology. We use the acronym EFLOW, for Edge for Linux on Windows, or, more accurately, Azure IoT Edge for Linux on Windows. That is what gives us that no-compromise capability across Windows and Linux. And the hidden gem there is that Intel invested in hardware-acceleration capabilities via its integrated graphics that allow us to run these workloads on deployed Intel-based CPUs without having to upgrade to expensive GPUs. Now we can run multiple AI models and multiple camera feeds on affordable, off-the-shelf technologies like Intel’s NUC platform, on Windows and Linux alike. It’s an incredible array of technology that allows us to deploy these modern workloads and make sure they’re interoperable with existing infrastructure.

How is EFLOW used in the port example?

Steen Graham: The EFLOW technology was only released late last year, so we are still in the engagement phase. From a business-outcome perspective, the problem that we were trying to address was the bottleneck associated with turn times: the operational-technology metric of how fast containers can be loaded and unloaded. So how do we optimize the turn times of those cranes? How fast can they be loaded and unloaded? How do we make sure the truck is in the right place at the right time? All while providing an enhanced safety experience for the workers on-site. And we are also tracking CO2 emissions, so another metric we’re looking at is how efficient the hybrid cranes are that many ports are using alongside their diesel cranes.
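
To make the turn-time metric concrete, here is an illustrative calculation from timestamped gate events. The event data is fabricated for the example; in a deployment like the one Graham describes, such events would come from camera-based detections at the terminal.

```python
# Illustrative turn-time calculation from timestamped gate events.
# All visit data below is fabricated for the example.

from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

visits = {  # hypothetical truck visits: (gate-in, gate-out)
    "TRK-101": ("2022-06-01 08:02", "2022-06-01 08:49"),
    "TRK-102": ("2022-06-01 08:10", "2022-06-01 09:31"),
    "TRK-103": ("2022-06-01 08:25", "2022-06-01 08:58"),
}

turn_minutes = [
    (datetime.strptime(out_t, FMT) - datetime.strptime(in_t, FMT)).total_seconds() / 60
    for in_t, out_t in visits.values()
]

print(f"average turn time: {sum(turn_minutes) / len(turn_minutes):.0f} minutes")
```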

What other use cases or challenges might EFLOW solve?

Scott Chmiel: There are lots of opportunities: Transportation, industrial, and retail are a few different verticals. I know there’s a strong focus on retail from both Microsoft and Intel: The opportunities are there to do workload consolidation—consolidation of surveillance and point of sale, where one machine could do both. Or there could be new services that couldn’t be done before; once you have a visual element with the transaction, what kind of value can you generate out of that?

The code, the underlying technology, can be repurposed for any of those verticals. A lot of the work has already been done for them with the accelerators and the tools that Microsoft and Intel with OpenVINO have provided.

Steen Graham: Healthcare is another possible industry. If you look at medical-imaging equipment such as ultrasound, a lot of vendors’ ultrasound systems are Windows-based applications, but those vendors are looking to add new AI-based features. One example: Anesthesiologists occasionally have trouble locating veins in their patients. You could take existing Windows-based ultrasound equipment, overlay modern deep-learning models, and determine the location of a vein with accuracy.

We’ve also seen an incredible demand in using computer vision to do defect detection in the manufacturing process, and I think that’s an incredible use case. If you do in-line AI defect detection, you can find the products that are having quality issues earlier in the manufacturing flow. And if you address those problems earlier in the flow, you actually end up using less fossil fuel to run through the rest of the process.

Can you talk about the partnerships that go into this process?

Steen Graham: Arrow is always looking to figure out how it can make one plus one equal three across its partnerships. So Scott came to us with an incredible idea about showcasing the value of this underlying EFLOW technology, and we were able to take technologies from Intel and Microsoft—and a number of open-source projects as well—to build that solution code. Where Scalers comes in is in really understanding how to fit all those things together into a high-fidelity enterprise AI solution, and then providing that solution, as well as building the custom AI models for deployment.

Scott Chmiel: Arrow calls itself an orchestrator and aggregator, whether that means bringing different technologies, services, or components together, or helping out with design. It’s hard for one company with a vision or a challenge to have all the resources or skill sets in-house to do everything for an end-to-end solution. So what Arrow looks to do is work with that end user and bring in the appropriate partners. We help them pick the right solutions, not only for their end use but also for the longevity, the overall life cycle, of the solution. Smart ports are not something that will be deployed and done within the course of a couple of years. And a solution should also be repeatable: The company that develops it, or that brings these pieces together, can reuse it to create more scale and more value across the ecosystem.

Is there anything else we should know about EFLOW or this topic?

Steen Graham: I think as we talk about the cost of development and software engineering, it’s incredibly important that we write the code to integrate these partnerships. There are so many incredible companies with great technologies, but what many times is missing is the single line of code that connects the APIs to really drive transformation. As an industry, we really have to come together on the deployment challenge, because building capabilities in the cloud is fantastic, and it’s really affordable and easy to do these days. Where the challenge occurs is deploying it in the physical world, and the continuous learning, the transfer learning, the continuous annotation requirements to do that.

And, finally, although we’re getting really good at synthetic data and creating AI models with small data sets, if we really want to move society forward, we have to be able to build models with high fidelity on good data sets. And we have to do it with explainable AI, so that we know why it’s making its determinations in order to make sure it’s as inclusive as possible, as well as accurate.

Scott Chmiel: I’m always amazed, when I talk to companies in specific verticals, whether it’s somebody running a warehouse, somebody in a port, somebody in surveillance or the medical industry, by the amount of knowledge they have about what they do. Their particular solutions are amazing. And as these solutions get more complex, I want to make sure people understand that there’s no need to go it alone. We’re no longer in the days of building a device that does one thing. It’s not just an MRI machine that does imaging; it’s how that machine integrates with the whole hospital. Companies don’t need to figure that out alone, and they really can’t with these more complex solutions. The barrier to what can be done keeps dropping; it’s amazing how many business problems that couldn’t be solved in the past now can be.

Related Content

To learn more about EFLOW, listen to the podcast Fast Track Innovative Apps: With Arrow and Scalers.ai. For the latest innovations from Arrow, follow them on Twitter at @Arrow_dot_com and LinkedIn at Arrow-Electronics.

 

This article was edited by Erin Noble, copy editor.