Self-Service Tech Trends in Retail, Banking, and Hospitality


Self-service kiosks are changing the way consumers interact with retailers, banks, and the hospitality industry. Consumers no longer have to wait in line to get a rich, engaging, and personalized experience.

With new technological innovations, businesses now also have an opportunity to learn more about what their consumers want and implement new multichannel strategies.

In this podcast, we explore the changing role of self-service kiosks across all industries in 2021 and beyond with Dylan Waddle, Chief Operating Officer for global provider of kiosk solutions M3 Technology Solutions (M3T); Stephen Borg, Group Chief Executive Officer for AI technology company meldCX; and David Frei, Vice President of Strategic Partnerships for worldwide kiosk manufacturer Pyramid Computer.

Join us as we dive into:

  • The growing interest in kiosks
  • The impact COVID has had on adoption of kiosks
  • Current and upcoming use cases for kiosks
  • The role of machine vision to create a seamless and connected experience
  • How businesses can get the most out of the kiosk experience

Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and end users. I’m Kenton Williston, the editor-in-chief of insight dot tech.

Today I’m talking about self-service trends with a panel that includes Dylan Waddle, Chief Operating Officer for M3T, Stephen Borg, Group Chief Executive Officer for meldCX, and David Frei, Vice President of Strategic Partnerships for Pyramid Computer.

Our guests serve a huge range of industries—everything from quick-service restaurants to banks. Across the board, they’ve seen a massive surge in demand for kiosks as COVID upended nearly every aspect of our lives.

So what’s coming next? What lessons can businesses carry forward as the world gets back to normal?

But before we get to those questions, let me introduce our guests.

So Dylan, I’ll start with you. Welcome to the show.

Dylan Waddle: Yeah. Thank you for having me.

Kenton Williston: Can you tell me a little bit about what M3T does, and your role there?

Dylan Waddle: Sure. M3T is a provider of fully integrated kiosk solutions. We started 15 years ago developing back-office management systems for banks, casinos—people that were managing a lot of physical currency. That evolved about 12 years ago into manufacturing kiosks—including software, hardware, deployment. We touch about every industry vertical. All of our customers have a unique perspective on how the kiosk plays into their infrastructure.

Kenton Williston: Excellent. So, Stephen, I’d like to welcome you to the show.

Stephen Borg: Yeah, thanks for having me. I’m the CEO and Co-Founder of meldCX. We’re a software business; we also design hardware and work with partners for these integrated experiences. Our focus is to deliver software that drives premier customer experiences using AI and Edge technologies—in this case, kiosks—and working closely with our partners to deliver them.

We’re across multiple industry sectors. And we’ve got a base in the US, Europe, and Australia.

Kenton Williston: Wonderful. So, David, last but not least, welcome. Thanks so much for joining us. And can you tell us a little bit about Pyramid and your role there?

David Frei: Sure. Thanks for the introduction, and for having me here. I’m VP of Strategic Partnerships at Pyramid. Pyramid has been in the market for almost 40 years. We are an IT hardware manufacturer with a focus on security servers and self-service kiosks. We do business in a wide range of different segments—including retail, restaurant, grocery, and healthcare, but also building access control.

And we are a global supplier for many very prominent brands, including Lidl, Adidas, and McDonald’s.

Kenton Williston: So I want to get right into the topic of today’s conversation about self-service in all kinds of different industries. So I would like to hear your thoughts on what is driving the emergence of these kiosks, and why they’re so popular.

So, Dylan, let me just start with you there. One of the things I think is an obvious driver behind this is during the pandemic, of course, there was a lot of anxiety and even regulations about doing business as usual. And a real need to find more hygienic, healthy ways of serving folks. And so self-service kiosks became a huge part of that.

What I’m wondering is how you see that trend continuing as we move now into something that resembles normal.

Dylan Waddle: So, yeah, absolutely. We’ve been talking with senior executives from retailers around the country. Lululemon was one of the most recent to comment, and one of the things they’re heavily focused on is what that in-store experience feels like post-COVID.

And so they’re thinking about inventory levels; what the consumer experience looks like; what happens from the time they walk in the door to the time they leave. How many employees can they have in-store? What’s going to be safe for the consumer? And I think you’re exactly right: COVID has had a major impact on the adoption of the kiosk solution.

So, pre-COVID, kiosks were seen as more of a convenience, but not necessarily something that every store had. And then, post-COVID, we feel like the retailers, banks, et cetera, are heavily focused on kiosks and the role that they’re going to play for the consumer. They’re thinking a lot about what a consumer-service representative does when you’re in-store. Do they help you with your transaction initially on a kiosk? And then there are tools we offer, like wayfinding, purchasing, and paying for things right there on the terminal.

It’s really a complete rethinking of the consumer experience in a retail establishment. And, like I said, banks, retailers—pretty much any kind of experience—is going to begin with a kiosk. People are a lot more comfortable interacting with a kiosk, as long as the flow is simplified and they feel like it’s a simple, easy way to conduct business and figure out what they need.

Kenton Williston: Yeah, absolutely. And I love the points you’re bringing up there about how critical kiosks have been in the retail sector. And I totally agree with what you’re saying there. But, of course, that is not the only use case.

So, David, I’m interested to hear from your perspective what kinds of use cases you’re seeing, and how those are changing. And, for that matter, in different sectors what are the care-abouts that measure success?

David Frei: In the past few years, kiosks had really been established as a preferred digital channel when it comes to upselling and queue busting in pioneering industries like hospitality and retail. Speaking of KPIs, there are great reports available that prove the upside potential—for example, a 30% increase in basket size, and waiting-time reductions of at least 20 seconds per session.

So, to Dylan’s point, that’s really pre-COVID. However, in the past year, not least due to COVID, new use cases—and especially new value focuses—have arisen. One of our biggest successes has been in the temperature-measurement and guest-screening environment. This is really meant to provide a new layer of protection for visitors’ or staff’s health when entering any kind of building.

Needless to say, health has become—and hopefully always has been—the most important goal and KPI in that sense.

Kenton Williston: So, Stephen, let me ask you, this all sounds lovely, but I think a big question mark in my mind is—okay, so we’ve got clearly a huge burst of interest that sounds like it’s going to keep going for a while in these kiosks. But it’s going to demand a lot, I think, in terms of new hardware, new software, new connectivity to back-end systems, and all the rest.

And I’m really curious. As a software provider, how do you see the ecosystem behind these kiosks changing? And, in particular, how do you see the industry responding to demands for new technologies like machine vision?

Stephen Borg: Yes, it’s quite interesting. We started from a software platform, helping customers execute more quickly or helping partners, and what we quickly found, especially around machine vision, was that software and physical-device design needed to be much more tightly integrated.

What we do is help design that service, or work with partners to do that. That’s one of the biggest changes: when you’re going down the machine-vision or AI path, you have to be a lot more precise when it comes to design and lighting, and all those aspects.

But what we’re finding most of all is our customers are trying to replicate, or create, a better customer experience. It’s not just about pulling people through quicker, or line busting, or those traditional use cases. Usually the checklist we get is, “I want to create a richer, more engaging experience, while minimizing the amount of touches. But I want it to be more personal.” So we get this seemingly contradictory checklist.

And we’re finding use cases as broad as retailers coming to us saying, “We want to take initiatives such as reducing wastage in grocery, or reducing prepacks; how do we do that while still making it a better experience?”

So we’re seeing machine vision playing a really big part in those applications that make that whole process seamless. We’re seeing it in postal services, where it can be quite complex—sending a parcel and making sure you fill out your forms right—and doing that on a kiosk may take longer. So we’re seeing machine vision taking on key parts there and filling the gaps. If someone’s filled out a form, it does handwriting recognition and automatically detects the destination and whatever else we need to know, or verifies the address. So it cleans the data on the way through to ensure your parcel gets there.

We’re also seeing machine vision used to connect an experience. We’re doing a retail bank right now, where through tokenization it distinguishes your skill level in using that kiosk. So it can go straight past any instructional content and get you right to the point, because that’s your expectation—once you’ve used it once or twice, you want that interaction to be quite seamless.

And then we’re seeing—we’re doing some work right now with a big bank in Australia; we’re doing some key work with the Marriott Group. And they want to create a universal premium experience, regardless of what brand, and bring that experience down to a kiosk device—not just for check-in, but check-in, valet, any services that you typically require, and you have a choice to experience that on a kiosk.

So we’re finding more and more machine vision and AI applications being brought into these environments. A perfect example is hotel check-in: some countries or regions require identity checking, or verification of a COVID passport. Or, in Australia, you actually have border passes between states at certain times—verifying that you have one before you check in.

So all these types of things are driving the need to tightly integrate AI or machine-vision solutions back into kiosk applications.

Kenton Williston: Really interesting to hear all these different use cases of machine vision. One that I really liked in particular was this idea of a kiosk understanding if you used it before or not to give you the pro-mode experience. That’s really, really cool. And, just across the board, I’m hearing there are a lot of really interesting use cases in how people engage with kiosks, and the kinds of interfaces that are coming to the forefront.

So, David, I’d love to hear from your perspective a little bit more how your customers are using these new technologies to create more inviting interfaces and better personalization. So, what are you seeing?

David Frei: Yeah, so Stephen made a really important point: personalization is what drives our customers as well. And with respect to the innovative POS interfaces that you’re referring to, we did indeed field-test new technologies—for example, gesture control, eye tracking, and even voice interaction—during the pandemic last year.

And it was all in the context of drive-through, click-and-collect, and self-ordering—mostly in the restaurant environment. So, unfortunately, I have to say, none of them really established itself as an alternative to the touch interface, which is still the most intuitive, I would say. However, these attempts have proven to provide interesting user information, which can be leveraged to improve upselling, speed at the point of sale, and—especially—customer loyalty.

Here, as Stephen said, it is really all about personalization. For example, there are interesting conclusions our customers can draw by analyzing the items that a guest looks at while standing in front of the kiosk, and whether or not they purchase those items. You can then use that data to present these items—we call them “items of best chance” to purchase—to all following customers with a similar demographic profile, including gender or age.
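
The “items of best chance” idea can be pictured as a simple aggregation: for each demographic segment, count how often an item is viewed versus purchased, then surface high-interest, low-conversion items to later shoppers in the same segment. A minimal sketch follows; the event structure, segment keys, and threshold are illustrative assumptions, not Pyramid’s actual implementation.

```python
from collections import defaultdict

# Hypothetical gaze/purchase events: (segment, item, purchased).
# e.g. segment = ("female", "25-34"); none of this mirrors a real product.
events = [
    (("female", "25-34"), "iced latte", False),
    (("female", "25-34"), "iced latte", False),
    (("female", "25-34"), "iced latte", True),
    (("male", "35-44"), "espresso", True),
]

views = defaultdict(int)   # times an item was looked at, per segment
buys = defaultdict(int)    # times it was actually purchased, per segment

for segment, item, purchased in events:
    views[(segment, item)] += 1
    if purchased:
        buys[(segment, item)] += 1

def items_of_best_chance(segment, min_views=2):
    """Rank items this segment looks at often but rarely buys—good upsell candidates."""
    candidates = []
    for (seg, item), v in views.items():
        if seg == segment and v >= min_views:
            conversion = buys[(seg, item)] / v
            candidates.append((item, 1.0 - conversion))   # higher score = untapped interest
    return [item for item, _ in sorted(candidates, key=lambda c: c[1], reverse=True)]

print(items_of_best_chance(("female", "25-34")))  # -> ['iced latte']
```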

Kenton Williston: One thing you did mention that I think is important there, is the question about how readily you can actually move away from a touch interface, right? That’s something that people have been very interested in, for many reasons. One of the obvious ones being health and safety—some folks are doing different things, like just taking the tack of going with antiseptic coatings for the kiosks instead as a different way of keeping them safe and healthy.

So, Stephen, I’m wondering, from your perspective, what else people are doing to make sure these high-traffic kiosks stay clean and safe to use. And if there’s any other really important trends you’re seeing in the retail space in terms of how folks are reconsidering how these things should be assessed and interfaced with.

Stephen Borg: We found similar things. The first thing we did was open our platform to a lot of different things, such as eye tracking and some finger tracking—all these things where you don’t have to physically touch the device. What we found was that the end user hadn’t quite adapted to that. And so it didn’t create the best experience.

So we found a few main trends. Antimicrobial coatings were one of them, but we found two other areas that we thought were quite interesting. One was driven by demand, and we’ve made these products available to everyone, regardless of kiosk brand or type.

We created a piece of AI that lets you heatmap touched areas on a kiosk—it uses a combination of the pressure sensor, the touchscreen, and any physical cameras on the kiosk—and it allows you to create a complete digital manifest of areas that were touched. Because based on research we conducted with customers, we found they were concerned about cleaning—making sure that the kiosks were cleaned, and cleaned for long enough in the correct areas, which is critically important.

So what we’ve created is an AI tool that sits in the background and keeps a manifest of everything that’s touched or interacted with. You can set thresholds at a corporate level, and it would message a local attendant, or it would even stop the kiosk being used if it hits a threshold until it’s cleaned. And then it goes ahead and creates a complete digital manifest of who cleaned it, when.

So once you put it in that mode, it shows all the heavy-usage areas, and you literally have to rub them off. So if there’s one area in a touchscreen that’s heavily used, it would literally make you rub that out. It’s like you’re rubbing an eraser—rubbing out pencil. And we found that to be hugely popular, because it gave our customers confidence that their staff on-site were cleaning appropriately. It gave them a full audit of their cleaning activity.

And it gave some customers confidence, too, in certain areas such as airports, where you could see: this is a high-usage kiosk that hasn’t been cleaned yet—go to the next kiosk. And that worked really well.

Kenton Williston: That’s interesting. Yeah, that’s a good point—tracking the multiple benefits, both in terms of giving the customers a comfort level and also helping employees ensure that things are clean. That’s really cool. And I love that cleaning part—it almost sounds to me like a sort of bizarro video game.

Stephen Borg: We gamified it, right.

Kenton Williston: Yeah, exactly.

Stephen Borg: You’re rubbing it off the screen, and making sure every inch of the screen is rubbed out. And if it’s a high usage area, it will make you do that quite intensively, based on some algorithms. So we know that’s clean, right?

And that was actually started by a customer in Australia that had an outbreak even though they were regularly cleaning their kiosks. But they weren’t paying attention to the other devices that are on the kiosk. So they were cleaning the screen, but they weren’t cleaning, say, the PIN pad. So this would create a process flow and say, “Clean PIN pad now,” which you accept on the screen.

And then the last thing you’d clean is the actual screen. And it’ll show you where those points are. We found that’s hugely popular, and it’s been picked up by a few hospitals as well.
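
The cleaning workflow Borg describes boils down to a touch-event heatmap with a usage threshold. Below is a minimal sketch of that idea, assuming a coarse grid of screen cells; the grid size, threshold, and the attendant-notification step are placeholders, not meldCX’s product.

```python
import numpy as np

GRID = (8, 12)            # coarse screen grid (rows, cols) — illustrative
CLEAN_THRESHOLD = 50      # touches allowed in a cell before cleaning is required

heatmap = np.zeros(GRID, dtype=int)

def record_touch(x_norm: float, y_norm: float) -> None:
    """Accumulate a touch at normalized (0..1) screen coordinates."""
    row = min(int(y_norm * GRID[0]), GRID[0] - 1)
    col = min(int(x_norm * GRID[1]), GRID[1] - 1)
    heatmap[row, col] += 1

def needs_cleaning() -> bool:
    """True once any cell crosses the threshold; a real system would message an
    attendant or lock the kiosk at this point."""
    return bool((heatmap >= CLEAN_THRESHOLD).any())

def wipe_cell(row: int, col: int) -> None:
    """Called as staff 'rub out' a highlighted cell in cleaning mode."""
    heatmap[row, col] = 0

# Simulate heavy use of one on-screen button, then check the threshold.
for _ in range(60):
    record_touch(0.25, 0.8)
print(needs_cleaning())   # -> True until wipe_cell() clears the hot cell
```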

Kenton Williston: And a great use case, again, for machine vision there, to see what’s happening beyond the screen. So, Dylan, I want to give you a chance here to talk about financial services. And one of the things that I immediately think about when I think about financial services is that it’s a relatively conservative space that has to deal with a lot of regulations. And it’s not necessarily on the forefront of technologies. But despite that, they’ve had to deal with all the same issues as everybody else during the pandemic.

So, what are you seeing there? Are there some innovations in terms of the interfaces and user experiences people are achieving in the financial services sector?

Dylan Waddle: When I think about financial services, like you mentioned, I always think two steps behind, okay? I definitely believe that, to Stephen’s point and David’s, touchless is the future for these types of services as well. However, banks and financial credit unions are just much slower to move to the latest and greatest technology.

And so what we see from our perspective is more of a limited-touch version, right? So we reduce the number of touches per use. We’ve also focused heavily on identity authentication, through the use of facial- and voice-recognition technology, including a one-time-use code that’s sent to your cell phone. We truly believe that once we authenticate your identity as part of the initial financial-services transaction, we can reduce the number of touches.
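
The one-time-use code is the second factor layered on top of facial and voice recognition. A minimal sketch of that step, using only the Python standard library, follows; the code length, expiry window, and the SMS delivery hook are assumptions for illustration, not M3T’s implementation.

```python
import secrets
import time

CODE_TTL_SECONDS = 120                      # assumed expiry window
_pending = {}                               # phone number -> (code, issued_at)

def issue_code(phone: str) -> str:
    """Generate a 6-digit one-time code; a real kiosk would SMS it to the phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone] = (code, time.time())
    return code                             # returned here only for the demo

def verify_code(phone: str, attempt: str) -> bool:
    """Accept the code once, and only within the expiry window."""
    entry = _pending.pop(phone, None)
    if entry is None:
        return False
    code, issued_at = entry
    fresh = (time.time() - issued_at) <= CODE_TTL_SECONDS
    return fresh and secrets.compare_digest(code, attempt)

sent = issue_code("+1-555-0100")
print(verify_code("+1-555-0100", sent))     # -> True (single use)
print(verify_code("+1-555-0100", sent))     # -> False (already consumed)
```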

The other hurdle that comes into play is data storage, right? So, certain governments restrict the amount of data that can be stored, and then consumers have to be given the ability to decide if they want their personal data stored for use of more advanced technology. So we’ve got to give the consumer the flexibility to decide that.

We also have to deal with things like ADA compliance, right? So, Braille and things of that sort must be included, either on the physical PIN pad or on screen. We’ve seen some new technologies around that, where they actually print the Braille on the screen, so you can feel it when you’re entering your PIN code.

So there are a lot of different pieces that come into play when you think about—how do financial services take the next step? How do they move down the road? So they are anxious to advance, but at the same time they’re also extremely conservative.

Kenton Williston: And, of course, it’s not just banks, right? I mean, financial services is a lot of other things. So, I know, for example, one of the interesting applications you have is converting between cash and cashless payments via a kiosk. So can you give me and our listeners a little more detail about that?

Dylan Waddle: Yeah, so that’s a unique feature that M3T offers—really allowing consumers to insert cash. We live in a world where we’re moving from a cash society to a more digital-payment society. So we live right in that space.

And one of the things we really specialize in is accepting cash, loading those cash funds to a digital wallet—whether it be on your phone or a physical card. We actually have the capability of issuing the physical card right at the kiosk, so you can load that card. You can also issue change, if you want to give someone change from a transaction by loading the physical card.

We see that in open-loop use cases, as well as closed loop. Closed loop being like for public transit, where the card is only used for that one specific purpose. Open loop being more like a branded card—a branded MasterCard that can be taken anywhere and used.

Our kiosks have the flexibility of allowing you to return to the kiosk, stick the card in, and remove your cash funds from that card as well. So, yeah, it’s all about consumer flexibility and driving that consumer experience for the future.

When you think about putting cash in, once the cash is inserted in the terminal, it’s almost like the sky’s the limit to the functionality. But we’re doing things like bill payment for cities, right? Like in the city of Austin you can insert cash in the kiosk, you can pay for a permit. If you’re going to build a deck on your house or something: insert cash, pay for that permit.

Other cities, like in California, are taking cash for paying your property taxes. Consumers want to come in and pay cash for that purpose; the cities want to be able to give the consumer change back and/or issue change on a prepaid card. So that’s really where those kiosks live, and that market for us is growing significantly.

Kenton Williston: Well, I’ve got to say, as a California resident who actually just bought a house here in the last couple of months, the idea of paying property taxes in the amount of taxes I’m going to be paying here, in cash, blows my mind.

Dylan Waddle: Yeah. I completely agree with you. I was absolutely blown away with the amount of cash that people are spending on property taxes; it’s millions of dollars a month.

Kenton Williston: Wow. The other thing you mentioned there—this also is something that Stephen brought up—was about personal data. And something that Stephen mentioned—and this is why his name popped into my head here—was the idea of tokenization, right?

So this is something I think has been huge, huge, huge in the kiosk space, is being able to recognize repeat visitors in a way that is anonymous and doesn’t store their personal data. Is that something you are seeing in the financial services space as well?

Dylan Waddle: Yeah, absolutely. I would say they’re one of the very first industries to move toward tokenization. And with the level of compliance we’re required to maintain on an ongoing basis, tokenizing the data is really the only ultra-secure way to do it. Encrypting the data was sort of step one; the next is storing a token, even using a third-party tokenization service.

A good example of that would be like Trustly, where they provide a token so that you’re not actually storing that data. We heavily believe in that, because we’re trying to provide, in essence, another level of security for personal data.
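
Tokenization, as described here, keeps the sensitive value in a separate vault and lets the kiosk and downstream systems handle only an opaque token. The sketch below illustrates the idea with an in-memory vault; a production system would use a hardened, third-party tokenization service, and the token format shown is an assumption.

```python
import secrets

class TokenVault:
    """Toy token vault: stores sensitive values and hands back opaque tokens."""

    def __init__(self):
        self._store = {}                    # token -> sensitive value

    def tokenize(self, sensitive: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)
        self._store[token] = sensitive
        return token                        # safe to store, log, and pass around

    def detokenize(self, token: str) -> str:
        return self._store[token]           # only the vault can reverse the mapping

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")   # card number never leaves the vault
print(token)                                     # e.g. tok_Jc3... — useless on its own
print(vault.detokenize(token))                   # a restricted, audited path in real life
```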

I think the consumer is still a bit behind. They’re still trying to understand—how is this data being stored securely? And, at the end of the day, they see on television where there’s been data breaches and their information has been stolen. So people are nervous, and rightfully so, about having their data stolen. It’s always a grind between: do you want the technology to provide you ultra-convenience and let it store your data? Or do you want to go through a more manual process?

And so I think, from a financial services perspective, we’re pushing heavily in that direction. But we’re also mindful that some consumers are ultra-conservative about letting technology store data. And we certainly understand that. I think that from financial services—yeah, that’s going to be—it touches every single person’s life as we go forward.

Kenton Williston: Absolutely. So, David, I’m curious what you’re seeing with your customers. There are a couple things here I think are worth diving into a little deeper. So, first—in terms of tokenized or otherwise pay-by-face and loyalty programs—what are some of the big trends you’re seeing?

David Frei: Well, obviously, loyalty programs are a huge topic for all of our customers in multiple segments, not just restaurants. There’s that golden rule—nothing new—that there’s no higher cost than the cost of new-customer acquisition, right?

So when a customer asks me, “How do we get our existing customers back to our store?”—well, I can only answer that the best way is to have a fundamental multichannel strategy and knowledge of your customers’ preferences, which you can then trigger through multiple channels, right? And what would be the easiest way to recognize your customer on-site?

For some of our clients, we would say face identity. Putting the whole tokenization and GDPR topic aside, facial recognition is an incredibly interesting application field, where we have already tested pay-by-face, where you have a biometric match. There’s one initial identification process, and then you’ll always be recognized, not only in front of the kiosk, but also, for example, by digital signage.

And then also personalized menu adjustments. So if you don’t want to give your full set of data, it’s also enough that the system knows your demographics to adjust the whole menu board and make it more relevant for you, and also easier to do the whole checkout process. And what we also tried in the past, which is very efficient, is mood detection. So depending on the mood of the user we can then offer the most relevant products.

Kenton Williston: How would that work in practice? Can you give me an example?

David Frei: We tested a software which basically detects very significant moods—like whether the customer is smiling or not smiling, whether it’s a group of people or a single person. And then, depending also on some other information—such as the weather outside, et cetera—there’s an algorithm which attempts to give the best offers for those specific conditions.

Kenton Williston: Got it. That makes sense. And of course, Stephen, as you’ve already pointed out, interfacing with the customer is not the only use case for things like machine vision and AI more broadly. Can you tell me a little bit more about the ways you see your customers using machine vision for these sorts of applications?

Stephen Borg: So, we’re finding that there are really three areas to use AI on a kiosk. One is keeping the kiosk operational—I guess we’ll discuss that later. The other two, really, as the other guests have been talking about, are customer identification and product recognition. For identification we do what we call anonymous. So even though it’s tokenized, it’s still anonymous—we process at the Edge.

So we work closely with Intel to extract any personalized data at the Edge before it goes to the cloud. And we do some unique things because of that. So, if a customer comes up to the kiosk, we’re using not only detection so we can create a token—we don’t typically use face; we don’t want to store that data—we use a whole lot of different things to create a manifest. Right down to—we have a customer that wants to detect the type of handbags shoppers are holding when they’re interacting with their devices, so they know what their spending capacity is, which is really interesting.

So that’s in a shopping center. So they’ll know: Do I bring a Gucci into this shopping center, or is it a Coach? Those are the types of things that we’re starting to look at.

And, more interestingly, there’s product recognition. We’re seeing more and more customers, especially in grocery—and we’ve been seeing it in other areas—where they want to reduce waste; they want to be more conscious, not only about wasted packaging, but about people only taking the portions they need, right down to food or soaps or coffee, or all different things.

We even have a customer that’s doing it for nuts and bolts. So we use deep learning to detect the object that’s in there—even through bags—and let the kiosk know what it is.

So the customer has a very seamless experience; the customer doesn’t need to enter a PLU code or a barcode or scan anything, they just put the object on there, and it’s remarkably quick. We’re finding that being a key area, and then overlaying that with different forms of recognition.
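
The “no PLU code” flow depends on an image classifier identifying what is sitting on the scale. A minimal sketch with the OpenVINO Python API (2022+ releases) is below; the model file, label set, and PLU mapping are placeholders, and a real deployment would add preprocessing, confidence thresholds, and handling for items seen through bags.

```python
import numpy as np
from openvino.runtime import Core   # OpenVINO 2022+ Python API

LABELS = ["banana", "gala_apple", "coffee_beans"]                    # hypothetical classes
PLU_BY_LABEL = {"banana": "4011", "gala_apple": "4133", "coffee_beans": "BULK-021"}

core = Core()
model = core.compile_model(core.read_model("produce.xml"), "CPU")    # placeholder IR file
output_node = model.output(0)

def identify_item(frame: np.ndarray) -> str:
    """Classify the item on the scale so the shopper never keys in a PLU code.

    `frame` is an HxWx3 image already resized to the model's input shape.
    """
    blob = np.expand_dims(frame.transpose(2, 0, 1), 0).astype(np.float32)  # NCHW layout
    scores = model([blob])[output_node]
    label = LABELS[int(np.argmax(scores))]
    return PLU_BY_LABEL[label]
```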

So, recognizing labels as well, to make sure there’s certain compliance—we’re finding that on deli and meat products. We’ve got a customer that will make sure a client doesn’t leave with something that’s out of date, or very close to its use-by date. All those types of things are starting to come through to make the shopping experience more convenient, but with other goals—environmental, waste, or food safety.

Kenton Williston: So, a lot to unpack there, but one thing that did stick out to me that I would like to revisit with the rest of our guests: you mentioned the way that you’re working with Intel—and I should mention, in the interest of full disclosure, that insight.tech, this program and podcast, is produced by Intel.

But, Dylan, I wanted to put this question to you, building on that thought about the Intel technology that I know all of you are using inside of your kiosks. One of the things I think that’s important about that is how it can enable your kiosks to be good citizens from a corporate IT perspective.

So I’d love to hear your thoughts as businesses are thinking—How can I get the most out of my kiosk? That is not just what the kiosk does by itself, but how it resides within the larger IT infrastructure. What do you think are some of the most critical considerations there?

Dylan Waddle: Yeah, so when you really think about how a kiosk lives in a corporate IT infrastructure, most of that discussion is around monitoring, maintaining, and applying patches correctly. From an Intel perspective, the OpenVINO toolkit is pretty amazing for supplementing a remote-connection tool that allows you complete access to the BIOS and gives you the flexibility to even restart the computer when it’s down.

So, to plug Intel’s product, the OpenVINO solution is amazing for that purpose. We do that in concert with—we call it our kiosk-management system, or route-management system—providing a real-time view of all the terminals that are deployed in your network. So it really comes down to the merging of the IT initiatives with these terminals, and how that gets maintained and handled correctly.

The other piece that goes in concert with that is the encryption of the data really on card-ins, and then the tokenization of that data, so that you’re completely securing the data all the way from the terminal through the network to the processor and back.

We just feel that Intel has provided a leg up from that perspective. And we leverage as many other tools as we can, including, as I mentioned before, tools like facial recognition and voice recognition—their RealSense solutions are absolutely incredible for those features. And they provide, from our perspective, a plug-and-play solution that we love using.

Kenton Williston: Of course, it’s not just the integration with the big-picture IT infrastructure—there’s also, of course, whatever happens to be on-site. Particularly if you’re in a retail or QSR environment, there’s going to be lots of other electronic equipment on-site. One of the things we’ve already talked about was transporting information between kiosks and digital signage, right?

So there’s a good example right there of how, even locally, in more of an Internet of Things sense, these kiosks need to play well with other devices in their local environment.

And so, David, I’d like to hear from you what you see happening in terms both of just what is the trend of kiosks integrating more with other on-site devices, and what needs to happen for that to be done effectively.

David Frei: Yeah, so integrating into existing infrastructure is definitely key. As mentioned, a kiosk is necessarily only one piece of the digital puzzle; you have the greatest effect when it integrates really seamlessly into existing infrastructures. And that could be an ERP system, which contains all the article information, customer information, et cetera. Or, in the restaurant environment, the existing POS, which again transfers all the article data, but also handles the whole payment processing.

But there are also other components—such as the web, mobile, or delivery piece of the digital puzzle—which you nowadays really need for a whole multichannel or omnichannel approach.

Also, not to mention the on-premises data processing, which simply requires a specific server infrastructure. All that is, of course, a pretty comprehensive journey.

Kenton Williston: Absolutely. And, Stephen, how about from your point of view on the software side of things? What should folks be thinking about in terms of incorporating things like out-of-band management—and are there any other management technologies you think should be front of mind—as well as integrations with the larger software and services universe? Things like Salesforce. What do you see as the critical considerations for those elements?

Stephen Borg: We say there’s multiple layers of how to interact with the kiosk, right? So, there’s your customers that interact with your kiosk; your associates that may interact with the kiosk—they may have to refill printers or printer paper, or cards if it’s hotel check-in—right down to information that gets back to the support and help desk services.

So we’ve taken that approach. And we’ve actually worked with Intel on multiple solution-ready kits for other kiosk manufacturers. They go to the level of using AI tools at the endpoint to look for the most common areas of failure, and avoid the need to either interrupt the transaction or have support services called.

So I’ll give you an example. There are a few legacy payment-terminal types, or some merchant acquirers, that might have a situation where the PIN pad gets out of sync with the kiosk software itself. In that situation, that might require a hard reboot, or for the PIN pad to wait for a timeout, which ultimately creates a bad experience for the customer. They’re in a state where they don’t know if their payment’s being processed; they don’t know how to move on.

So, in that case, we have six different layers of what we look at—AI timers that look at various operations at a kiosk and automatically intervene. It doesn’t just reboot the device—because we think that’s not a great experience—but might cut power to the PIN pad and re-engage that power while you’re in that transaction so you can continue on. Or it might cycle a card reader—all those things that you would typically do when you’re calling a support desk, it does automatically. And sometimes seamlessly.
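
At its core, the “AI timer” layer is a watchdog that notices a stalled peripheral and tries the least disruptive recovery first. The sketch below illustrates that escalation logic with a simulated PIN pad; the stall window and the peripheral interface are hypothetical stand-ins, not meldCX’s actual stack.

```python
import time

STALL_SECONDS = 20   # assumed: no PIN-pad heartbeat for this long mid-payment = out of sync

class SimulatedPinPad:
    """Stand-in for a real PIN-pad driver exposing a heartbeat and power control."""
    def __init__(self):
        self._last_heartbeat = time.time() - 30   # pretend it stalled 30 s ago
        self._responds_to_soft_reset = False      # simulate hung firmware

    def last_heartbeat(self) -> float:
        return self._last_heartbeat

    def soft_reset(self) -> bool:
        return self._responds_to_soft_reset

    def power_cycle(self) -> bool:
        self._last_heartbeat = time.time()        # cutting power usually clears the hang
        return True

def recover_pin_pad(pad) -> str:
    """Escalate gently: soft reset first, power cycle next, reboot only as a last resort."""
    if time.time() - pad.last_heartbeat() < STALL_SECONDS:
        return "healthy"
    if pad.soft_reset():
        return "recovered: soft reset"
    if pad.power_cycle():
        return "recovered: power cycled"          # the transaction can continue
    return "escalate: notify attendant / reboot kiosk"

print(recover_pin_pad(SimulatedPinPad()))         # -> recovered: power cycled
```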

So we’ve been heavily focused on that, because our mission is a seamless experience—not only for the customers using it, but for the customers deploying it. One of the first pieces of feedback we got when we surveyed customers was that the operational costs of a kiosk can be quite huge if they’ve made the wrong decisions or haven’t considered these aspects.

So that’s one area that we’ve been heavily focused in. And Intel’s been fantastic, giving us access to tools and getting our kits ready to market so others can use them. And then, further, we’ve created some universal APIs, I guess, not only to common peripherals, but with over 3,000 integrations to things like Salesforce and ServiceNow. So customers can just easily take their API token, apply it, and they’re ready to go.

So we had a recent customer that wanted to send all front-end communication—all communication to associates, such as a card-reader stack, or something an associate can check in the middle of a transaction, or a transaction that’s been aborted multiple times—those types of things. It would send it to that attendant, they’d accept it, and they’d go check out the kiosk.

And then, for things such as preventative failure, we might notice that—say, there’s a bit more drag on the card reader, or the touchscreen’s not as responsive as we would like. We send some predictive-failure analysis back through something like ServiceNow and automatically create a service ticket. And they can choose to close it, or they can choose to keep it as an outstanding ticket for when their next service person is in the area.
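
Turning a predictive-failure signal into a ticket can be as simple as a POST to the service desk. The sketch below uses ServiceNow’s standard Table API as the example mentioned above; the instance URL, credentials, and field values are assumptions to verify against your own instance.

```python
import requests

# Assumed instance and credentials; replace with your own ServiceNow details.
INSTANCE = "https://example.service-now.com"
AUTH = ("kiosk_integration_user", "********")

def open_predictive_ticket(kiosk_id: str, signal: str) -> str:
    """File an incident from a predictive-failure signal via the ServiceNow Table API."""
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={
            "short_description": f"Predictive failure on kiosk {kiosk_id}",
            "description": signal,
            "urgency": "3",   # low: can wait for the next scheduled site visit
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]      # e.g. INC0012345

# Example: the edge agent noticed degraded card-reader behaviour.
# ticket = open_predictive_ticket("KIOSK-042", "Card-reader swipe failures trending up")
```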

So we’ve really looked at the operational costs and the way in which it’s managed from an IT point of view, but also from an attendant point of view.

Kenton Williston: Very good. So I see we’re getting close to the end of our time—or at least our scheduled time. So, Dylan, I want to give you a chance—is there anything you would like to leave as a key takeaway with our audience?

Dylan Waddle: So, I know we’ve certainly talked about a lot of different technologies and how they apply to the kiosk industry and the kiosks themselves. I would just like to say that, as you consider the future for your establishment—whether it be retail, banking, gaming, hospitality—just to please consider the crucial role that kiosks play in supplementing your organization.

And also, really, think about melding the online digital experience that consumers are having on their phones with what happens as those two things come together when they enter your store—your establishment of almost any type. These things play a big part in reducing inventory and providing overall greater customer satisfaction when they’re in the store. And really try to simplify the content that you’re providing when they’re in the store as well.

We just believe the kiosk plays a major role as we go forward into the future. So we would just encourage people to take a look at them, consider them, think about how it works in their environment. And we’d be glad to discuss it, as I’m sure my colleagues on this phone call would be as well. We’re happy to help any way we can.

Kenton Williston: Excellent. And, Stephen, is there anything you wish I had asked you that we didn’t quite get to?

Stephen Borg: Yeah, I think the one thought I’d leave is when we talk to customers now, especially about kiosks, it’s so advanced, and there’s this perception that they’re very transactional. We really start the journey and ask our customers, “What would you like the kiosk to hear, see, and do?” Because it really is about that.

It’s about having a virtual assistant that you’re creating, and what do you want the inputs to be, beyond the functional use case? And I think when customers think that way, they think more broadly. And you see some really interesting use cases, and those manifesting into great experiences.

Kenton Williston: Perfect. And, David, I’ll give you the final word. What final thought would you like to leave with our audience?

David Frei: One thought, or one learning that I had during last year, testing all these very innovative approaches, is how valuable it can be to approach digitalization by starting from the beginning. So it’s very obvious that we are operating in a really incredible, interesting, and fast-changing world, and exciting innovations are appearing, probably every day.

And there’s a great temptation for customers to start digitalization with the desire to basically do everything at once. So I’m a big fan of a more conservative digitalization approach, where I see real value in simplicity, step by step—starting a very solid, fundamental digital journey with components that have already proven their return on investment, where you have comparably low risk.

Kenton Williston: Wonderful. Well, with that, I’d like to thank all of our guests for joining us. So, David, thank you so much for sharing your time and thoughts with us.

David Frei: Thank you.

Kenton Williston: Stephen, likewise. Thanks for joining us.

Stephen Borg: Yeah, thank you for having me.

Kenton Williston: And, Dylan, you as well, of course.

Dylan Waddle: Thank you for having me. I enjoyed it.

Kenton Williston: And thanks to our listeners for joining us. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app.

This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Q&A: Seven Dirty Secrets of IoT

It’s easy for IoT projects to go wrong. Cost overruns, security flaws, or simple lack of user adoption can derail even the best innovations.

So how can you avoid the pitfalls that cause so many IoT projects to fail? To answer this question, Kenton Williston, Editor-in-Chief of insight.tech, spoke with Amol Ajgaonkar, CTO for Intelligent Edge at Insight Enterprises. Insight helps organizations—and the SIs that serve them—accelerate their digital transformation by unlocking the potential of the IoT. Here are the seven secrets Amol has uncovered for creating successful IoT technology.

To hear the full conversation, listen to our podcast Seven Dirty Secrets of IoT.

What is the Intelligent Edge?

Kenton Williston: Your title is CTO for Intelligent Edge. The first thing that comes to mind for me is AI. What does “Intelligent Edge” mean to you?

Amol Ajgaonkar: AI is one use case for the Intelligent Edge—and this is across industries. You consider manufacturing, you take into account retail, energy, healthcare—and you look at all the devices. You look at how people are interacting. All of those entities are generating data. And there is some intelligence in that data that can be taken out and made actionable.

So Intelligent Edge, for me, is processing that data where it is generated, and then correlating that data with other data sets that are also being generated in that same area. It’s being able to provide actionable insights back to the users so that they can do their jobs better. And any time there is a repeatable action that needs to be done, we can also automate that part.

Secret 1: Define Your “Why” and What Success Looks Like

Kenton Williston: I’ve seen studies that say up to 75% of IoT projects fail. Why is that?

Amol Ajgaonkar: The one question that needs to be answered before you do anything, or even touch any device technology, is “Why am I doing this?” If the “why” is defined, then your solution is bound to be a little more successful. Most of the time projects will fail because they have unrealistic expectations from technology. No one has defined the “why.”

The real value of an IoT solution or Intelligent Edge solution is to be able to look at that data holistically.

I think correlation of those data sets, being able to put all of that together, gives a holistic view—not just from one location.

Connecting back to the cloud and being able to collect data from multiple locations, and using the inferences and actions taken by humans, actions taken by machines—and then bringing it all back to the cloud, and then analyzing that data holistically across locations—provides a much richer actionable insight.

And then pushing that expertise back into those locations—so that the local AI models that are running are running smarter, because now they have additional data that has been provided for training, and the model has improved accuracy, or is taking into account variance in their independent and dependent variables.

Secret 2: Think Holistically, Beyond the Edge

Kenton Williston: So how do you get your company to think about IoT projects differently?

Amol Ajgaonkar: It’s a journey, right? There’s not an end state. But define your milestones in that journey. Define what you need to do, and then go step by step—take smaller approaches, carve out a smaller piece. But even for those smaller pieces of the puzzle, think holistically.

Then, look at the physical landscape: How many devices do you need? What type of integrations do you need? What type of data sets do you need? Who’s going to give you the data? Is it a machine? Is it human input? Is it cameras? Is it existing infrastructure that’s already developed? Is it the environment?

Secret 3: Set a Strategy with Clear Milestones

Kenton Williston: How do you execute this sort of strategy?

Amol Ajgaonkar: Once you have the strategy in place and documented, understand which teams need to be involved as well. Not just the teams that are going to work on the pilot, but also the teams that are going to be affected by that solution in the future. Because you need to have the right buy-in from those groups as well. Otherwise they will not adopt it.

If they see this solution as something that’s going to add more work for them, they are going to resist. Humans resist change. So if you bring them on board earlier and understand their pain points, understand how a certain solution is going to affect their day-to-day job and if it’s going to make them successful—they are going to provide more data to you.

I think I really focus on the people aspect of the solution. Because if the people who are affected by the solution don’t really think that it’s going to add value to them, they’re not going to use it. And if they don’t use it, it’s of no use, right? It’s a waste of money. So make sure those people are on board.

Once you have them on board and have helped them understand how this is going to make their lives easier, they will adopt. And they will ask for it, and they will give you feedback on what needs to change. And that is how that system will transform.

Secret 4: Build for Scale and Identify Your Resources

Kenton Williston: It sounds like a big part of success is having an end-to-end strategy. What does that mean for you?

Amol Ajgaonkar: If you want to take it to production, you have to think about scale. You have to think about security. You have to plan—where will you procure those devices from? Who will image those devices? Because you need each device that’s coming out and being deployed to be the same, so you get consistency.

Security is a big component. And so always think about security across all the efforts—whether it’s the hardware or the software stack. And look for frameworks that are already established and have been tested, rather than trying to build security frameworks from scratch.

And this is just all pre-production. Once you have the procurement, once you have the installation, once you have the imaging, and you have your security strategy in place—then comes deployment.

The first time you’re going to deploy, you try and deploy to one or two locations. As part of deployment you have to plan for what kind of effort is required in deploying that kind of solution. Is it cameras that you have to go and install? If so, do you have the wiring in place? Do you have the electrical in place? The networking in place? Is your networking infrastructure capable of handling the additional load? Will that affect any other existing systems that are already in place?

All of these things have to be planned before you start deploying to production. Going back to why projects fail—if some of these things are missed in planning, when they actually deploy then they realize, “Oh, for this, I needed to upgrade my network.” Or, “I don’t know who’s going to install the cameras, or who’s going to integrate into the PLCs.”

And when you get into production, there should be even more focus on security: “If I were to ship this out to 10 locations, who is the person that’s going to install it? Where will it get installed?” All of those questions need to be answered.

Secret 5: Plan for Management and Maintenance

Kenton Williston: So you’ve got your solution, it’s working, it’s successful—then what?

Amol Ajgaonkar: Let’s say we have all of it planned, and we’ve deployed now to one or two locations. Then comes the question: who’s going to manage these? Once it goes out of your facility and the solution is in production and it’s at the location—well, it’s on its own.

And if something were to change and you need to update your software stack—let’s say you’ve got containers running and you need to update those containers—how are you going to do that at scale? And do you have the right teams in place to support a solution like that? Or should you rely on partners to come in and help you support the manageability of those devices?

So, management and support—or monitoring after the fact—is also super important for a successful solution. All in all, it does seem complicated. It does seem like, “Oh my God, there’s so much to do to make this successful.” But if you rely on partners, and if you have a good plan in place, it’s actually not that hard.

It is just like any other project—where if you plan and do it right, and take into consideration all of these aspects, the solution will definitely succeed.

Secret 6: It’s a Journey, not a Destination

Kenton Williston: How do you communicate a new approach to colleagues? And how can Insight Enterprises help you execute?

Amol Ajgaonkar: First, be okay with ambiguity, because nobody has all the answers. It’s fine, because the problems that businesses face never come with a manual on how to solve them. It’s always a new or newer problem.  But as long as you define why you’re doing what you’re doing, everything else will fall into place in due time.

Secret 7: Leverage Existing Infrastructure

Kenton Williston: From a practical point of view, where should designs start—in other words, how can you make best use of what you already have?

Amol Ajgaonkar: It all comes down to two things, in my mind. One is cost. Nobody wants to spend money building a solution from scratch. That’s why the point solutions or off-the-shelf solutions make sense, because you can just go and buy and test, and you don’t have to spend so much time and money in building a solution. Makes complete sense. When it’s a brownfield situation, where there might already be certain solutions deployed, we also work with those solutions and integrate them.

We don’t always have to build everything from scratch. We rely on our partners a lot, and bring their solutions in to provide that big-picture, holistic solution back to our customer. We work with Intel a lot—and not just on the hardware side, but also on the software side.

Using the frameworks that Intel already has, like OpenVINO or OpenAMP, and looking at how they’re designed—it really helps leverage whatever infrastructure that the customer already has. Which is great, because cost is a big factor in building the solutions.

If the customer says, “You know what? I’ve got these Intel-based servers, or these smaller devices that I already have in my facility. Can you reuse those?”—and if the answer is “yes,” it’s amazing, because I’ve just saved my customer a ton of money. They don’t have to spend money on buying new hardware at that point.

It might feel like it’s a lot of effort, but truly, with the right partners in place it makes that solution easy to build, deploy, and see the value of. Maybe it’s just that I’m passionate about the Edge and solutions at the Edge, but I feel there is a huge value for our customers in building solutions at the Edge, and then managing these solutions or these workloads through the cloud for scale.

It’s not all hype. There is some real value in the solutions. It’s just a matter of realizing where that value is.

Q&A: Collaboration Technology and the Future of Work

The COVID-19 pandemic changed the way we work—and where we work—and those changes will be felt far into the future. As people begin to head back to the office, enterprises face crucial challenges in reimagining their workplaces for this new reality.

How can the needs of remote and on-site colleagues be balanced? How can employers accommodate new work styles like hot desking? How do unified communications systems fit into the picture?

To answer these questions, Kenton Williston, Editor-in-Chief of insight.tech, spoke with Andrew Gross, VP for Enterprise Sales at Crestron, a global leader in audiovisual technology. We were joined by Wei Oania, General Manager of Education and Collaboration for the Intel IoT Group, who shared her insights on the future of the workplace.

To hear the full conversation, listen to our podcast Crestron’s Vision for Collaboration Technology in 2021.

How Will Digital Transformation Affect the Workplace in 2021 and Beyond?

Kenton Williston: Andrew, can you tell me about your role at Crestron?

Andrew Gross: I lead our sales and technical decision makers around the globe in being evangelists and advisors in what the future workplace looks like, and how technology can help.

I like to say that what we do is make enterprises, meeting rooms, and offices smarter and more connected. The most recent focus has been on becoming one of the world leaders in Teams and Zoom rooms, which are making their way to everybody’s home office and their office-office.

Kenton Williston: Wei, what is your role at Intel?

Wei Oania: Our team globally focuses on accelerating technologies into schools and enterprise offices. Our goal is to improve and positively impact the way we learn and the way we work.

Very similar to Andrew, but our group focuses more on the enterprise side. We want to make sure that we set up a similar technology and similar usage in schools and campuses, as well as in enterprise offices.

Kenton Williston: This last year forced an acceleration of digital transformation and the way we work. How will this affect the nature of the workplace in 2021 and beyond?

Andrew Gross: COVID really was the catalyst to what we’ve all been calling digital transformation. The difference today is that we’re actually doing it. And I think that’s the really exciting part—we’re not just talking about it.

A great example of what it means to actually do digital transformation—or adopt or embrace digital transformation—is the enablement of offices and homes, and really all workers, to be connected and integrated all the time.

I think a great example of that is in the office space of just two years ago. If you wanted to meet with a colleague who was halfway around the world, or in a different office even in your own state, you had to go to a conference room, maybe dial them on your phone. Or you dialed them on your laptop and then connected it into the room to bring them in virtually. The technology was there, but it was adjacent to your daily activities.

Today, the technology is integrated into our lives, or integrated into the spaces. The meeting room itself lives in the cloud, and the technology connects to the virtual meeting room. And everybody has a somewhat democratized meeting experience—whether they’re in the office or remote.

The Emerging Role of IoT Technology

Kenton Williston: Where does IoT technology fit into this new workplace? Are there any new trends?

Wei Oania: IoT technology has already impacted our lives and is making them better in terms of providing very easy and frictionless living. So, from a working perspective, that should be the same.

COVID has just sped up that transformation. We know that the current workforce is mobile, collaborative, and geographically dispersed, and the future workforce is going to be even more so.

With the right tools, with the right technology, we can enable that collaboration much better. It’s thinking about inclusion and belonging—making every worker feel like they can contribute equally. And these are the types of things we often don’t think technology is there to do, but it can.

Looking at conference rooms today, they’re just regular meeting rooms. But looking forward, they will become collaborative rooms that focus on video with remote annotation and sharing.

Then the next step—we’re looking at smart meeting rooms. How do we insert audio and video enhancement? How do we use analytics and data insight? Can we do some transcription automatically? All of this on top of security and management.

Last, what we really want to achieve is getting all these stages together—immersive meeting rooms that would ultimately offer that frictionless meeting experience for us.

So certainly IoT is playing a vital role in all of this. But I also want to say that it doesn’t have to be overwhelming. Technology is smart enough and modular enough that now we can do a step-by-step approach to ensure that different things can be connected in the time frame you want, and also to make sure they are affordable and accessible.

Enterprise Strategies for Collaboration Technology

Kenton Williston: What best practices should enterprises follow as we enter this new era of collaboration? Do you have any examples of organizations that are already doing things the right way?

Andrew Gross: A keyword that I know Crestron has always been focused on is “automation.” How is my life enhanced and maybe made easier by the technology around me?

We’ve actually seen two phases of this. The first wave, which was right at the beginning of the pandemic, was that those who were in the office were seen as first-class meeting participants because they had better technology—they were heard and seen better. Those at home were seen as second-class participants, where they were using their own laptop, audio, or video—they weren’t really seen as being connected to the meeting.

That started to change near the end of 2020, as we became work-at-home experts, or hybrid-work experts. And what happened was that a lot of technology made its way into the home to enable a better meeting experience.

What we’ve seen now is that the at-home worker is starting to be seen more and more as the first-class participant, and the in-office worker is seen as the second-class participant.

The only way to bring that into balance—whether you’re in the office or at home—is through automation—video and audio technology that’s not just good enough, but truly enterprise grade, regardless of where you’re taking the meeting. That’s what the greatest enterprises that have deployed this today are really doing a great job of—ensuring that their employees are engaged, regardless of where they’re meeting with their colleagues.

A New Role for Unified Communications

Kenton Williston: Before 2020, platforms like Zoom and Teams were a secondary mode of communication used for specific purposes. Now they are the default—and they are evolving into the basis of unified communications platforms. What makes for a successful deployment of a unified communications platform?

Andrew Gross: I’d say that you need to break down the deployment of unified communications into two main aspects: First is the software. What is my standard platform? What am I rolling out as my majority share for my enterprise to meet and collaborate over? That can be chat, file share, video content, audio meetings.

Then I make the hardware decision, and the hardware decision gets into probably a larger decision-making process. I think there are four key things when you’re looking at a hardware platform to support the software standard that you’ve deployed—automation, intelligence, awareness, and management.

Automation: How do I now take that software deployment with my hardware system in my meeting spaces and automate it? Reduce touch. How do I make it simpler for my teams and my colleagues to join meetings?

Intelligence: How are we making our rooms smarter? One of the great pieces of technology that we’ve integrated with Intel is the ability for our cameras to actually count people in a space.

Awareness is giving data and information to employees across the office—which spaces are booked, which spaces are available. Of course now it’s which spaces are clean for me to use?

And the last one is management. I think we’ve all agreed that more technology is certainly a big part of the answer here—to enable workers in this hybrid-working format. But more technology means more management. This technology is valuable only if it’s actually working.
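As a rough, hedged illustration of the “intelligence” and “awareness” ideas Gross describes, the sketch below counts people in a single camera frame and reports a simple occupancy status. It uses OpenCV’s stock HOG person detector; the camera index and capacity limit are hypothetical, and this is not Crestron’s or Intel’s actual pipeline.

  # Minimal occupancy sketch: count people in one camera frame and report room status.
  # Camera index and capacity limit are hypothetical.
  import cv2

  ROOM_CAPACITY = 6  # assumed post-COVID capacity limit for the room

  hog = cv2.HOGDescriptor()
  hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

  cap = cv2.VideoCapture(0)          # room camera
  ok, frame = cap.read()
  cap.release()

  if ok:
      boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
      count = len(boxes)
      status = "over capacity" if count > ROOM_CAPACITY else "available"
      print(f"Occupancy: {count}/{ROOM_CAPACITY} ({status})")
  else:
      print("No frame captured; check the camera connection.")

A production system would run this continuously at the edge and publish the result to the scheduling panels outside the room.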

Preparing for the Return to Work

Kenton Williston: How should enterprises update their infrastructure to support these emerging use cases?

Wei Oania: From a deployment and enterprise-setup point of view, it’s about what you would like to offer your employees, and how that is going to work with the integration of other ingredients. But the important thing is that you have to deploy enough infrastructure to enable what’s coming in the future.

At Intel, we work with our co-travelers to see what the compute needs are on the cloud side and what they are on the Edge. How much workload would it take to run AI? To run 5G? To run the different types of emerging use cases that are coming along?

And how do you set that up in a way that allows companies to have flexibility at any time that they would like to insert additional compute, additional hardware? Offering a framework that could sustain those changes, and always with one thing in mind—making sure the technology can be easily adjusted and is very adaptable to changes.

Kenton Williston: Andrew, what are your customers doing to create workplaces that are more flexible, comfortable, safe—and inviting?

Andrew Gross: People are certainly no longer novices in the world of Teams and Zoom. And so if your room system does not support that type of meeting technology, then you’re not giving your workers a purpose to return, right?

It’s about driving that purpose: Why am I coming back to the office?

And if I come back to the office, the technology had better be there to support the efficiency that I had at home. So equipping the conference rooms with the right technology for the collaboration platforms that they’re familiar with and have enabled is key.

The other one is certainly about the sense of security, a sense of health and safety. A really great example of what a lot of big technology companies—or really any enterprise—are doing with Crestron and Intel technology is leveraging digital signage on the outside of conference rooms.

Before, digital signage was used for advertising or company updates. But now digital signage is becoming ubiquitous across every single meeting room, and it’s displaying a lot more than just a room calendar. Meeting-room calendars are now actually showcasing cleaning schedules; they’re showcasing room-capacity limits.

These are things that we never thought of before, but that are easy to do on Crestron panels. And all of the intelligence from those smart Intel chips built into the Crestron technology inside the room is communicating back to those panels on the outside.

Taking Collaboration to The Next Level

Kenton Williston: What are the next steps enterprises should take?

Wei Oania: I think one thing we have to acknowledge is that the future workforce will be different.

We have to start embracing the flexibility of mobile work, the flexibility of different types of working environments. And ensuring not only that we are competitive as employers but also that we adjust to what this generation of workers requires.

That experience cannot be first-class/second-class. It has to be equal, has to be inclusive, has to meet the needs that all of us have regardless of where we are. Those are the things that as a technology company we’re looking at.

We can continue to learn collaboratively. Things will change, and things will always surprise us. But if we have some basic needs in mind, we can be creative when challenges come along, and quickly retrofit whatever we have to meet the requirements of the time.

Andrew Gross: You can’t wait for the return to the office to do everything we just talked about. The strategy for understanding how to enable hybrid workers, in-office workers, at-home workers, and to democratize the experience between those levels of workers—it has to be started now.

And that’s what Crestron is doing now—we’re working with our partners, like Intel, and speaking to customers as early as possible. It’s about being an advisor in the Future of Work, and advising customers as early as possible; developing a strategy around those platforms, and ensuring that it’s defined now, and deployed and installed and ready for that massive return back to the office.

Built-in Security Unlocks 5G

Radio access networks (RANs) are the nervous system of today’s wireless communications and have been since the inception of cellular service. Throughout their history, a select few network equipment providers (NEPs) like Ericsson, Nokia, and Cisco have built the entire solution stack for these network onramps, from routing equipment to security appliances.

Security technologies in particular have been tightly integrated with the RAN architecture, consisting of solutions like proprietary countermeasures built into networking equipment, security gateways, and firewalls. This security has proven more or less sufficient in RAN deployments to date, but as networks demand more performance and flexibility through multi-vendor ecosystems like OpenRAN, these practices will have to change.

From a pure business perspective, a primary driver of this change is the proprietary nature of RAN equipment. Unfortunately, the limited vendor ecosystem has resulted in higher costs than a more open marketplace would allow, and little interoperability among RAN vendors, whose single-source solutions keep margins high.

5G and OpenRAN

One of the goals of future RAN development is to tackle these challenges by maximizing the use of common, off-the-shelf hardware. As edge networks transition to the higher throughput and lower latency of 5G, initiatives like OpenRAN offer a new, software-driven approach to RAN deployment that can improve network flexibility and interoperability while lowering cost.

There are multiple versions of OpenRAN—including one specifically for the 5G New Radio (5G NR) project—but the one most relevant to this discussion is a broad initiative managed by the Telecom Infra Project (Figure 1). This flavor of OpenRAN seeks to “define and build 2G, 3G, 4G, and 5G RAN solutions based on general-purpose vendor-neutral hardware, open interfaces and software,” which applies to all sorts of networking equipment.

OpenRAN network diagram and scenarios
Figure 1. The OpenRAN initiative distributes open, general-purpose networking hardware across radio access networks. (Source: Telecom Infra Project)

It would appear that OpenRAN is poised to deliver for the edge what SDN and NFV did for the cloud/data center, so long as the networks based on it can be sufficiently secured across solutions from multiple vendors.

Initiatives like #OpenRAN offer a new, software-driven approach to RAN deployment that can improve network flexibility, interoperability, and lower costs. @axiomtek via @insightdottech

Open, Yet Secure

Disaggregating network infrastructure and opening the traditional RAN model does raise concerns about how these deployments will be protected. Compared to the past, security on OpenRAN networks will have to consist of a chain of trust that extends across equipment supplied by multiple vendors. In many cases, this includes hardware from different suppliers at the same cell site or base station.

Platforms like the 3rd generation Intel® Xeon® Scalable processors (previously code-named Ice Lake SP) integrate a battery of features that protect open networks from uncertainty. These include the ability to identify whether other network entities can be trusted, to control where data and workloads can be safely deployed on RAN infrastructure, and to guard against advanced malware.

The processor achieves this through a multilayered security stack that reaches from the silicon out to the application layer and onto the network itself:

  • Intel® Total Memory Encryption (Intel® TME) protects the physical memory of the device—including any data stored in the memory, such as platform firmware and software-provisioned security keys.
  • Intel® Platform Firmware Resilience (Intel® PFR) leverages the integrated Intel® MAX 10 FPGA technology to monitor system buses for malicious traffic and verify the integrity of firmware before execution.
  • Intel® Software Guard Extensions (Intel® SGX) use hardware-assisted confidentiality and integrity mechanisms to partition application code and data into secure memory enclaves of up to 1 TB. Once there, even higher-privilege processes or an untrusted OS cannot access or modify it.
  • Novel techniques like Multi-Buffer and Function Stitching combine with other hardware and software innovations on 3rd gen Intel Xeon Scalable processors to improve cryptographic algorithm execution performance by as much as 8X over the previous-generation microarchitecture.
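To make the chain-of-trust idea above concrete, here is a deliberately simplified, conceptual sketch of verifying a firmware image against a reference hash before allowing it to run. Real platforms implement this in hardware and firmware (for example, Intel PFR), not in application-level Python; the file name and trusted hash below are placeholders.

  # Conceptual firmware integrity check: hash the image and compare it to a trusted value.
  # Illustration only; file name and trusted hash are placeholders.
  import hashlib
  import hmac
  import os

  def firmware_is_trusted(image_path: str, expected_sha256_hex: str) -> bool:
      """Hash the firmware image and compare it to a value from a trusted manifest."""
      sha256 = hashlib.sha256()
      with open(image_path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):
              sha256.update(chunk)
      # Constant-time comparison avoids leaking information through timing.
      return hmac.compare_digest(sha256.hexdigest(), expected_sha256_hex)

  TRUSTED_HASH = "0" * 64  # placeholder; in practice this comes from a signed manifest
  if os.path.exists("ran_firmware.bin") and firmware_is_trusted("ran_firmware.bin", TRUSTED_HASH):
      print("Firmware verified; proceed with boot.")
  else:
      print("Verification failed or image missing; hold the platform in recovery.")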

Revolutionizing RANs with a Secure Foundation

All of these measures are available in the NA870 Rackmount Network Appliance Platform (Figure 2) from Axiomtek, an IPC and embedded systems design and manufacturing company. The appliance is based on dual 3rd gen Xeon Scalable processors with up to 40 CPU cores that incorporate all of the security mechanisms mentioned above. The system also includes a Trusted Platform Module (TPM) 2.0 security chip to further extend the integrity provided by the Intel security technologies.

Axiomtek NA870 2U rackmount network appliance
Figure 2. The Axiomtek NA870 2U rackmount network appliance. (Source: Axiomtek)

The OpenRAN-ready NA870 integrates up to 66 LAN ports via 8 LAN expansion modules, including 100 GbE networking cards, as well as two PCIe Gen 4.0 x16 expansion slots, each offering a raw aggregate transfer rate of roughly 256 GT/s (16 GT/s across 16 lanes).
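For context, here is a quick back-of-the-envelope conversion from that raw transfer rate to usable bandwidth, assuming standard PCIe 4.0 signaling and 128b/130b encoding (not Axiomtek-specific figures):

  # Rough PCIe Gen 4 x16 bandwidth estimate (per direction), assuming standard signaling.
  GT_PER_S_PER_LANE = 16            # PCIe 4.0 raw transfer rate per lane
  LANES = 16
  ENCODING_EFFICIENCY = 128 / 130   # 128b/130b line encoding

  raw_gt_s = GT_PER_S_PER_LANE * LANES                                # ~256 GT/s, as quoted
  usable_gb_s = GT_PER_S_PER_LANE * ENCODING_EFFICIENCY / 8 * LANES   # gigabytes per second
  print(f"Raw: {raw_gt_s} GT/s, usable: ~{usable_gb_s:.1f} GB/s per direction")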

These interface options, combined with the performance and virtualization capabilities of the two onboard Xeon processors, allow the NA870 to accommodate many different traffic types flowing across edge RANs simultaneously and securely.

And it accomplishes this without using any proprietary or single-source technology.

Opening New Ecosystems Through Security

As we move away from monolithic networks and toward the kind of market competition that has previously been missing in this space, 3rd gen Xeon processors offer the flexibility to meet the requirements of a new ecosystem of OpenRAN vendors. And they come with built-in integrity and trust technologies that tie the ecosystem together.

Vendors like Axiomtek are now capitalizing on this foundation to support a new generation of edge access networks that are more affordable, more efficient, and more secure.

Unified Retail Device Management Increases Uptime

As digital transformation sweeps the industry, every retailer is on a quest to gain operational efficiency and lower costs. But that’s easier said than done as they grapple with a multitude of device types, from point of sale to digital displays, and beyond. And then consider the immense scale on which a large retailer operates, with hundreds of devices in each store across hundreds or even thousands of locations.

So, what if store operations could run as smoothly as a car? Today’s automobiles alert us when tire pressure is low, when an oil change is needed, when it’s time for new brake pads. Under the hood, sensors and computers are hard at work—monitoring, managing, and fine-tuning—all while presenting information to avoid a breakdown when the driver needs it most.

“We were inspired by this analogy,” says James Patterson, Director of Marketing and Strategic Accounts at Box Technologies, a retail customer engagement solutions provider and division of Flytech. “Everyone is looking at how they can save money on the bottom line and increase device uptime. And ultimately that’s the reason why we decided to go on the journey with Inefi Spotlight.”

#Retailers using Inefi Spotlight report they can reduce #engineering calls to retail stores by up to 35% annually. @BoxTechnologies via @insightdottech

Unified Retail Device Management

The Box solution eases retail operational challenges with a device-agnostic platform. Its remote monitoring and maintenance functionality provides system-wide visibility through one unified management system. This allows retailers to replace their traditional service model with one that is centralized, automated, and predictive—as shown in Video 1.

Video 1. A unified system for predictive retail device monitoring and maintenance. (Source: Box Technologies)

For remote monitoring, the Inefi agent acts as the go-between for mission-critical devices, their associated peripherals, and a cloud-based intelligent management system. This allows for endpoint problem-solving—even before an issue occurs.

“With our solution, sales associates whose job is serving customers don’t end up troubleshooting problems,” Patterson explains. “And IT staff don’t have to rely on someone stumbling upon an issue and reporting it after the fact.”

Once a problem is identified, it can often be resolved with an automated fix. Retailers using Inefi Spotlight report they can reduce engineering calls to retail stores by up to 35% annually—mitigating costs that can arise from service calls and downtime.
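Conceptually, an agent like the Inefi agent described above boils down to a small telemetry loop: collect health readings from each endpoint, compare them against thresholds, and raise an alert or trigger an automated fix before a failure surfaces. The sketch below is a generic illustration; the device names, metrics, and thresholds are hypothetical, not Inefi Spotlight’s actual schema.

  # Generic endpoint-monitoring sketch: flag devices that need attention before anyone
  # in the store notices a problem. All names and thresholds are hypothetical.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class DeviceHealth:
      device_id: str
      kind: str                                # "pos", "printer", "display", ...
      temperature_c: float
      error_count_24h: int
      paper_level_pct: Optional[float] = None  # printers only

  THRESHOLDS = {"temperature_c": 70.0, "error_count_24h": 5, "paper_level_pct": 10.0}

  def issues_for(device: DeviceHealth) -> list:
      issues = []
      if device.temperature_c > THRESHOLDS["temperature_c"]:
          issues.append("running hot")
      if device.error_count_24h > THRESHOLDS["error_count_24h"]:
          issues.append("repeated errors")
      if device.paper_level_pct is not None and device.paper_level_pct < THRESHOLDS["paper_level_pct"]:
          issues.append("paper low")
      return issues

  fleet = [
      DeviceHealth("pos-017", "pos", temperature_c=72.5, error_count_24h=1),
      DeviceHealth("prn-004", "printer", temperature_c=40.0, error_count_24h=9, paper_level_pct=6.0),
  ]

  for device in fleet:
      for issue in issues_for(device):
          print(f"[ALERT] {device.device_id}: {issue} -> open a ticket or run an automated fix")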

Data-Led Insights

The cost savings can be impressive. As one example, Box analyzed the data for one of the largest retailers in the U.K., with more than 20,000 locations and hundreds of thousands of devices. Over the course of one year, 20% of its field service calls came down to one specific type of printer.

Rather than triaging the issue, the service provider would send an engineer to swap out the printer, fix it, and return it to equipment inventory—resulting in excessive capital outlay and wasted staff time. “Our customer could never get to the root cause of the problem,” says Patterson. “So there was never any data-led insight to what was causing the problem.”

The Inefi solution was able to collect functional data from the printer at a granular level, identify the root cause of the problem, and remotely implement a remedy. Eliminating this one printer issue would bring the retailer’s field service demands down to a far more manageable level—into single-digit percentages.

BI Maximizes Efficiency

Business intelligence can save immense amounts of capital by improving the ability to manage how devices are put to use across an entire store network. In one common situation, a retailer might find that three out of six PoS terminals are used most frequently throughout the day, while the others stand idle.

Because the Inefi platform centrally monitors and analyzes device usage, store managers can spread the load across all terminals, evening out equipment wear and tear.
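As a hedged illustration of that kind of insight, the sketch below tallies transactions per terminal and suggests routing more traffic to the least-used lanes. The counts and the 50% threshold are invented for illustration.

  # Sketch: spot underused PoS terminals so wear can be balanced across the estate.
  # Transaction counts and the threshold are invented.
  daily_transactions = {"pos-1": 812, "pos-2": 774, "pos-3": 790, "pos-4": 102, "pos-5": 95, "pos-6": 88}

  mean_load = sum(daily_transactions.values()) / len(daily_transactions)
  underused = sorted(t for t, n in daily_transactions.items() if n < 0.5 * mean_load)

  print(f"Average load: {mean_load:.0f} transactions per terminal")
  print("Route more traffic to:", ", ".join(underused))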

The platform uses Intel® Active Management Technology (Intel® AMT) and Intel vPro® technology for its remote management capabilities—from troubleshooting to updating device software. It can even improve energy efficiency by remotely managing endpoint power states.

And with out-of-band support, even if the OS stops working, remote monitoring and troubleshooting continue without interruption.

Running Retail Ops on All Cylinders

The benefits of predictive and automated device management will continue to expand as edge-to-cloud AI technologies are more pervasively deployed. Data trends over time could identify, for example, that panel PCs are likely to fail once they reach a given number of transactions.

With the ability to plan equipment maintenance cycles, retailers are better able to achieve smooth-running operations—just as we can with our automobiles. Patterson concludes, “If we can deliver what we experience when we step into our cars each day, through connected, predictive platforms, then that’s our mission and ultimately that is the future.”

Visibility Pilots Supply Chain Automation

If you want to understand the importance of an efficient supply chain, two words sum it up: toilet paper. At the beginning of the pandemic, panicked consumers stockpiled the bathroom staple, creating shortages. With limited visibility, retailers couldn’t predict when inventory would be replenished, and “where to buy toilet paper” became one of the top Google searches of 2020.

COVID-19 intensified inefficiencies in an industry that has been slow to digitize operations. While consumer demand and expectations created added pressure, the biggest problem was that the supply chain—typically considered a back-office operation—was running on siloed platforms.

“The traditional supply chain has multiple global parties and processes that rely on manual documentation and communication,” says Chris Cutshaw, director of commercial and product strategy for C.H. Robinson, a global logistics solutions provider. “Each party has its own system, and they’re not talking to each other very well. This results in companies reacting to problems when they happen, instead of anticipating and avoiding them.”

AI Transforms the Process

Fortunately, the siloes are coming down as organizations accelerate end-to-end supply chain automation. The combination of AI, IoT sensors, and cloud technology takes visibility to the next level, collecting and analyzing “in transport” data that can forecast and automate responses in real time.

For example, Navisphere Vision combines real-time orders and shipments with updates on transportation and external factors like weather that can disrupt the supply chain. Delivered as software as a service (SaaS), the solution uses the Intel® Connected Logistics Platform—with sensors and gateways placed in containers—to integrate data points and glean insights.

AI-driven software collects and analyzes information such as GPS location, shock and vibration, temperature, and humidity at set intervals, and transmits the data through the integrated Microsoft Azure IoT Central cloud platform. As items go from point A to point B, machine learning algorithms compare this data to the shipper’s original plan, predicting disruption based on specified criteria.
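Conceptually, that comparison against the shipper’s plan can be sketched as a stream of sensor readings checked against per-shipment limits, with an exception raised when a reading or a predicted ETA drifts out of bounds. The field names and thresholds below are hypothetical, not the Navisphere Vision schema.

  # Sketch: flag in-transit exceptions by comparing telemetry to the shipment plan.
  # Field names and thresholds are hypothetical.
  from datetime import datetime, timedelta

  plan = {
      "shipment_id": "XB-2041",
      "max_temp_c": 35.0,
      "max_shock_g": 4.0,
      "promised_eta": datetime(2021, 11, 1, 8, 0),
  }

  reading = {
      "temp_c": 31.2,
      "shock_g": 5.6,                          # a pallet was jolted or shifted
      "predicted_eta": datetime(2021, 11, 2, 14, 0),
  }

  exceptions = []
  if reading["temp_c"] > plan["max_temp_c"]:
      exceptions.append("temperature excursion")
  if reading["shock_g"] > plan["max_shock_g"]:
      exceptions.append("shock/impact event")
  if reading["predicted_eta"] - plan["promised_eta"] > timedelta(hours=12):
      exceptions.append("late-arrival risk")

  for e in exceptions:
      print(f"[{plan['shipment_id']}] exception: {e} -> notify stakeholders and replan")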

Customers can track shipments in real time via the Navisphere Vision platform. Before deploying the solution, a C.H. Robinson team performs a gap analysis with customers to assess the company’s current operations. “We offer configurable solutions, whether it be our technology or just overall process change that can help them get to best-in-class supply chain management,” says Cutshaw.

The future is a self-healing, self-realizing supply chain that can make decisions based on algorithms and AI models to achieve an outcome that a human doesn’t have to drive. @CHRobinson

Visibility for Launch Logistics

Launching new products demands precise orchestration of inventory as it travels from the factory to the customer warehouse and everywhere in between. Microsoft uses Navisphere Vision to manage distribution of its devices, including Xbox game consoles. Predictable deliveries were essential during the much-anticipated Xbox Series X and Xbox Series S launches, which took place in the middle of the pandemic. After building anticipation for a new product, it’s costly when demand can’t be met as planned.

“Microsoft needs to know with certainty where their containers are at any given time, anywhere in the world,” says Cutshaw. “They also need to know exactly when each container is going to arrive at its destination, because that’s inventory they can plan their demand against.”

With an accurate ETA, a company can constrain the amount of inventory in its inbound channel, reducing costs, improving margin, and meeting customer demand.

While deadlines are key, damage in the supply chain can also impact a launch. Navisphere Vision monitors Xbox and Surface device shipments for shock and abrasion, such as a tilt or shift in a pallet that may damage the product. Packages are also tracked to prevent theft. If a container is opened before reaching its destination, the solution records and identifies where it happened and alerts the proper stakeholders so they can take immediate action.

Predictive analytics can also solve problems as they’re happening. With congestion due to a flood of products from Asian factories, COVID-19-related worker shortages, and a lack of warehouse space, ports around the world are facing a logjam of container ships.

A company with a time-specific shipment may need to enact a different plan. Based on preset thresholds, the system could automatically create an air shipment, process, pick, pack, and ship without customers ever realizing there could have been a problem.

“Right now, someone would need to submit an order, contact the manufacturer, plan an air shipment, contact a carrier, schedule an appointment, and schedule a delivery,” says Cutshaw. “Eventually, all of that is going to happen based on the information that’s coming from IoT sensors and other sources of information into a central system that can plan and optimize.”
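A hedged sketch of how that kind of escalation rule might eventually be encoded, once all of this information flows into one system; the threshold logic and modes are invented for illustration.

  # Sketch: auto-escalate a delayed, time-critical ocean shipment to air freight.
  # Threshold logic and modes are illustrative only.
  def choose_mode(predicted_days_late: float, days_of_inventory_left: float) -> str:
      """Switch to air when the predicted delay would outrun the remaining stock."""
      if predicted_days_late > days_of_inventory_left:
          return "air"      # create the air shipment, then pick, pack, and ship automatically
      return "ocean"        # stay on the original plan

  print(choose_mode(predicted_days_late=9, days_of_inventory_left=4))    # -> air
  print(choose_mode(predicted_days_late=2, days_of_inventory_left=10))   # -> ocean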

The Future of Supply Chain Management

While the supply chain could never have planned for COVID, it’s served as a catalyst for accelerated digital transformation of the industry.

Companies need to be ready for the next disruption. Investing in complete supply chain orchestration is a multi-year journey. The industry is in the very early stages right now, and the end goal is to automate action based on supply and demand and disruptions as they occur.

“The future is a self-healing, self-realizing supply chain that can make decisions based on algorithms and AI models to achieve an outcome that a human doesn’t have to drive,” says Cutshaw. “It’s very visionary work, and we are looking forward to being part of that journey for companies.”

Free-to-Install EV Charging Stations Power Up Retail Sales

Retailers will do just about anything to engage customers, but sometimes sticking a giant inflatable gorilla atop their building just doesn’t cut it. That’s when they look for technological solutions—and electric vehicle (EV) charging stations are a great way to pull customers in. After all, drivers might as well do a bit of shopping while they wait for their batteries to fill.

But there’s a catch: EV charging stations can be expensive to install. That’s why start-up EOS Linx decided to take a new approach by combining an EV charging station with a customizable digital display. The result? A free-to-install charger that funds itself through advertising.

A New Citizen in the Smart City

The business model of EOS Linx’s free-to-install digital out-of-home (DOOH) advertising and EV charging station appeals to a wide variety of public and private entities, including municipalities, retailers, hotels, and utilities.

Recently, the Atlanta Retailers Association—a group of nearly 1,000 independent stores—entered into an agreement to deploy these new-generation EV charging stations throughout the Atlanta area. Also in place is an affiliation with the EV Make-Ready Program, providing communities with incentives for development of EV infrastructure and equipment in conjunction with local utilities.

“Aligning with overarching strategies accommodates balance in the power grid,” notes Jeff Hutchins, Chief Information Officer at EOS Linx. “By working with utilities and associations, the company is ensuring that this platform is part of those predetermined and validated integrated solutions.”

Flexible Platform Architecture

The EOS Linx EV Charger is an excellent example of smart-city technology at its finest. It begins with a platform offering flexible infrastructure that allows operators to layer additional services and features in a modular way, one of the major tenets of smart-city design.

The underlying digital-signage platform for the EOS Linx EV Charging Station is LG-MRI BoldVu® Smart Point. This “data center on the sidewalk” allows retailers to optimize their investment with advertising messages.

Normally it’s challenging to collect metrics in #DOOH #advertising, but the EOS Linx #EV charging station and display enables brands to see who’s looking at the screen and for how long via @insightdottech

The solution also relies on Intel® technology, such as the Intel® OpenVINO Toolkit, which contributes to easy deployment in the market. “We’re able to use Intel’s reference architectures rather than reinventing the wheel,” Hutchins says.

Stores can use the free EV charger to help drive foot traffic, with the option to add applications over time. For example, they can choose to add advanced AI-powered security features that vary according to property owners’ needs (e.g., the ability to deter loiterers) and are designed to comply with regional guidelines.

“Having a converged infrastructure platform allows for multiple starting points to catalyze deployment, and then the municipality or customer is able to build on that baseline platform architecture,” explains Hutchins.

Plus, the platform offers connectivity through 5G mesh technology, which EOS Linx expects to be an important telecom service in the near future.

“The EOS Linx EV Charging Station offers a practical approach for retailers because they receive the benefit of a host of attractive technologies in one package that provide necessary and useful services to their clientele,” says Hutchins.

A Turnkey Solution

When a retailer requests an EV charging station, the EOS Linx recon team will conduct a site visit to ensure there’s ample sunshine and good visibility. Then the team will work with the city for approval before beginning the installation process.

By optimizing the power of edge computing and converged network connectivity, EOS Linx is working across the DOOH and smart-city IoT ecosystem. “In this way, EOS Linx’s EV Charging Station provides different revenue streams, and we also see that aligning government and private-sector-funded associations and grants on the same endpoint creates incredible momentum,” says Hutchins.

While EOS Linx does not charge for installation or maintenance, it requests a commitment to a partnership term of approximately six years. And while retailers might normally be wary of evolving technology rendering their investment obsolete, the EOS Linx system is designed to be upgraded as new technologies emerge.

“Location partners have asked what happens if EV charger technology advances and faster chargers are available,” says Hutchins. “We ease their concerns by confirming we can swap it out when needed. We anticipate these advancements happening and are effectively positioned to evolve with them and upgrade as needed.”

It’s also advantageous for the advertisers, who not only receive prime real estate to display their message but can also collect important information through a camera that tracks customer demographics. “Normally it’s challenging to collect metrics in DOOH advertising, but the EOS Linx EV charging station and display enables brands to see who’s looking at the screen and for how long,” says Hutchins.
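A rough, hedged sketch of the underlying measurement idea: count frames in which at least one face is oriented toward the screen and convert that to dwell time. It uses OpenCV’s bundled Haar cascade as a stand-in for EOS Linx’s actual analytics; the camera index and frame rate are assumptions.

  # Sketch: anonymous audience measurement for a DOOH screen.
  # Counts frames containing at least one detected face over a 30-second window.
  # Camera index and analysis frame rate are assumptions.
  import cv2

  FPS = 10
  cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

  cap = cv2.VideoCapture(0)
  frames_with_viewers = 0
  for _ in range(FPS * 30):
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
      if len(faces) > 0:
          frames_with_viewers += 1
  cap.release()

  print(f"Approximate viewer dwell time: {frames_with_viewers / FPS:.1f} s in the last 30 s")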

“The bottom line is that everything is plug and play,” says Hutchins. “We position ourselves as a platform and as a service that is poised to meet our partners’ needs as they evolve.”

The next level of EV charging? No-cost deployment, new marketing opportunities, and renewable energy. An unbeatable combination.

Better Outcomes With AI-Based Medical Imaging

The vast majority of today’s healthcare data comes from medical scans, and doctors have become stressed and overburdened as they struggle to interpret the images while managing patient care. By using AI and deep-learning technology to analyze patient scans, doctors can obtain results much faster while also improving diagnostic accuracy.

Scans are not as easy to decipher as they may appear. Many contain dozens of images that doctors must pore over to arrive at a diagnosis. Pinpointing the exact location and dimensions of fractures, nodules, and other lesions is often difficult.

When AI algorithms analyze scans, they can quickly point doctors to the location of a fracture or nodule, saving time (Figure 1). “We call it AI-assisted diagnosis,” says Xiangfei Chai, founder and CEO of HY Medical, a Beijing-based company that develops AI-based imaging solutions. “Doctors still make the decision, but they can do it two to three times faster than they can with traditional scanning.”

Xray image of hand that shows bone fracture location

Figure 1. AI algorithms accurately identify the location of a bone fracture. (Source: HY Medical)

AI-based scanning also analyzes patient image data to uncover characteristics that can’t be seen by the human eye. Together with faster localization, this helps doctors improve the accuracy of their diagnoses. “Our AI-Assisted Diagnosis of Medical Imaging Solution can increase accuracy up to 15%,” Chai says.

“Our #AI-Assisted Diagnosis of #Medical Imaging Solution can increase diagnosis accuracy up to 15%.” —Xiangfei Chai, CEO of HY Medical via @insightdottech

AI Medical Imaging for Surgical Decisions

The HY solution has been applied to more than 10 conditions so far, including fractures, aortic dissection, abdominal aortic aneurysm, and some cancers. The experience of one of Beijing’s largest hospitals shows how its precise calculations can make a difference.

To treat aortic dissection—a tear in the inner layer of the large blood vessel coming from the heart—doctors often use a stent. Stents come in several sizes, and the blood vessel’s “U” shape makes measurement of the tear’s dimensions difficult.

At the Beijing hospital, an HY Medical study found that almost half of stent surgeries failed the first time, and nearly 20% of the time, surgeons used a wrong-size stent. After the hospital adopted HY’s AI-based solution, stent selection accuracy increased by 50%. The solution also provided results within 10 minutes, decreasing waiting time for doctors and patients.

AI-Enabled Medical Imaging as a Disease Management Tool

AI can also help doctors segregate patients according to disease severity. That capability was helpful when the COVID-19 pandemic broke out in China.

Initially, there was a shortage of tests for the virus. In addition, early tests generated many false positives—at a time when providers were overwhelmed. As a result, some doctors switched to doing CT scans instead. Machines enhanced with the HY solution’s AI capabilities not only accurately diagnosed the disease, but revealed how sick people were, allowing hospitals to effectively triage the deluge of patients.

Later, as lab tests became more accessible and reliable, doctors used AI scanning to track the progress of patients’ infections.

“AI automatically calculates the size of a lesion and how fast it is shrinking or growing,” Chai says. “It also predicts the course of development based on the lesion’s rate of change, allowing doctors to provide personalized treatment.”
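The rate-of-change calculation Chai describes can be illustrated with a deliberately simplified sketch: measure lesion volume across successive scans, compute a growth rate, and extrapolate forward. The numbers are invented, and this is not HY Medical’s algorithm.

  # Sketch: estimate how fast a lesion is shrinking or growing across scans,
  # then extrapolate a few days ahead. Measurements are invented for illustration.
  measurements = [   # (days since first scan, lesion volume in mm^3)
      (0, 1420.0),
      (3, 1210.0),
      (6, 1015.0),
  ]

  (d0, v0), (dn, vn) = measurements[0], measurements[-1]
  rate_per_day = (vn - v0) / (dn - d0)            # negative means the lesion is shrinking
  projected_day_10 = vn + rate_per_day * (10 - dn)

  print(f"Change rate: {rate_per_day:+.1f} mm^3/day")
  print(f"Projected volume at day 10: {max(projected_day_10, 0):.0f} mm^3")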

When the pandemic spread beyond China, it hit the U.K. particularly hard. HY worked closely with scientists at Coventry University to quickly develop an AI-enabled scanning solution tailored to the specific needs of U.K. doctors. They put the solution to work in the university’s affiliated hospital, where it was downloaded within 24 hours, giving physicians the ability to tend to the most severe cases right away.

“There are so many uncertainties during the treatment of a COVID-19 patient, such as what treatment should be applied or when the patient should be sent to the ICU,” Chai says. “With AI technology, better decisions can be made about which patients really need beds and which can safely go home.”

Expanding to Treatment

As information about AI-based imaging’s efficiency and accuracy spreads, more providers are incorporating the technology.

To give clinicians flexibility, HY provides three ways to deploy its solution: Doctors can upload scanned images to a cloud-based app, large hospitals can connect the solution to their internal imaging networks, and smaller hospitals with a single X-ray machine or other scanning device can load the HY appliance directly onto the machine. All calculations are done by Intel® Xeon® Scalable processor-based systems.

As more hospitals incorporate AI-enabled imaging, it is expanding to new use cases in surgery, radiation oncology, and chemotherapy.

To help providers build these applications, HY’s solution uses the Intel® OpenVINO Toolkit, which allows developers to easily port code from one application to another and tweak it to accommodate particular needs. “You just do a little bit of setup—you don’t need to change the algorithm much to adapt,” Chai says.
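A minimal sketch of what that setup can look like with the OpenVINO Python runtime (2022-era API). The model file, input shape, and device are assumptions, and the preprocessing is application specific.

  # Sketch: load an OpenVINO IR model and run one inference on CPU.
  # Model path and input shape are hypothetical; preprocessing is application specific.
  import numpy as np
  from openvino.runtime import Core

  core = Core()
  model = core.read_model("lesion_detector.xml")            # hypothetical IR file
  compiled = core.compile_model(model, device_name="CPU")
  output_layer = compiled.output(0)

  scan = np.random.rand(1, 1, 512, 512).astype(np.float32)  # stand-in for a preprocessed CT slice
  result = compiled([scan])[output_layer]
  print("Raw model output shape:", result.shape)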

The more hospitals use AI algorithms, the more data the models have to learn from, which improves accuracy. And as AI evolves, it will increasingly be used to predict treatment outcomes.

For example, HY is working on an imaging solution that will judge whether individual breast cancer patients are likely to respond well or poorly to various forms of chemotherapy. It is also developing predictive algorithms for cancer and artery disease.

“Worldwide, there are many groups working on predictive solutions and other new applications,” Chai says. “In the future, AI will have a much bigger influence on medicine.”

The Digitalization of QA in Manufacturing

Sponsored by Relimetrics

[player]

A majority of manufacturers still rely on manual processes for sorting, tracking, and inspecting items. But these efforts are prone to error, resulting in high rework, downtime, and unsatisfied customers.

In addition, recent supply chain disruptions such as those caused by COVID-19 have companies looking for ways to automate QA. In this podcast featuring Kemal Levi, Founder and CEO of Relimetrics, we discuss the changing demand for QA automation and the use of AI to keep up with those demands. Relimetrics helps companies cut costs and improve productivity by automating inspections and using digitized quality data.

Join us as we discuss:

  • The challenges facing QA manufacturing teams today
  • How AI is being implemented to address the changing landscape
  • How the data captured from QA automation can be used to improve efficiency
  • What the future of QA automation looks like and how to prepare
Apple Podcasts  Spotify  Google Podcasts  

Transcript

Kemal Levi: The overall goal in QA automation is the full digitalization of QA. The issue at the moment is that existing quality automation systems are insufficient.

Kenton Williston: That was Kemal Levi, the founder and CEO of Relimetrics. And I’m Kenton Williston, the editor-in-chief of insight dot tech. Every episode on the IoT Chat, I talk to industry experts about the issues that matter most to consultants, systems integrators, and end users. Today I’m talking to Kemal about ways AI is transforming quality control, and how companies can move to fully-automated quality assurance. So, Kemal, I’d like to welcome you to the show.

Kemal Levi: Thank you.

Kenton Williston: Can you tell me about Relimetrics and your role there?

Kemal Levi: Absolutely. My name is Kemal Levi. I’m the Founder and CEO of Relimetrics. At Relimetrics we are enabling the fourth industrial revolution by making quality inspections easier, smarter, and more insightful. We enable our customers to digitize visual inspections using AI-based machine vision without writing a single line of code, and to use this data together with machine and process data to close the loop in their manufacturing and improve their overall process efficiency.

Kenton Williston: Excellent. On that point, what do you see as being the key challenges? And I guess really maybe the hidden question here is not just what do you see as challenges to quality assurance, but what prompted you to start this company, and what are the problems you’re trying to address?

Kemal Levi: Well, we started our journey as a systems integrator, and we did a lot of shop floor work with our customers, helping them implement smart camera technology on their shop floors. We had the chance to see recurring pain points on the shop floor, particularly related to camera-detection accuracy. Today the detection accuracy attained with smart cameras is not sufficient for our customers to fully digitize their lines. They have a lot of concerns about digitizing their quality assurance, so they need quality operators on these lines to weed out false detections. Just last week, a quality manager at an automotive OEM joked that it used to be just humans; now it’s humans plus smart cameras. Into that ecosystem we brought Relimetrics technology, which paves the way for full digitization of quality assurance on the production floor.

Kenton Williston: So that’s really interesting. And I’m wondering, based on what I’m hearing, it sounds like there’s some long-standing challenges that you’re looking to address. And I’m wondering if there’s anything new in this arena, and ways that today’s challenges differ from the past?

Kemal Levi: With the increased customization manufacturers are experiencing, what we see is that neither existing QA staff nor QA solutions are able to keep up with high production variability on the shop floor. As I just mentioned, the detection of defects is not sufficiently accurate, and process control is not optimized. This leads to high rework and scrap costs, downtime, and customer complaints. Furthermore, manual intervention is required to weed out false detections. As a result, manufacturers today are still relying on human inspections, mainly due to the poor performance of existing quality automation systems.

With quality inspections becoming increasingly complex, there is a real push among manufacturers to find solutions that help them digitize their quality assurance processes. Today, a large percentage of manufacturers in the USA are still relying on manual inspections. We carried out a study with ABI Research back in November, and we found that only 7% of manufacturers have a fully automated quality assurance process.

Kenton Williston: Got it. So it sounds like there’s a combination of factors here. So, one is the overall effectiveness and accuracy of QA systems—both from the perspective of the existing digital system not performing up to requirements that the manufacturers have, and then of course human beings are obviously fallible; it’s easy for us to miss things. I would say it sounds like there is a cost element, because of course all this labor is pretty costly, and could be better deployed doing things that humans are really good at. And then the third is the flexibility with the custom manufacturing. And presumably the continuing push towards evermore efficient, just-in-time models is also creating a real bottleneck there in the QA process.

Kemal Levi: Exactly. Completely agree with you. This is well stated, well summarized.

Kenton Williston: Excellent. And one thing I’d love to talk about is, of course, the last year has really changed everything about manufacturing. For example, there have just been tremendous disruptions to supply chains, and a great deal of difficulty from the manufacturer perspective of predicting what markets are going to need. Like, I just recently bought a house, so I’ve been very keenly interested in the stories about the prices of lumber. And it’s just gone crazy, right? Everybody hoarded it for a while, and the price has shot up; and now it’s shooting down and people are losing tons of money. So it’s become very difficult in this strange year we’ve gone through for manufacturers to know what to do. And so how have you seen this change the demand for QA automation?

Kemal Levi: Over the past few years, software to automate industrial processes and make them smarter has been in high demand. Today, with COVID challenges, this demand has surged, as companies are looking for ways to protect their employees and bring as much automation and efficiency to their manufacturing processes as possible to avoid disruptions. In today’s climate things are improving, but manufacturing has still been hit quite hard. There is definitely recovery, but I think we can confidently say that supply chains are still disrupted due to the pandemic. So companies are looking for automation solutions that can be deployed quickly.

Now, inspections by human operators are still the standard in most manufacturing environments, so companies are looking at ways to automate those, or to have human inspectors operate remotely. Looking at the past year, there has really been a surge in demand for quality automation solutions. We have seen a spike in our sales, and we have also seen a growing awareness among manufacturers of the need to automate their quality assurance processes. Today, when we go to a manufacturer and explain why they should buy the Relimetrics solution, they are keen to make a purchase—whether it’s Relimetrics or some other quality automation provider—because they are certain this is necessary for them to stay competitive in this post-pandemic era.

Kenton Williston: That all makes sense. And one thing I hadn’t thought about before that you mentioned which is really interesting is the idea of doing some of this work with human assistance, but remote human assistance—and then of course things that are fully automated. I would think, in both of these cases, AI would be pretty important—either to do the job just by itself, or to be able to provide remote operators with a more streamlined, summarized view of what’s happening, rather than having to replicate what they would be doing in person. So, how do you see AI factoring into all of these changes?

Kemal Levi: Traditional computer vision has been implemented on shop floors, but it has failed to adapt to high-variability production use cases. As I stated earlier, this presents a big challenge as the push for customization in manufacturing grows. The advantage of boosting computer vision with AI is that algorithms can quickly adapt to new parts and configurations, so any reconfiguring is done much faster. That presents real opportunities to reduce the downtime associated with reconfiguring a quality automation solution. Overall, an AI-based solution is far more adaptable than a traditional computer vision–based solution.

Kenton Williston: Right. And that’s presumably because the traditional solutions would be based around complex algorithms that would be custom tailored to whatever it was you were trying to inspect. So you would have to do very specific thinking about, say, for example, if you were looking for pits, or scars, or whatever in the product—if you built a system to do that, it would only be really good at doing that one task. And if you even changed, say, the size and shape of the objects you were looking at, that algorithm might break. Whereas AI, you can more rapidly update. Is that a good summary?

Kemal Levi: Plus, I think an important thing to note is that with rule-based algorithms, you have to show the system every possible configuration. And that is impossible with the sheer amount of customization experienced today in manufacturing. AI has an ability to adapt faster to different configurations and different products, because you don’t need to have the system learn every configuration. For example, let’s take the work that we are doing at HP today, where we inspect different HP server families. Every product that we inspect is different. That by itself presents a big challenge. But there is an even bigger challenge, because there are also variations in the components that go into each server, including vendor-to-vendor variations. With AI, you have an algorithm that is able to better adapt to these differences in configurations, whereas in the case of traditional computer vision you need to teach the system every possible configuration.

Kenton Williston: Got it. That makes sense. I am curious, though, of course—everyone’s talking about AI these days. You can hardly go anywhere in the tech sector without hearing about it. So, what is Relimetrics doing that is different when it comes to quality assurance, and how does your approach to AI differ from whatever else is out there?

Kemal Levi: In summary, we are replacing the paradigm of inflexible, hard-to-reconfigure quality automation systems with smart, connected, and autonomous QA. We are truly enabling zero-defect manufacturing. Our product is an industrial-grade framework that lets anyone without coding or deep-learning expertise perform AI-based machine vision and quickly deploy trained models inline for real-time inspection on the shop floor. They can also scale this across different inspection sites. So we enable manufacturers to perform AI-based machine vision without constantly worrying about retraining their models offline each time they have new scenarios or configurations in their production. We also differ from our competitors with guaranteed detection accuracy and overall faster time-to-value on the shop floor.

Kenton Williston: Interesting. It sounds like one of the big challenges that companies are facing in deploying advanced AI-based QA automation is just the difficulty of deploying it. So can you tell me a little bit more about what the challenges are there, and how are you overcoming those challenges?

Kemal Levi: Together with ABI Research, we developed a maturity model that articulates the different stages of quality automation maturity as a company moves from manual to fully automated quality inspections. This maturity model helps companies assess where they are today, and what they need to do to move up the automation-maturity curve. Based on this maturity model and market assessment, there is still a very large proportion of manufacturers at the very beginning stage of automation; around one-sixth still have quality inspection processes that are conducted mostly manually. That said, adoption of QA automation is expected to accelerate rapidly. Most of our customers are highlighting QA automation and robotics-integrated quality automation as investment priorities over the next two years. From my perspective, ease of use and ease of integration are critical for QA-automation adoption. This is a recurring issue with quality automation systems: they have required complex integration services over the past decade. For adoption to accelerate, there is a need for systems that are easy to use and easy to integrate.

Kenton Williston: Got it. I want to come back to those points about the ease of use, the ease of integration. But before I do that, I do want to come back to another point you made, which is about the level of performance you’re offering with a guaranteed rate of accuracy. How are you able to offer that when other folks are not?

Kemal Levi: There are a number of things that we do differently from competitors. First of all, one key advantage of Relimetrics’ solution is its ability to provide customers with inline retraining. Using Relimetrics’ technology, customers can easily monitor what is happening in their production line. If a detection is disputed by a quality operator, or if there are any false detections, there is an easy-to-use interface, managed centrally by admins, who have the authority to verify whether the dispute is correct. If it is, they can revise the training and retrain the algorithms. Using this inline retraining technology, we are able to assure our customers of over 99.9% detection accuracy across all the components we inspect.
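Conceptually, the dispute workflow Levi describes resembles a human-in-the-loop correction queue: operators dispute a detection, an admin verifies the dispute, and verified corrections become samples for the next inline retraining pass. The sketch below is a generic illustration of that loop, not Relimetrics’ product, and all field names are hypothetical.

  # Generic human-in-the-loop correction queue for inline retraining.
  # Field names are hypothetical; illustration only.
  disputes = [
      {"image": "board_0113.png", "model_label": "missing_screw", "operator_label": "ok", "admin_verified": True},
      {"image": "board_0114.png", "model_label": "ok", "operator_label": "bent_pin", "admin_verified": False},
  ]

  retraining_queue = [
      {"image": d["image"], "label": d["operator_label"]}
      for d in disputes
      if d["admin_verified"] and d["operator_label"] != d["model_label"]
  ]

  print(f"{len(retraining_queue)} verified correction(s) queued for the next inline retraining pass")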

Kenton Williston: Wow, that’s pretty cool. And it sounds like a key part of how you’re able to do that is actually related to the ease of deployment and ease of use of your tool. That’s actually, it turns out, critical to the accuracy as well. So I want to talk a little bit more about that. I would like to hear more about what it is you’re doing exactly that makes your tool easier to use. And, also, the point you mentioned about integration—can you tell me a little bit also about how you integrate with other third-party systems, and why this is important?

Kemal Levi: Well, I think one advantage of working with Relimetrics, from a manufacturer’s point of view, is that we really understand their pain points, because we started our journey as a systems integrator. We are not the typical software provider that doesn’t really understand what happens on the shop floor. We are not just looking at things from a software perspective; we also understand the hardware requirements on the shop floor.

Technology-wise, Relimetrics is use case agnostic—our technology is not limited to manufacturing use cases. But when it comes to manufacturing and assembly, we truly understand the pain points of our customers. We know that what really matters to them is whether they are going to be able to integrate our solution with different hardware, and with their shop floor. This is an inline solution. It is not a standalone solution, and it is not an R&D solution. At the end of the day, it has to be fully integrated.

We provide the customer with a full, end-to-end solution that integrates with their lines. It is a fact that, for maximum savings, quality automation solutions have to be integrated with the other software applications used by the manufacturer—such as manufacturing execution systems and enterprise resource planning systems. This is the only way to give staff a real-time understanding of their production lines.

How do we integrate with these third-party systems? We do it with fully standardized interfaces. We utilize our own software product, called Node Editor, which eases the pain of systems integration on the shop floor and enables anyone to quickly integrate our solution with other shop floor software without writing a single line of code. This really eases the pain of integration for our customers.

We also provide the customer with different hardware integration options. These range all the way from an industrial-grade camera to a gantry robot or a robot-integrated solution, and we have all the required interfaces for robotic communication as well. One software package can be used across different use cases for both training and quality monitoring purposes; it can also do data analytics. It also gives the customer the ability to control different hardware and to manage a data-acquisition system.

Kenton Williston: So one thing that stands out to me there in all of what you just said is that there’s going to be a lot of necessity for real-time data. So, obviously, if you’re doing something like controlling a robot, for example, that’s working on an assembly line, or something like that, you need to have the entire control loop happening in real time. Presumably there are also other areas where it’s important to do all those communications in real time. So I’d love to hear more about what those use cases are, and really, even more broadly beyond that, what you see as the best ways to use data generally that’s coming out of these QA automation systems to improve the overall performance and efficiency of a manufacturing line.

Kemal Levi: When using machine learning to manage a production process like quality assurance, a key challenge to be solved is the amount of data produced by the inspection cameras. For example, let’s take our implementation with HPE and Foxconn. We have a total of seven industrial-grade cameras capturing extremely detailed images of every server, and these cameras generate a very large volume of image data per second. It would be impractical to transfer that data over internal or external networks to be processed on remote servers, because the latency would be too high and the networks would be overloaded with these data volumes. Therefore we deploy our software on Edge systems, such as HPE Converged Edge Systems. These are rugged, compact systems delivering enterprise-grade IT capabilities at the Edge, in close proximity to the data source.
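To see why that data stays at the edge, a quick back-of-the-envelope estimate helps; the camera resolution, bit depth, and frame rate below are assumptions for illustration, not the actual HPE/Foxconn figures.

  # Rough estimate of raw image bandwidth from a seven-camera inspection station.
  # Resolution, bit depth, and frame rate are assumed values.
  cameras = 7
  width, height = 4096, 3000       # assumed ~12 MP industrial cameras
  bytes_per_pixel = 1              # 8-bit monochrome
  frames_per_second = 2            # assumed capture rate per camera

  bytes_per_sec = cameras * width * height * bytes_per_pixel * frames_per_second
  print(f"~{bytes_per_sec / 1e6:.0f} MB/s of raw image data")   # far too much to stream off-site continuously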

Kenton Williston: So I think what I’m hearing is that there’s a couple different elements of data utilization. So, one is within your system itself—being able to process the video feeds at the Edge, make decisions, and then provide information about what that QA system is observing in real time. Then the other piece of that is being able to take the output from your QA system and use that to improve the overall efficiency and performance of what’s happening on the shop floor. So I’d like to hear a little bit more about that second part—of how that data can be used to inform the bigger picture of operations.

Kemal Levi: Being able to monitor the data is really critical to the success of our customers. Now, overall, what we offer customers is a holistic approach to data, where machine and process data are collected and correlated against digitized quality data. The approach Relimetrics offers combines automated defect and geometry analysis with data-correlation techniques to prevent quality drifts that happen during production. And, by automating inspections and using the digitized quality data, we help companies cut costs and improve productivity. So far, with our existing customers, we have shown overall productivity improvements ranging between 50% and 80%.
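As a minimal sketch of that drift-prevention idea, the snippet below compares the defect rate over a recent window of inspections against a long-run baseline and raises a flag when it spikes. The window size and alert threshold are illustrative assumptions, not the Relimetrics correlation engine.

```python
from collections import deque

# Minimal sketch of drift detection on digitized quality data: compare the
# defect rate in a recent window of inspections against a long-run baseline.
WINDOW = 200          # most recent inspections considered "recent" (assumption)
ALERT_FACTOR = 2.0    # alert if the recent rate is 2x the baseline (assumption)

class DefectRateMonitor:
    def __init__(self, baseline_rate: float):
        self.baseline = baseline_rate
        self.recent = deque(maxlen=WINDOW)

    def record(self, defective: bool) -> bool:
        """Record one inspection result; return True if drift is suspected."""
        self.recent.append(1 if defective else 0)
        if len(self.recent) < WINDOW:
            return False
        recent_rate = sum(self.recent) / len(self.recent)
        return recent_rate > ALERT_FACTOR * self.baseline

monitor = DefectRateMonitor(baseline_rate=0.01)   # 1% historical defect rate
for result in [False] * 190 + [True] * 10:        # simulated inspection stream
    if monitor.record(result):
        print("Quality drift suspected - check upstream process data")
        break
```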

Kenton Williston: Well, that’s a pretty impressive jump. What I’d love to hear, if we can go a little further there, is some specific use cases you’ve been involved in across different industries, and what exactly you were able to do to help your customers succeed.

Kemal Levi: Well, we address many use cases with the Relimetrics software—the same software product. So what really excites me is the applicability of the same software package across different use cases. In a way, Relimetrics is an industrial-grade Adobe Photoshop that can be used by anyone to train AI-based machine vision algorithms without writing a single line of code. There has been quite a bit of coverage in the media of the work we are doing with HPE. We are HPE’s global video-analytics vendor, helping them digitize QA of their servers across their contract-manufacturer ecosystem. So we have been working with HPE for quite some time now.

So the very first implementation we took on was at a Foxconn factory in Europe, where we helped Foxconn with full digitalization of QA across different HPE server families. This is very exciting work, which is scaling globally at the moment to other contract manufacturers of HPE. We are very active in electronics assembly right now. In addition to the work we do in electronics assembly, we are also working with Lockheed in the aerospace industry, helping them with very exciting advanced manufacturing use cases where we are actually enabling AI-driven manufacturing. That is a very exciting use case, because there we are really going beyond quality automation: we are building a digital twin that is used to guide autonomous manufacturing activities, whether that’s, let’s say, drilling or riveting. And we are quite active in the energy industry as well. Today we are working with Siemens Gamesa, helping them with digitization of phased-array ultrasound data using the Relimetrics software. So this is an exciting use case for us as well, where we showcase the applicability of Relimetrics software to a different image modality.

We are also working with customers where Relimetrics software is being adapted to detect anomalies in X-ray images. This is really exciting because many of these customers already have existing X-ray systems, and they either have quality operators reading anomalies in the images or an existing, traditional computer vision–based automated optical inspection system assessing the quality of the images. But these provide poor detection accuracy, so they are bringing in Relimetrics technology to boost the detection accuracy and also enable them to really think about full digitalization of QA.

Kenton Williston: Well, that’s really cool. And, I have to say, it’s very interesting to get this picture of everything that’s happening, and how different what you’re doing is from what people have been doing in the past. Even with everything that traditional computer vision systems can do, this seems like a huge step change, and that makes me think about what might be coming in the future. So what do you see coming in, let’s say, the next 10 years in QA automation, and how are you preparing for that?

Kemal Levi: The overall goal in QA automation is the full digitalization of QA. The issue at the moment is that existing quality automation systems are insufficient for thinking about full digitalization of QA. So I think in 10 years QA automation will be fully digitized, thanks to AI. There is also another angle to the story: in the case of assembly, for example, it is the automation of the assembly itself using robots that also incorporate AI in a very intelligent way. They are able to actually perform the assembly and also run the inspection on the parts they assembled. So I think the next 10 years in QA automation are going to be drastically different from where we are today. Definitely, the push right now is for AI-driven manufacturing, and Relimetrics-like technologies will be enablers for this type of robotic system to succeed.

Kenton Williston: Got it. There’s one key thing we really haven’t touched on that I think is important to surface here. You talked a little bit about how your systems run at the Edge, often on HPE servers. Presumably, underneath the hood there you’ve got some Intel technology, and I’m curious how your partnership with Intel has added value to your system. And, also, looking forward, how you see that relationship with Intel evolving and helping you get to the future you described.

Kemal Levi: Well, I think overall our success is made possible by using advanced Intel Edge computing technologies. Specifically, today we use servers based on powerful Intel Xeon processors, and Intel provides the software and architecture that enable Relimetrics to make the best use of the CPUs. This technology is critical to our quality assurance solution because it enables us to run computationally intensive calculations at high speed.

Kenton Williston: And, presumably, in terms of looking forward, you’ve got a lot staked on Intel’s roadmap. And they just released this year, for example, the Ice Lake platform, which has a lot of AI accelerators. And I would imagine you’re excited about using those.

Kemal Levi: I am very excited about what Intel is currently working on. Another benefit of being an Intel-certified solution is the potential business opportunities that Intel brings to us. On the marketing side, it is definitely a very powerful way to showcase that the Relimetrics solution is a certified solution. So that’s one advantage. But another advantage is that the Intel teams have been very open in helping us out and bringing us in front of their customers—big accounts of Intel. So I’m really excited about the work they are currently doing together with us, and I think there are really great opportunities for us to collaborate on.

Kenton Williston: Great. Well, I can’t wait to see where things go next with that. As we’re getting close to the end of our time together, are there any key takeaways you’d like to leave with our audience?

Kemal Levi: It’s really important for our audience to be aware that AI is coming on very strong. I think the pace of change in the world is much more accelerated than it used to be, and there are those who still think that, for many use cases, AI-based machine vision is really not going to be successful. And we have experienced this many times; there are questions posed by academics, and there are many doubters. But we are proving them wrong. AI-based machine vision really operates at a stellar level of performance—way better than traditional computer vision today. These are the facts on the ground, and I think overall where we are heading is the full digitalization of QA. So companies should really act fast in making sure that they operate in an efficient way and think about going to an advanced level in their quality automation journey.

Kenton Williston: Excellent. Well, I can’t wait to see all of this transpire, and, until then, I just want to say, thanks again, Kemal, for joining us.

Kemal Levi: Thank you very much for your time.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from Relimetrics, follow them on Twitter @relimetrics and on LinkedIn at linkedin.com/company/relimetrics

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app.

This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

AI and CV Boost Digital Signage Results

Businesses have used digital signs to display messages for years, but have those messages been effective? Until recently, no one knew.

Now, Computer Vision (CV) and AI technology can help answer that question—and do much more. Picture digital signage that not only shows whether customers are paying attention to a display but whether they are engaged with or indifferent to its content. Or a camera that analyzes body language in real time to warn store security of suspicious behavior. Or even one that warns “Stop!” if someone lights up a cigarette while pumping gas.

These are just a few of the applications that have emerged as digital signage systems get smarter. As adoption of the technology grows, it is creating a wealth of new opportunities for retail businesses and systems integrators alike.

Personalizing Digital Signage Content

It’s no wonder that CV- and AI-enabled displays are a boon for retail, where customers increasingly expect a personalized experience. An Accenture study found that 91% of consumers are more likely to shop with brands that provide relevant offers and recommendations.

Early on, algorithms simply counted viewers, then—becoming more sophisticated—could distinguish gender and relative age, for example. “Now displays can see if someone watches an ad to the end or if they walk away. They can recognize a brand logo—such as Puma—on someone’s shirt, and then, for example, offer them a discount code on the new Puma brand shoes,” says Raffi Vartian, vice president of business development and strategic partnerships at AI technology company meldCX.

CV-enabled signs process incoming viewer metadata at the edge in real time, automatically changing content to suit viewers’ preferences and creating anonymized viewer data for additional analysis.

As an example, for a young couple, a bank might display mortgage information, while customers in their 20s might see images of adventure travel. An older person carrying an expensive handbag might be shown information about retirement savings, tax savings, or wealth management.
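A minimal sketch of that kind of rule-based content selection might look like the following, where the only inputs are anonymized attributes inferred at the edge. The attribute names and rules are assumptions for illustration, not the meldCX or Viana API.

```python
# Minimal sketch of rule-based content selection from anonymized viewer
# attributes inferred at the edge. Attribute names and rules are illustrative.
def select_content(viewers: list) -> str:
    if not viewers:
        return "default_brand_loop"
    ages = [v["age_bracket"] for v in viewers]          # e.g. "18-29", "60+"
    if len(viewers) >= 2 and all(a in ("18-29", "30-39") for a in ages):
        return "mortgage_offers"                        # young couple
    if all(a == "18-29" for a in ages):
        return "adventure_travel"                       # customers in their 20s
    if any(a == "60+" for a in ages):
        return "retirement_and_wealth_management"
    return "default_brand_loop"

# Example: metadata for a young couple detected in front of the display
print(select_content([{"age_bracket": "18-29"}, {"age_bracket": "30-39"}]))
# -> mortgage_offers
```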

And with meldCX Viana Vision Analytics, data is first inferenced at the edge, then securely transmitted to the cloud, where it is further analyzed to show banks, grocery stores, and other businesses specific information that reveals their content’s effectiveness (Video 1).

Video 1. The Viana Vision Analytics platform allows retailers to gauge the effectiveness of digital-sign content. (Source: meldCX)

To maintain privacy and GDPR compliance, the platform stores no personal information at the edge, and only metadata—not video feeds—is transferred to the cloud. “The only information that goes to the cloud is ones and zeros,” Vartian says.


Diverse Use Cases for Digital Displays

Showing the right customers the right content at the right time helps businesses get more bang for the buck from their digital signs. One of Australia’s largest banks has invested heavily in digitizing its in-branch experience by providing meaningful and relevant content on its digital signage network. But the bank struggled to demonstrate value and ROI because it is difficult to measure content engagement on digital signage.

Viana generated monthly playback reports containing insights such as top personas, busiest time of day, content rankings, and content effectiveness ratios. As a result, the bank has been able to create more strategic and effective campaigns, resulting in an 87% increase in customer engagement over a three-month period.
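As a hedged sketch of how a content-effectiveness ratio could be computed from playback and attention logs, consider the example below. The metric definition (engaged views divided by plays) and the sample numbers are assumptions for illustration; Viana’s reports may define these insights differently.

```python
# Sketch: ranking content by a simple effectiveness ratio from monthly logs.
# Asset names and counts are illustrative assumptions.
plays = {                       # times each asset was played in the month
    "home_loan_promo": 1200,
    "travel_card_promo": 1200,
}
engaged_views = {               # plays where a viewer watched to the end
    "home_loan_promo": 420,
    "travel_card_promo": 96,
}

ranking = sorted(plays, key=lambda a: engaged_views[a] / plays[a], reverse=True)
for asset in ranking:
    ratio = engaged_views[asset] / plays[asset]
    print(f"{asset}: effectiveness {ratio:.0%}")
```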

Expanding Opportunities for SIs

As the use of computer vision grows, it is providing a flood of new opportunities for SIs, who can install Viana on customers’ existing security cameras. “We can deploy our technology into about 90% of cameras on the market,” Vartian says.

Specialized knowledge isn’t necessary. “We’ve tried to make it as simple as possible, for both retailers and their SIs, with one-click-install software, built-in dashboards, and real-time APIs for the more advanced use cases,” he adds.

Today, computer vision systems are being developed for dozens of uses outside the retail environment, including self-service. For example, customers at Australia Post no longer have to wait in line and talk to a representative to mail packages.

Instead, they drop parcels on a scale, where a meldCX solution measures their size and weight and verifies the sender’s identity, the recipient’s address, and the shipping cost. “The computers even recognize sloppy handwriting in terrible lighting conditions. We used 1.7 billion data sets to show them how to do it,” says Vartian.

The technology can also be used to improve workplace operations. For a warehouse, meldCX created a system that alerts packers to missing items and gathers information about productivity. “Are there specific colors of products that may trick the eye? Do people in a warmer part of the warehouse tire out faster than others or make more errors? Small changes can make a big impact,” Vartian says.

To train algorithms for specialized needs, meldCX uses the Intel® OpenVINO™ toolkit, which allows developers to easily adapt their code to new models. “Intel® has gone to the mountain and brought out the ore, so we don’t have to spend our time prospecting and mining; we spend our time refining the platform and identifying use cases that require customized training,” says Vartian. “It saves us an enormous amount of time.”
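For readers curious what running an already-converted model with OpenVINO looks like in practice, here is a minimal sketch using the OpenVINO Runtime Python API. The model file name and input shape are assumptions for illustration, not meldCX’s deployment.

```python
import numpy as np
from openvino.runtime import Core   # OpenVINO Runtime Python API

# Minimal sketch: load an IR model and run one inference on the CPU.
core = Core()
model = core.read_model("defect_classifier.xml")         # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")  # target an Intel CPU

# Dummy frame standing in for a camera image, shaped to the assumed model input.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]            # run inference
print("Predicted class:", int(np.argmax(result)))
```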

As people start to see AI-based cameras in more places, demand for customized solutions will grow, Vartian predicts: “The cameras are already there—you just have to make them smart. The applications are virtually unlimited.”