QSRs—Voice AI Will Now Take Your Order: With Sodaclick

Join us for our very first episode of “insight.tech Talk” where we discuss how voice AI transforms the QSR experience—boosting efficiency and creating a smoother experience both for customers and employees.

Just as our new name reflects the ever-changing tech landscape, this episode explores how voice assistants enable QSRs to take orders faster and more accurately, reducing staff workload and handling complex requests. The result: shorter lines, happier customers, and more consistent service.

Listen in as we explore benefits, address potential challenges, and peek into how voice AI impacts other areas of the industry.


Our Guests: Sodaclick and Intel

Our guests this episode are Salwa Al-Tahan, Research and Marketing Executive for Sodaclick, a digital content and AI experience provider; and Stevan Dragas, EMEA Digital Signage Segment Manager for Intel. At Sodaclick, Salwa focuses on raising awareness of the benefits of conversational AI across all industries. Stevan has been with Intel for more than 24 years, where he works to drive development of EMEA digital signage and to showcase the benefits of Intel tools and technologies.

Podcast Topics

Salwa and Stevan answer our questions about:

  • 6:02 – How voice AI enhances QSR experiences
  • 12:53 – Voice AI infrastructure and investments
  • 15:08 – Technological advancements making voice AI possible
  • 20:15 – Real-world examples of voice AI in QSRs
  • 24:57 – Voice AI opportunities beyond QSRs

Related Content

To learn more about conversational voice AI, read Talking Up Self-Serve Patient Check-In Kiosks in Healthcare. For the latest innovations from Sodaclick, follow them on Twitter at @sodaclick and on LinkedIn. For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the “insight.tech Talk.” I’m your host, Christina Cardoza, Editorial Director of insight.tech. And some of our long-term listeners probably have already picked up that we’ve updated our name from the IoT Chat to “insight.tech Talk,” and that’s because, as you know, this technology space is moving incredibly fast, and we wanted to reflect the conversations that we will be having beyond IoT. But don’t worry, you’ll still be getting the same high-quality conversations around IoT technology, trends, and latest innovations. This just allows us to keep up with the pace of the industry.

So, without further ado, I want to get into today’s conversation, in which we’re going to be talking about voice AI in quick-service restaurants with Sodaclick and Intel. So, as always, before we jump into the conversation, let’s get to know our guests. Salwa from Sodaclick, I’ll start with you. What can you tell us about yourself and Sodaclick?

Salwa Al-Tahan: Hi, Christina. Thank you. So I’m Salwa Al-Tahan, Head of Marketing and Research at Sodaclick. Thank you for inviting me to join this podcast. So, Sodaclick is a London-based AI company. We actually started, for those that don’t know, in 2017 as a digital-content platform. But AI was always part of the founders’ vision. In 2019 they opened the AI division, which primarily focused on voice AI, although that early work was quite linear, quite command-driven. And they always knew that it needed to be more natural, more human-like, more conversational.

So, the co-founders are really hot on being at the forefront of technology, always innovating, always looking to improve. And, with the advent of generative AI, they started fine-tuning their LLM, and that’s where we are now. Now we’re a London-based company with a global presence.

Christina Cardoza: Great! Looking forward to digging into some of that. Especially making voice AI more natural, because I’m sure a lot of people have had the displeasure of those customer service voice AI chatbots that you’re always screaming at on the phone, trying to get it to understand you, or trying to get where you need to go and trying to talk to a human. So, looking forward to how that’s not only being brought into the restaurant space, but I know Sodaclick does things in many other industries. So we’ll dig into that a little bit in our conversation.

But before we get there, Stevan, welcome to the show. What can you tell us about yourself?

Stevan Dragas: That’s interesting. So, Stevan Dragas. Why it’s interesting is because over the last 24 years at Intel I’ve done so many exciting roles and positions. And on a recent visit, where I had the pleasure of taking Sodaclick to join Intel Vision in the U.S., Ibrahim, one of the founders of Sodaclick, actually reminded me that back in 2019, when they moved into voice, that was the first time he met me. I had, unfortunately, forgotten that. But he reminded me that we met for the first time then, and I gave them some hints and advice on what would work and what would not. And I have to say, I’m glad that they listened to me at that time.

Because with what Sodaclick is doing at the moment, we cover everything from the edge to the cloud, ultimately driving new usage models, driving user experience, driving benefits, starting from the end user to the retailer to the operator of the QSR. But ultimately we are driving new experiences and usage models and changing industries.

Now, my role is basically to promote and support Intel platform products, technologies, and software across multiple vertical industries, of which QSR is just one. So I go horizontal, and I have a number of companies that are just as exciting as Sodaclick, but they’re one of my, let’s say, crown jewels. I am pleased and happy that over the last couple of years we have really accelerated, and we will continue.

Specifically because we are now looking into adding some of the new products that Intel has brought to the market. Not just the new products for the cloud, but also, for the first time in the computing industry, products that no longer have just a CPU and GPU but also an NPU. And in May Sodaclick will be demonstrating and using their product on this new platform. It used to be called Meteor Lake, but it’s actually the Core Ultra platform.

And it’s really exciting to work across all of these industries, specifically with Sodaclick. They have been so good, and I’m happy to say that we are looking at a lot more than just QSR-type restaurants, because solutions in many vertical industries would benefit from some kind of conversational exchange, from asking an open question, rather than a pre-scripted, menu-driven type of conversation with the machine.

Salwa Al-Tahan: Yeah, command-driven is so linear and boring. And, like you say, frustrating to customers as well. These natural interactions with conversational voice AI are definitely the future and the way it is being deployed at the moment.

Christina Cardoza: Yeah, absolutely. So, let’s get into that a little bit. Specifically, looking at quick-service restaurants: how is it being deployed and used in those areas? Salwa, can you give us a little bit more about what Sodaclick is doing in this space? And how are you seeing voice AI improve or enhance the QSR experience?

Salwa Al-Tahan: There are two aspects of the QSR industry that are benefiting from the integration of voice AI. One is in-store. We’re seeing a handful, I would say, of QSR brands actually integrating voice AI into their in-store kiosks to make them truly omnichannel. The other aspect is at the drive-through. There, it becomes the first interaction for a customer as they drive up to the order-taking point: you’ve got your conversational voice AI assistant there. These are the two main focuses at the moment.

And, to be honest, each one comes with its own benefits, both to the business and to the customer. At the in-store kiosk it’s faster. Even if you know exactly what you want going up to a kiosk, you have to scroll through the UI for all the little things, adding extra lettuce, removing cheese, no ice, and that takes time. It’s faster just to say it. And that faster interaction means faster throughput as well. You can serve more customers and reduce wait times.

Also, in-store kiosks become more inclusive. Having voice AI as an option means that any customers with visual impairments, physical impairments, or sensory processing disorders, and even the elderly who struggle with accurately touching those touch points to place an order, can use their voice for that interaction. So these are some key benefits, obviously, as well as upselling opportunities.

At the drive-through it’s a completely different interaction. It’s again polite, it’s friendly, and it allows businesses to unify their brand image with excellent customer service. It’s improving order accuracy. I know from the annual QSR drive-through report that order accuracy improved by 1% in 2023: it was 85% in 2022 and moved up to 86% in 2023. With voice AI we’re actually able to bring that up to 96%-plus.

And that is because at the order point it’s quite a repetitive task for members of staff. They’re just constantly doing the same thing. That means that sometimes, unfortunately, you’re not getting the friendly customer service, you’re not getting that bubbly person at the end of their shift. Humans are humans, though. They might be having a bad day. They might not have all the product information that you’re after.

Whereas with the conversational voice AI model, we’re able to consistently give polite, friendly customer service—a warm, human-like interaction. We’re actually able to bring in neural voices, which are so human-like that most people wouldn’t even know they’re talking to an AI. We’re able to offer it in 96 languages and variants, which means that you are able to serve a wider community within the area as well, without any order inaccuracies from mishearing something, or asking customers to repeat themselves. Language is another really big factor, both in-store and at the drive-through.

Stevan Dragas: Salwa, if I may add, it increases—

Salwa Al-Tahan: Of course, please do.

Stevan Dragas: From working very closely with Sodaclick, it also greatly increases accuracy. It removes the need for the operator to listen closely and try to understand, and at the same time it reduces time to delivery: the moment you already have three or four items listed on the screen, the operator can start making the order, working on the product, rather than listening for the complete order to be finished. The technology is now stepping in between, helping both sides.
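
That incremental flow, items appearing on screen while the customer is still talking, can be sketched in a few lines. The toy below is purely illustrative, assuming keyword matching against a small invented menu; a production system would use streaming speech recognition and a language model rather than substring matching.

```python
# Toy sketch: surface menu items from streaming transcript fragments so
# staff can start preparing before the customer finishes speaking.
# The menu and matching logic are invented for illustration.
MENU = {"cheeseburger", "fries", "cola", "milkshake"}

def stream_order(fragments):
    seen = []
    for fragment in fragments:           # partial ASR results arrive over time
        for item in MENU:
            if item in fragment.lower() and item not in seen:
                seen.append(item)
                print(f"On screen so far: {seen}")
    return seen

stream_order(["I'll take a cheeseburger", "with fries and", "a cola, no ice"])
```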

Salwa Al-Tahan: Absolutely, it’s streamlining operations, both for the business and for the customer. So you’re absolutely right, Stevan, that it’s a benefit to both. And it’s also alleviating pressure on members of staff. Like you say, inputting all of that information can be stressful, and although it is repetitive, it can be stressful, especially if you’ve got a large queue and people honking their horns. They just want their food fast, and that’s what it is all about in the QSR industry: getting your food fast.

So by improving order accuracy, it has a knock-on effect on the other benefits: streamlining things for both the business and the customer, but also increasing speed of service and quality of service. And as for the member of staff taken away from that position at the order point, we’re not actually removing them; we’re just repurposing them into the kitchen so that they can focus, exactly like you say, on preparing the orders and on other pressing tasks that might be needed in-store. And that also improves the quality of customer service.

Christina Cardoza: Yeah. As a customer myself—I guess in preparation for this conversation—I went through a quick-service restaurant drive-through last night. I used the app before I left the house and ordered my food, and then went through the drive-through to pick it up. But I wanted a sandwich with pickles on it and, like you said, I didn’t want to go through the app and figure out how to add pickles to it. But I also didn’t want to drive through and talk to an employee, because then—just my own thing—I feel embarrassed, or that I’m being a difficult customer, asking for these modifications and customizations. So if it was an AI I was talking to, I would’ve been much more comfortable ordering the sandwich that I wanted.

And, to your point, it’s that customer experience. But I’m curious—you talked a little bit about the business level, the benefits businesses get and how they can redistribute their employees elsewhere. How can they actually implement this voice AI? What is Sodaclick doing to add this on to the technology that’s already in place? Or are there investments that have to be made in the infrastructure to start bringing voice AI to the business and to the customers?

Salwa Al-Tahan: So, actually, if a QSR doesn’t already have this technology, we can work with them and integrate into their existing systems.

Stevan Dragas: So, if I may add to that point: at the existing customer-interaction points, where customers either interact or make purchase orders in existing stores, or even at the drive-through, what Sodaclick brings on the technology side is the microphone, a cone microphone that focuses on the person even in very noisy environments. And it does that with new algorithms developed by Sodaclick, driving a very high percentage of accuracy. And not only accuracy on the person, but also recognition of accents and different words. In the same environment, there could be multiple languages.

On the technology side, they also integrate with APIs, with the stock of products, integrating directly with the products available. And not only availability: they integrate analysis of the existing products. For instance, are they protein-rich? Are they rich in some other minerals? Again, speaking specifically about QSRs. And on the technical side, they also look at what the existing infrastructure is. Maybe the existing infrastructure is enough. Or maybe they need, so to say, a little more horsepower, in which case just the computing part needs to be up-leveled to process all the information and drive this near real-time conversational usage model.
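
As a rough illustration of that product-data integration, the hypothetical catalog below shows how an assistant could answer both availability and nutrition questions from one integrated source. All names and fields are invented; Sodaclick’s actual API integrations are not public.

```python
# Hypothetical product catalog an assistant could query for stock and
# nutrition questions ("What's available that's protein-rich?").
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    in_stock: bool
    protein_g: float  # protein per serving, in grams

CATALOG = [
    Product("grilled chicken wrap", True, 32.0),
    Product("halloumi salad", False, 18.0),
    Product("bean burrito", True, 21.0),
]

def protein_rich_and_available(min_protein: float = 20.0) -> list[str]:
    return [p.name for p in CATALOG if p.in_stock and p.protein_g >= min_protein]

print(protein_rich_and_available())  # items the assistant can recommend
```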

Christina Cardoza: Great. And, Stevan, you mentioned some of the Intel technologies coming out to help do this. Because, like you mentioned, there’s a microphone, there are cameras, and there are algorithms all running on the backend to make sure the software can accurately understand what the customer is saying, capture it all, and get the order right. So, what are some of the technological advancements that make this fast, accurate, real time, and natural? How is Intel technology making this all possible and helping companies like Sodaclick bring it to market?

Stevan Dragas: So, there are a couple of things that play directly on the technology side. One is effectively physics. In order to drive a real-time, near-natural conversational experience for users and customers, decision-making needs to be done at the edge. Processing and running those LLM models needs to be done at the source—at the source, which is the edge integration point, the communication point.

And Intel has recently introduced—and this is an industry first—new processors which now have three compute engines, all on the same chip: the CPU as it traditionally was, a GPU on top of it, and then an NPU. The NPU is a neural processing unit, which effectively enables AI decision-making to be done at the core, at the edge.

So, the Core Ultra platform products are something that are coming out. There are already a number of them available in the market, but they will become even more widespread in driving this AI user experience, conversational AI. On the other hand, there are a number of products for the cloud, for the edge, for the server. But ultimately, when I said physics: you literally have latency in transmitting data from the point where you make the order, where you converse, and you don’t really have time.

I don’t know if others are like me; I am not very patient. Sal, you are laughing because you know me. But ultimately, if you need to say something and then wait a couple of seconds for that message to be transferred to the data center, or to the cloud, or somewhere far away, and then for the response to come back—normally I go without lunch if there is a queue in the line. But that may just be me.

But ultimately, if you want to have conversational AI, it needs to feel real, and that means the processing has to happen at the edge. This is what Intel is bringing—not only products, but also the Intel® Tiber™ Edge Platform, and then the OpenVINO™ toolkit, which Sodaclick is using. So ultimately it’s not about doing technology for the sake of technology, but using technology to enable usage models, to enable experience, to drive the smile, to drive the repeat return to the same or similar environments, to literally break out of the box of the traditional “read the menu and repeat what is said”—or, if you don’t read, I don’t understand. This is where Sodaclick comes in with their software solution.
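
For readers who want a sense of what targeting that silicon looks like, here is a minimal sketch using OpenVINO’s Python API: detect whether an NPU is present and compile a model to it, falling back to the CPU otherwise. The model file name is a placeholder, and this is not Sodaclick’s actual code.

```python
# Minimal OpenVINO sketch: prefer the Core Ultra NPU when available.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'] on Core Ultra

device = "NPU" if "NPU" in core.available_devices else "CPU"
model = core.read_model("voice_intent.xml")   # placeholder model file
compiled = core.compile_model(model, device)  # ready for local inference
```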

Salwa Al-Tahan: Just like Stevan was mentioning, I think what a lot of brands were doing at the drive-through order point was reducing their menus. But with conversational voice AI you can actually keep the full menu, have your customers interact with it, and maybe even choose new favorites, with opportunities for upsells. And it’s a lot more intuitive as well. And, like Stevan was saying, using OpenVINO means we’re able to create the solution and then scale it across the brand.

Stevan Dragas: To add to that, since I mentioned the user experience a couple of times: imagine you are a returning customer. Maybe there is a loyalty program, maybe some special offer. And imagine you come back, and rather than having to go through your three, four, five items, the sign says, “Hey, welcome back, Christina!” Clearly because you tapped your card, so it knows who you are, and it says, “Hey Christina, shall we have the same—your favorites?” Or something like that.

So automatically, even for you—oh yeah, I don’t need to go through the pain of repeating everything. It already knows, and it suggests, and as Sal mentioned, maybe it can actually focus on the upsell. Say, “Hey, would you like to try a new product? Do you want to experiment?” Or ultimately there is even the option of detecting facial expressions, because a happy customer is a good customer, and is ultimately buying more.
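
Pulling those threads together, the snippet below sketches one way a returning-customer greeting with a favorites-based suggestion might look. Every name and structure here is hypothetical; it simply makes the flow Stevan describes concrete.

```python
# Hypothetical returning-customer flow: a tapped loyalty card maps to a
# profile; the assistant greets by name and suggests stored favorites.
from dataclasses import dataclass, field

@dataclass
class LoyaltyProfile:
    name: str
    favorites: list[str] = field(default_factory=list)

PROFILES = {"card-1234": LoyaltyProfile("Christina", ["iced latte", "falafel wrap"])}

def greet(card_id: str) -> str:
    profile = PROFILES.get(card_id)
    if profile is None:  # unknown card: fall back to a generic greeting
        return "Welcome! What can I get started for you today?"
    favorites = " and ".join(profile.favorites)
    return f"Hey, welcome back, {profile.name}! Shall we do your usual {favorites}?"

print(greet("card-1234"))
```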

Christina Cardoza: Yeah, absolutely. To your point too, if the machine can recognize who you are and what your order has been, and there was maybe a limited-time offer or a new menu item that came out that is similar to what you ordered, they can also give those personalized recommendations: “Would you like to try this?” So this all sounds really great and interesting.

Salwa, I’m curious, do you have any customer examples or use cases of this actually in action that you can share with us?

Salwa Al-Tahan: Yeah, absolutely. So, we’ve been working with Oliver’s, which is an Australian brand. They’ve got over 200 locations, both in-store and drive-through. And we’ve deployed the conversational voice AI in their in-store kiosks and also at the drive-through. It’s actually been really, really exciting working with Oliver’s, because they were on a completely new digital transformation journey. So we’ve been with them along the way, including their digital signage.

And what was really cool about Oliver’s is that, although the language is English, we’ve been able to create the persona of the AI assistant to be very Australian. He’s got his own personality: he’s called Ollie. And he understands Australian slang; he’ll greet you with “G’day, mate!” and “Cheers!”, in a way that feels very natural to the local customers. That’s been really, really cool.

The other great thing about working with Oliver’s was that their requirements, their KPIs, were quite different from those of, let’s say, KFC, who we also work with. Because Oliver’s is a healthy fast-food chain, they know their customers are interested in ingredient lists; they want to know calorie counts and, like Stevan mentioned, protein information and things like that. So we were able to integrate with Prep It, their nutritional database, to provide that information for customers in a very quick and accurate manner. And that’s something else that’s really cool.

As I mentioned, we also work with KFC. We’ve been working with KFC in the MENA region, with more locations to come. We’ve got deployments in Pakistan, Saudi Arabia, and across the UAE in the different languages. And their requirements were different: they were more focused on speed of service and improving order accuracy. And, again, with conversational voice AI at the drive-through we were able to achieve that for them. And it’s going well.

Stevan Dragas: So Sal, I don’t know if that’s public—well, technically, it’s not—but we’re also looking into where else to expand. Ultimately it’s not just the QSR industry but every place where there is a need for information, for communication, for any discussion or Q&A. For example, at the moment we are working together with one of the world’s largest football clubs, where we started with conversational AI and very quickly got a very positive reception across all the different touch points where a conversational AI or Sodaclick solution can be integrated—from entering the venue to the restaurants and museums—while trying to be very sensitive to the name of the place. There are multiple adjacent vertical-industry opportunities where conversational AI could and should be, operating at a much more natural level.

Salwa Al-Tahan: Absolutely. It’s all about engaging users and creating really positive interactions—memorable interactions actually as well. And I think we’re in an age where everyone has such high expectations. They want hyper-personalization, they want interactive experiences. And it’s almost a case of businesses trying to think, “How can I keep up? What innovation can I bring in?” And conversational voice AI is something that is not just a trend; it actually has a use and benefits as well. But it is part of the trend. It is quite hot at the moment. So, yeah.

Christina Cardoza: Yeah, absolutely. And that was going to be my next question. Because I know in the past we worked together—insight.tech and Sodaclick—and we’ve done an article about conversational AI. But it was in the healthcare space: being able to collect information and do things that maybe a receptionist would have done at the patient level, so that the doctor could get the information faster and the patient doesn’t have to wait in line, anything like that. So I was curious, from your perspective, Salwa: what other opportunities or use cases outside of QSR, or what other industries, do you see voice AI coming to?

Salwa Al-Tahan: Absolutely. So, other than healthcare, I think definitely wayfinding kiosks—airport concierge, for instance. The benefit is that you can have the conversational AI assistant on a kiosk 24 hours a day; you don’t need to have a member of staff manning it. A customer or user can come in and interact with it.

Even if you think about government buildings—anywhere there’s a check-in, just like Stevan was saying, anywhere you might need to ask a question or get information. At stadiums, there are simple things like reducing queue times by having these interactive touch points where a customer can come in, scan their ticket, and ask where they can get some food, or get directions to their seat—all of that information. In an airport, asking where the bathrooms are, or where they can get a coffee, or, if they forgot their headphones, where they can buy some in a busy airport. This is really useful.

And I think there could be even more exciting opportunities beyond these, which we haven’t explored yet. Maybe in FinTech as well. I think it’s just a case of reaching out and seeing wherever there is a need for these personalized interactions. And part of it is providing a more inclusive world. Again, I keep coming back to this, but it is partly about providing a more inclusive world by offering voice AI as an additional option to touch. So there are plenty of opportunities to integrate very seamlessly, and it all needs to be done very frictionlessly.

Stevan Dragas: So, Sal, I’m sure you will agree, because of our previous discussions. It’s interesting to see how long it takes for certain products, technologies, and experiences to actually penetrate. We have a number of examples of certain technologies taking X number of years to reach, let’s say, 10 million subscribers. But as we move more and more toward something that is, as Salwa mentioned, more inclusive and more natural, that timeline actually shortens.

And I think with conversational AI we are almost at an inflection point, where effectively we need to drive people to see and experience it. The moment you learn it, it sticks. I am still having difficulties teaching my mom how to open WhatsApp on the tablet. But at the same time, when my youngest daughter was not even a year old, she already took the phone and knew how to move and how to touch—to the level that, effectively, once we get exposed to a certain usage model, experience, or technology, it almost becomes natural.

For my daughter, the touch-screen interaction is the starting point. For my mom, it’s still like some alien technology. So the moment you experience something, you start to demand it from other usage models. Think about where else you stand in line: you stand in front of the desk at every hotel when you go to check in and wait. All of that could be done through a simple kiosk where, effectively, you say, “Hey, this is me.” Or passport check-ins at the airports: there are a lot more of those self-check-in lanes now, where you don’t need to queue; you can just go through.

So if we start from QSR, move to retail, and expand to hospitality and healthcare—ultimately it’s any vertical industry where there is a need for conversation or information-sharing. Sal, you mentioned wayfinding. Wayfinding was great as an innovative usage model. However, you suddenly need to figure out the touch interface and deal with the accuracy of the touch; you may need to stand in a queue, you need to know what you are looking for, and it takes so long to type it in. Rather than just saying, “Hey, where can I find a coffee place? Where can I find. . . .”

So suddenly we are not transforming the technology; we’re just bringing a new usage model to the existing technology. And that can actually make those products, those usage models, and those vertical industries adopt certain technologies much faster. I think we are really at a kind of crossroads with these technologies: once people get exposed to certain usage models at certain touch points, they will expect the same, similar, or even better experience across other adjacent industries. And I think it’s just the beginning of AI, and we are certainly going to see a big boom in these usage models and experiences.

Christina Cardoza: Yeah, that pain point of parents being able to use technology—that resonates deeply with me. But to your point, the touchscreen and all these devices and applications—that is something that maybe my generation grew up with, but not my parents’ generation. Conversation, voice, talking—that is something we have all been doing since we were born; it’s very natural to us. So implementing these across different industries—they’re big technological advancements and innovations, but the result is a much better user experience, and one much more accessible to people than a touchscreen or a kiosk. So I think it’s great, and I can’t wait to see what else comes out of all of this.

I know we are running a little bit out of time, so before we go I just want to throw it back to you. Any final thoughts or key takeaways you wanted to leave us with today? Salwa, I’ll start with you.

Salwa Al-Tahan: I think, actually, just picking up on what both you and Stevan were saying, we’re definitely in the golden age of AI and technology. And it’s not something we’re talking about anymore as being in the distant future; it’s here, it’s now. It’s deployable, and it’s very natural, because, like you say, we’ve all been conversing since we were babies. And with the advent of smartphones and everyone using Alexa and Siri in our homes and on our phones, it’s just the natural progression.

And because of the benefits it has across industries, not just in QSR, it’s something we will be seeing more of. And, like Stevan was saying, it’s almost a case of when one brand leads with it, the others will follow, because they will all see how much it improves their business and their customer experiences, and that it brings them a higher ROI. So it’s very much here and now. And it’s very exciting, actually, to be a part of this. There’s definitely a lot coming.

And, again, for anyone who has the misconception that voice-AI systems are going to take away jobs, I really want to reassure them that it’s not about taking away jobs, but rather augmenting and helping both businesses and customers by streamlining operations to meet those customer expectations of faster, intuitive experiences. And we can do that with conversation, just by repurposing members of staff. So it is never about taking away a person’s role, but rather giving them purpose somewhere else.

Stevan Dragas: Yeah. And to that point, what I would like people to remember is not to do technology for the sake of technology but because of what it can bring, what it can enable, what it can drive. At Intel, there is a long-standing saying: “It’s not what we make, it’s what we enable.” And this is one thing that is becoming prevalent and very important going forward. Demand more. The technology is there. Innovation is unstoppable.

And I think from where we started with conversational AI to where we are going now, it’s just the beginning, just the tip of the iceberg. There is so much more if you connect conversational AI with the basic principles of what Intel is doing, which is security on every product, connectivity, manageability. As long as all of that infrastructure and those applications are safe, manageable, and connected, then all of these connections and technology points that people integrate with, collaborate with, and talk to can be driven in a much more sustainable way across many vertical industries. And this is just the beginning for Sodaclick, in my personal view.

Salwa Al-Tahan: Absolutely. I mean, all of these core values resonate with Sodaclick’s values as well. And we can pass those benefits on to the customer as well. So, like you say, it’s just the beginning, but it’s definitely very exciting.

Christina Cardoza: Absolutely. I can’t wait to see what else Sodaclick does with Intel. So I just want to thank you both again for joining the conversation and for the insightful thoughts. And I invite our listeners to visit Sodaclick, visit Intel; see what they can do for you and how they can help you guys enhance your businesses. So, thank you guys again. Until next time, this has been the insight.tech Talk.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Healthcare AI Solutions Ease Nursing Duties

I recently visited a healthcare facility for a routine medical procedure and might have inadvertently overtasked the attending nurse: Because I was freezing cold, I ended up asking for a warm blanket three separate times. I am not the only one who finds that bedside call button all too tempting.

While nurses’ jobs are to provide quality patient care, more often than not they are the first line of defense for all patient needs—whether that’s extra blankets, a pillow, or a glass of water. Such requests are not the best use of nurses’ time. “If you use a nursing call bell, the nurse needs to walk to your bed and then deal with your request, return to their station, and then coordinate with the nutrition or maintenance department,” says Paulo Pinheiro, CEO and co-founder of HOOBOX Robotics, a developer of medical optimization solutions.

Given the high rates of burnout and staff shortages among nurses, medical facilities are doing their best to optimize nursing resources. HOOBOX Neonpass, a smartphone app-based solution, addresses these inefficiencies. Looking for ways to use AI in healthcare, HOOBOX developed Neonpass to meet patients’ needs without overburdening nurses. The application recognizes and routes requests to the right departments in the medical facility, bypassing the nursing station when necessary. The app-driven demand and delivery method has earned Neonpass the moniker “DoorDash for Hospitals.”

AI in Healthcare Optimizes Workloads

Neonpass not only enables patient communication with professional staff but also routes messages between nurses and other connected departments in a digital format. Where rigid protocols once mandated that nurses call other departments in the hospital, medical facilities can now rely on the digital platform to send messages. For example, instead of calling a diet change into nutrition—with the potential for miscommunication—nurses can input the change directly into the Neonpass solution. “With Neonpass you digitalize all the information and nutrition receives an alert; it’s much more efficient and less error-prone than a phone call,” Pinheiro says.
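
To make the routing idea concrete, here is a deliberately simple sketch that classifies a request and sends it to the right department, defaulting to nursing when unsure. The keyword rules and department names are hypothetical stand-ins; Neonpass’s actual AI classification is far more capable.

```python
# Toy request router: send each patient request to the right department,
# bypassing the nursing station when nursing isn't actually needed.
ROUTES = {
    "nutrition": ["water", "meal", "diet", "juice"],
    "maintenance": ["light", "tv", "air conditioning", "remote"],
    "nursing": ["pain", "medication", "dizzy", "blanket"],
}

def route_request(text: str) -> str:
    lowered = text.lower()
    for department, keywords in ROUTES.items():
        if any(word in lowered for word in keywords):
            return department
    return "nursing"  # when unsure, default to a human

print(route_request("Could I get an extra blanket?"))  # -> nursing
print(route_request("The TV remote isn't working"))    # -> maintenance
```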

Neonpass includes three AI modules. The first detects patients’ anomalous behavior, on the premise that messages from patients can serve as a window into underlying medical needs. Frequent requests for water, for example, might indicate a physical problem, so Neonpass can alert nurses to check in on the patient earlier than planned for effective intervention.

“AI will analyze the last medication taken, procedures, exam, and will give a risk score so nurses can gauge severity and prioritize visits,” Pinheiro says. The AI is sophisticated enough to understand that different medications or procedures can trigger events that might otherwise be characterized as abnormal and factors these parameters into its risk score.
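
As a rough illustration of the risk-scoring idea in that quote, the toy function below weighs recent request volume against clinical context. The features and weights are invented for illustration only; HOOBOX’s model is proprietary and far more nuanced.

```python
# Toy risk score: more requests raise the score, while context such as a
# recent medication (which makes some requests expected) lowers it.
def risk_score(requests_last_hour: int,
               recent_medication: bool,
               post_procedure: bool) -> float:
    score = 0.15 * requests_last_hour
    if recent_medication:
        score -= 0.2   # some behavior is expected after medication
    if post_procedure:
        score += 0.3   # post-procedure patients warrant earlier check-ins
    return max(0.0, min(1.0, score))  # clamp to [0, 1] for triage

print(risk_score(requests_last_hour=4, recent_medication=False, post_procedure=True))
```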

Another AI module evaluates the patient’s use of the chatbot embedded in Neonpass for mental health challenges. The module can detect if the user is feeling lonely or suicidal and alert staff accordingly.

The final module delivers generative AI trained on large language models from the individual hospitals. Using Neonpass, professionals can verify safety and fall prevention protocols, for example. The solution complements existing training programs for medical professionals, who can study for certification courses using Neonpass.

AI-driven optimization also delivers business insights through a common platform so management can use information to optimize staffing depending on cyclical demand and even route nurses to floors where they might be needed more.

The final module delivers #GenerativeAI trained on large language models from the individual hospitals. HOOBOX Robotics via @insightdottech

Customized AI Models Lead to Remarkable Results

The HOOBOX team is well aware of stringent regulations regarding the safeguarding of sensitive patient health information (PHI). Neonpass complies with American HIPAA and international protocols. In addition to encrypting data using Intel hardware, HOOBOX delivers extensive employee training “to transform everyone into a human firewall,” Pinheiro says.

Every hospital in Brazil where Neonpass is in use has registered impressive returns on investment from the solution. For example, Albert Einstein Israelite Hospital in São Paulo reduced nursing requests by 54% and saved 100 hours per month for every 10 beds. And Santa Paula hospital in São Paulo saves an astounding 75% of nursing time using Neonpass.

HOOBOX tailors AI models for each hospital. While doing so in Brazil, the engineers ran into an interesting problem: Because different regions of the country have different dialects and slang, the models needed training on all of them to ensure that the AI solution understands patients from all backgrounds. The Intel® OpenVINO™ toolkit helps cut down the inferencing time of such weighty models. The solution runs on Intel® Xeon® processors with integrated accelerators, which help process and deliver insights rapidly, Pinheiro says.

The company works with medical facilities to customize and deploy Neonpass for their specific use cases—from figuring out the departments that will participate in the solution, to installing QR code plates at bedsides, to training hospital-specific AI models. Most hospitals start with the nursing, nutrition, and maintenance departments before expanding the solution to include other verticals.

The Future of Healthcare AI

Using Neonpass helps patients quickly access information about procedures, exams, and tests so they can be more involved in their own treatment. “We think this is the future; delivering the most relevant patient information at the right time is otherwise a big challenge for patients,” Pinheiro says.

He also expects Neonpass to evolve to provide continuity of care beyond the medical facility. Follow-up calls to patients decrease readmission rates, but such measures are not very scalable, Pinheiro points out. While the method of care delivery will still be through the app, moving to a wearable device is also a possibility. By delivering their API to other communication platforms, Neonpass can find new avenues by which it can prioritize patient care while decreasing burdens on medical professionals.

Neonpass expects to grow its reach beyond Brazil and expand into the North American market as well. So maybe the next time I need a warm blanket at a hospital, I will no longer need to bother the attending nurse but use the Neonpass app instead.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

No-Code AI Platform Drives Mining Safety

Preventing incidents in a mining environment involves many moving parts. First there’s the sensory overload: Drilling and ore hauling operations are loud, and underground operations are often dimly lit. Different kinds of vehicles move around at varying speeds, there are no traffic lights, and it’s difficult to gain a comprehensive view of the surroundings. Coupled with long hours on constantly changing shifts, the conditions are ripe for worker safety to be compromised.

Fortunately, the mining industry can address the challenge in hazardous environments, whether above or below ground, with computer vision AI solutions. Kelvin Aongola, CEO and Founder of LabelFuse, a no-code platform for machine learning and computer vision solutions, says incident prevention is a high priority in the mining industry, but there is no standardized way of approaching the challenge.

“Companies are looking for cost-effective ways to address the problem,” Aongola says. “Our Advanced Driver Assistance System (ADAS)—designed for the mining and long-distance trucking sectors—essentially serves as an incident prevention platform that leverages existing CCTV cameras to capture an accurate picture of driver fatigue and working conditions on the ground.”

Computer Vision AI Detects Fatigue

In the mining use case, the environment is very loud with large vehicles surrounded by smaller ones. “If you’re all the way on top as a driver, your view can be completely blocked,” Aongola says. Accident prevention involves a number of traditional methods just to keep the driver awake.

The computer vision solution captures visual cues of tiredness—droopy eyes, blinking—that might be easy for humans to miss, and sends prompts to the driver. The program also places the driver in context, understanding what’s happening in the environment around the vehicle to better predict the possibility of an adverse outcome. “We also stream these activities to a control center so if the driver has ignored all alerts, then the control center can take charge,” Aongola says. The data can also help verify insurance claims in case of incidents.
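
One well-known building block for detecting the droopy-eye and blink cues described above is the eye aspect ratio (EAR) computed over facial landmarks. The sketch below, with simulated landmark coordinates and an illustrative threshold, shows the principle; TensorGo’s actual algorithms are not public.

```python
# Eye aspect ratio (EAR): the ratio of vertical to horizontal eye-landmark
# distances drops sharply when the eye closes, signaling blinks or drowsiness.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Simulated landmarks: an open eye, then the same eye nearly closed.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]

print(eye_aspect_ratio(open_eye))    # ~0.67: eye open
print(eye_aspect_ratio(closed_eye))  # ~0.13: below a typical ~0.2 alert threshold
```

In a real system, the EAR would be tracked across consecutive video frames, with a prompt raised only after it stays below the threshold for some duration, and escalation to the control center if the driver ignores the alerts.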

Given that the AI algorithms scan the human face for signs of fatigue and distraction, privacy concerns understandably surface. But LabelFuse follows data privacy legislation and does not store personal data on the cloud where chances of compromise might be higher, Aongola says. The company also stores only metadata on-prem for not more than a few months.

While incident prevention is the current use case in the mining industry, the LabelFuse solution is equipped to lift a bigger load, Aongola says. The system can work well with ADAS and expand to autonomous driving use cases in the future. “There are possibilities to go beyond what we’re offering with the current setup,” Aongola says.

“While incident prevention is the current use case in the #mining industry, the LabelFuse solution is equipped to lift a bigger load” – Kelvin Aongola, LabelFuse via @insightdottech

The Desire for No-Code Solutions

Companies that don’t have the right AI expertise struggle with implementation. “If you see how computer vision is deployed, especially at the edge, most companies do a small proof of concept, but they are challenged to scale it up as a production-ready solution,” Aongola says. “They either struggle with fine-tuning their models or in figuring out how to use the right edge device to deploy their ideas.”

Enterprises that want to deploy AI-driven solutions are keen to work with no-code solutions so they can focus on their primary value proposition without becoming AI-first companies. No-code solutions democratize access to software because they enable even those without specialized programming skills to develop workable solutions for problems. Pre-built components and drag-and-drop functionality enable professionals to build capabilities without getting mired deep in programming fundamentals.

LabelFuse fills this need through its no-code platform that allows domain experts to simply log in and pick a model specific to a business’s operational needs.

The Intel Advantage for Edge Computing

LabelFuse relies on Intel technology for a number of reasons, including a reasonable cost. “When you’re speaking to your client, it’s easier to close that deal because the price point doesn’t require them to go through a complicated approval process; they can make a decision right then and there,” Aongola says.

Storing data in the cloud is challenging, so high-powered edge processing helps cut costs and latency. Powered by 13th Generation Intel® Core processors, the Intel® NUC delivers all the performant compute needed. The device’s compact form factor and easy installation make it a great fit for vehicles with tight spaces. And the NUC can be placed in a ruggedized enclosure for mining’s harsh environments. The well-recognized brand name is another significant plus factor, Aongola says, as the “technology has been validated, you’re not using a no-name device to help solve a problem.”

Wider Adoption of Computer Vision AI

Although LabelFuse has found ready implementation of its incident prevention platform in mining, use cases extend beyond the sector. Any industry where worker attention might flag due to busy environments—such as manufacturing, field services, or retail—can benefit from these computer vision AI solutions.

The way computer vision works is changing, Aongola says. People want solutions you can talk to, like ChatGPT equivalents for visual data. LabelFuse integrates such generative AI into edge offerings and already sees significant traction in that domain.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Bringing Digital Experiences to the Physical World

Physical workspaces—once defined by static layouts, fixed equipment, and limited adaptability—are undergoing a transformative shift. As remote work becomes more integrated, the concept of the workplace is expanding beyond physical office walls to embrace online modes, creating seamless digital experiences within hybrid environments.

This shift toward inclusivity and flexibility necessitates a reimagining of traditional workflows and communication channels. Leading this paradigm shift is Q-SYS, a division of QSC, a provider of advanced audio, video, and control systems. The company is pioneering the concept of “high-impact spaces,” engineered not just for their physical attributes but for their potential to enhance collaboration and productivity.

Christopher Jaynes, Senior Vice President of Software Technologies at Q-SYS, explains: “It’s all focused on the outcome of the space. Previously, we talked about spaces in terms of their physical dimensions—like a huddle room or a conference room. Today, what’s more significant is the intended impact of these collaborative spaces. High-impact spaces are designed with this goal in mind, aiming to transform how we interact and collaborate in our work environments.” (Video 1)

Video 1. Christopher Jaynes from Q-SYS explains the importance of high-impact spaces in collaborative and hybrid environments. (Source: insight.tech)

Redefining Hybrid Environments

Q-SYS has developed a sophisticated suite of technologies, including the Q-SYS VisionSuite, to create high-impact spaces that transform meeting rooms and collaborative spaces. This suite incorporates tools like template-based configurations, biometrics, and kinesthetic sensors to significantly improve user interaction and engagement within these spaces.

Leveraging the power of AI computer vision technology, the Q-SYS VisionSuite equips these high-impact spaces with advanced control systems capable of anticipating and adapting to the needs of participants. This adaptive technology provides personalized updates and interactions, tailored to the dynamics of each meeting.

“AI in these spaces includes computer vision, real-time audio processing, sophisticated control and actuation systems, and even kinematics and robotics,” says Jaynes.

Historically, such advanced interactions were deemed too complex and prohibitively expensive within the AV industry. Outfitting a space with these technologies could ratchet expenses up by as much as $500,000. Today, AI has upended the cost calculations. “With AI control systems and generative models, we have democratized these capabilities, significantly reducing costs and making sophisticated hybrid meeting environments accessible to a broader range of users,” says Jaynes.

Technology Powering Collaborative Spaces

Audio AI plays a starring role in high-impact, collaborative spaces. AI can not only identify speakers and automatically transcribe their dialogues but also adjust the room’s acoustics depending on the type of meeting.

A standout feature of Q-SYS is its multi-zone audio capability. This ensures that clear, crisp sound reaches every participant, regardless of whether they are in a physical or hybrid environment.

The system can also enhance the meeting’s dynamics to ensure that when a remote attendee speaks from a particular direction, the sound emanates from that same location within the room. This directional audio feature creates an immersive experience, mirroring the natural flow of a face-to-face meeting and focusing attention on the speaker.
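
One classic building block behind that kind of directional audio is constant-power panning, which maps a talker’s position to loudspeaker gains while keeping perceived loudness steady. The snippet below is a simplified two-speaker illustration; Q-SYS’s multi-zone spatial processing is, of course, far more sophisticated.

```python
# Constant-power panning: place a remote talker's voice between two
# loudspeakers so the sound appears to come from their on-screen position.
from math import cos, sin, pi

def pan_gains(pan: float) -> tuple[float, float]:
    """pan: 0.0 = far left, 1.0 = far right."""
    return cos(pan * pi / 2), sin(pan * pi / 2)

left, right = pan_gains(0.75)                 # a talker placed right of center
print(f"left={left:.2f}, right={right:.2f}")  # left=0.38, right=0.92
```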

“Today, what’s more significant is the intended impact of these collaborative spaces. High-impact spaces are designed with this goal in mind.” @QSYS_AVC via @insightdottech

Additionally, as the name implies, the VisionSuite leverages advanced computer vision. Here, it offers a multi-camera director experience, which automatically controls cameras and other sensory inputs to enrich the collaborative environment. This ensures that video distribution is handled intelligently, maintaining engagement by smoothly transitioning focus between speakers and presentations.

In the meeting space, equipped with multiple cameras, the system uses proximity sensors to detect when a participant unmutes to speak. The cameras then automatically focus on the active speaker to enhance the clarity and impact of their contribution.

The system also extends out to intuitive visual cues as well. For instance, ambient room lights turn red when microphones are muted and switch to green when the microphones are active.
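
The camera-director and light-cue behavior can be pictured as a small state machine: an unmute event selects a camera and flips the room cue to green, while a mute returns it to red. The class below is a hypothetical sketch of that logic, not Q-SYS’s control API.

```python
# Hypothetical room-director logic: react to mute state changes by
# switching the active camera and the ambient light cue.
class RoomDirector:
    def __init__(self):
        self.light = "red"        # red = all microphones muted
        self.active_camera = None

    def on_mute_change(self, participant: str, muted: bool, camera_id: int):
        if muted:
            self.light = "red"
            self.active_camera = None
        else:
            self.light = "green"  # a microphone is live in the room
            self.active_camera = camera_id
            print(f"Camera {camera_id} focusing on {participant}")

room = RoomDirector()
room.on_mute_change("Alex", muted=False, camera_id=2)
print(room.light)  # green
```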

For added security and privacy, the cameras automatically turn away from the participants and face the walls whenever video is turned off. This ensures that privacy is maintained, reinforcing security without manual intervention.

Another element is room automation, which significantly enhances the functionality and adaptability of workspaces. AI systems can intelligently adjust lighting and temperature settings, allowing these spaces to effortlessly transform to accommodate everything from intimate brainstorming sessions to extensive presentations.

Room automation AI can even help workers manage busy schedules. “Imagine you were running late to a meeting,” suggests Jaynes. “The AI, already aware of your delay, would greet you at the door, inform you that the meeting has been in session for 10 minutes, and direct you to available seating. To further enhance your integration into the meeting, it would automatically send an email summary of what has occurred prior to your arrival, enabling you to quickly engage and contribute effectively.”

Standardized Hardware Drives Digital Experiences

To make all this possible, Q-SYS leverages the robust capabilities of Intel® processors. “Q-SYS is built on the power of Intel processing, which allows us to build flexible AV systems and leverage advanced AI algorithms,” explains Jaynes.

This strategic use of Intel processors circumvents the constraints of the specialized hardware associated with traditional AV equipment. The Q-SYS approach is heavily software-driven, allowing standardized hardware to flexibly adapt to a variety of functions—providing a longer hardware lifecycle.

“It’s exciting for us, for sure; it’s a great partnership. We align our roadmaps to ensure that we can deliver the right software updates on these platforms efficiently,” Jaynes adds.

As we move toward a future where collaborative spaces and hybrid environments are increasingly defined by their adaptability and responsiveness, Jaynes believes AI is poised to reshape the way we interact and communicate in professional settings. With solutions like Q-SYS, these interactions will be more inclusive, engaging, and effective—and, quite possibly, enjoyable.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Unlock Customer-Facing Edge AI with Workload Consolidation

The way consumers and businesses interact today has changed. “In the post-pandemic era, there is an emphasis on minimizing physical contact and streamlining customer service,” explains Jarry Chang, General Manager of Product Center at DFI, a global leader in embedded motherboards and industrial computers.

As a result, there has been growing demand for integration of edge AI applications in the retail space. For instance, AI-powered self-service kiosks and check-in solutions can help reduce physical interactions and wait times by allowing customers to complete transactions on their own. These solutions can also analyze customer behavior and preferences in real time, allowing retailers to offer personalized experiences that enhance customer satisfaction and loyalty while driving up sales.

“These requirements are driving a shift towards edge AI, where processing occurs closer to the data source, reducing latency and enhancing privacy,” says Chang. “This change is driven by the need for real-time decision-making and the growing volume of data generated at the edge.”

Spurring AI Evolution at the Edge

But the problem is that businesses often struggle to find the best approach to deploying edge AI applications around their existing infrastructure and processes.

While edge AI can dramatically reduce the load on networks and data centers, it also can create new burdens locally, where resources are already constrained. The question arises: How can edge AI be deployed without adding costs and complexity?

Workload consolidation is one way these challenges can be addressed—by enabling a single hardware platform to incorporate AI alongside other functionality. The result is multifunction edge devices “capable of running multiple concurrent workloads with limited resources through features such as resource partitioning, isolation, and remote management,” Chang explains.
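
To give a flavor of the resource partitioning Chang mentions, the Linux-only sketch below pins workloads to disjoint CPU cores with os.sched_setaffinity. A real consolidation platform enforces isolation with a hypervisor or container runtime; the workload names and core assignments here are purely illustrative.

```python
# Illustrative CPU partitioning: give each consolidated workload its own
# cores so one workload cannot starve another. Linux-only API.
import os

PARTITIONS = {
    "ai_inference": {0, 1},    # cores reserved for the AI workload
    "payment_service": {2},    # isolated core for payment processing
    "signage_cms": {3},        # content management system
}

def pin_current_process(workload: str) -> None:
    os.sched_setaffinity(0, PARTITIONS[workload])  # pid 0 = this process
    print(f"{workload} pinned to cores {sorted(PARTITIONS[workload])}")

pin_current_process("ai_inference")
```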

DFI recently showcased the possibilities of workload consolidation at embedded world 2024 with a demo that combined an EV charger with an informational kiosk (Video 1). The kiosk element used biometrics, speech recognition, and an integrated chatbot to recommend nearby shopping and dining opportunities that drivers could enjoy while their vehicle recharges. Once the driver walks away, the screen launches into a digital signage mode, displaying enticing advertising for nearby businesses.

Video 1. DFI showcases the possibilities of workload consolidation at embedded world 2024. (Source: insight.tech)

The DFI RPS630 Industrial motherboard leverages hardware virtualization support in 13th Gen Intel® Core™ processors to seamlessly consolidate AI functions alongside a content management system, EV charger controls, and payment processing. Meanwhile, an Intel® Arc™ GPU is used to provide power- and cost-efficient acceleration for AI components.

DFI also uses the Intel® OpenVINO™ toolkit for GPU optimization to reduce its AI memory footprint, allowing it to run complex large language models in less than 6 GB of memory. Moreover, by offloading complex AI tasks at the edge to the Intel Arc GPU, DFI was able to support multiple AI workloads while simultaneously reducing response time by 66%.
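
For a sense of what offloading to the Arc GPU with OpenVINO can look like, here is a minimal sketch using the toolkit’s Python API: compile a model to the GPU with a latency hint and enable model caching for faster restarts. The model path is a placeholder, and the snippet is illustrative rather than DFI’s implementation.

```python
# Minimal OpenVINO sketch: offload inference to an Intel GPU.
import openvino as ov

core = ov.Core()
core.set_property("GPU", {"CACHE_DIR": "./ov_cache"})  # cache compiled blobs
model = core.read_model("chatbot_model.xml")           # placeholder model file
compiled = core.compile_model(model, "GPU", {"PERFORMANCE_HINT": "LATENCY"})
request = compiled.create_infer_request()              # ready for real-time use
```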

“These #EdgeAI use cases will all require workload consolidation platforms to enable real-time processing of customer #data and efficient operations” – Jarry Chang, @DFI_Embedded via @insightdottech

Charging into the Future of Intelligent Systems

DFI’s workload consolidation technology extends well beyond EV charging applications. The platform integrates its industrial-grade products with software and AI solutions from partners—targeting the global self-service industry for applications in retail, healthcare, transportation, smart factory, hospitality, and beyond.

Through the integration of hypervisor virtual machines, DFI consolidated all of one client’s workloads onto a single industrial PC. This system supports diverse resourcing, enabling various OS platforms to function concurrently.

“These edge AI use cases will all require workload consolidation platforms to enable real-time processing of customer data and efficient operations,” says Chang. “And as more industries and organizations adopt the technology, we expect to see another evolution.”

“The integration of edge AI with workload consolidation platforms is crucial in the deeper development of edge computing,” he continues. “There is no doubt in my mind that as hardware, software, and other technology around edge AI continue to develop, workload consolidation will become more mainstream—ultimately unlocking the next generation of intelligent edge computing applications.”

The Value of Collaboration at the Edge

Edge AI represents an immense opportunity for many industries. Chang explains that so far, we’ve really just started to scratch the surface. By pairing efficient acceleration with the right workload consolidation platform, we can start to explore what the technology can really achieve.

DFI’s partnership with Intel gives an insight into what’s necessary to support this continued advancement: collaboration. Modern edge AI applications demand a multidisciplinary approach that combines hardware, software, AI, and industry expertise.

“Embedded virtualization requires strong partnerships in hardware and software,” explains Chang. “Developing and deploying workload consolidation technology demands significant research and development resources. By partnering with other companies such as virtual integration software vendors, we can significantly reduce both development time and time-to-market.”

“And through strong partnerships such as what DFI has with Intel, we’re able to explore and develop new technologies that help define the future of edge computing,” he concludes. “We’re proud of what we’ve achieved together so far. And we’re enthusiastic at the prospect of further collaboration with Intel on workload consolidation, AI, and a great deal more.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Edge AI Detects Driver Distractions, Improves Safety

Every driver knows how hard it is to keep their eyes on the road when tired—and how easy it is to become distracted by a text message, radio dial, or cup of steaming hot coffee. For professionals, who spend far more hours behind the wheel than the rest of us, staying focused while driving is even more of a challenge.

But now, emerging Advanced Driver Assistance Systems (ADAS) based on edge AI and computer vision help solve the problem of fatigued and distracted driving in ways that traditional solutions cannot. That’s good news for everyone—and a relief for fleet management, logistics, and ride-hailing businesses.

“Distracted and fatigued driving are major concerns for enterprise safety officers,” says Srini Chilukuri, Founder and CEO of TensorGo Software Pvt Ltd., a platform-as-a-service provider focused on computer vision and deep learning solutions. “ADAS solutions use edge AI to improve on older safety systems, offering real-time monitoring, analysis, and alerts to help drivers to focus.”

And while deploying AI solutions at the edge is challenging, partnerships between computer vision specialists and hardware manufacturers help get these innovative systems into commercial vehicles and on the road.

Edge AI on a Raspberry Pi

Case in point is TensorGo’s work with Intel on its Advanced Driver Attention Metrics (ADAMS) solution. The ADAS system design is elegantly straightforward: It comprises a compact camera, an edge computing device, and computer vision algorithms that monitor for risky driving.

ADAMS runs three separate AI behavioral detection algorithms concurrently:

  • Drowsiness detection analyzes the driver’s face for signs of sleepiness, such as frequent yawning or closing eyes.
  • Head pose picks up on distracted driving by identifying instances of drivers looking away from the road, such as adjusting the navigation system or reaching for a dropped item.
  • Object detection spots when a person is glancing at a distraction such as a cell phone.

If any of the algorithms detect a problem, the system immediately alerts the driver via their mobile device and then sends a second alert to a company safety official as well.
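
TensorGo hasn't released the ADAMS source code, but the detection-and-alert flow described above can be sketched as follows. The detector stubs, confidence threshold, and notification hooks are hypothetical stand-ins:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        kind: str          # "drowsiness", "head_pose", or "object"
        confidence: float  # 0.0 to 1.0

    # Stubs standing in for the three computer vision models
    def detect_drowsiness(frame): return []  # yawning, closing eyes
    def detect_head_pose(frame):  return []  # looking away from the road
    def detect_objects(frame):    return []  # e.g., a cell phone in view

    ALERT_THRESHOLD = 0.8  # illustrative confidence cutoff

    def process_frame(frame, notify_driver, notify_safety_officer):
        findings = detect_drowsiness(frame) + detect_head_pose(frame) + detect_objects(frame)
        for f in findings:
            if f.confidence >= ALERT_THRESHOLD:
                notify_driver(f)          # first alert goes to the driver's mobile device
                notify_safety_officer(f)  # second alert goes to a company safety official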

Although the basic system architecture was established in the product development phase, bringing a working version of ADAMS to market presented challenges. The proof-of-concept ran on a bulky edge device that ultimately proved too inefficient and inflexible to turn into a viable product. TensorGo’s engineers wanted to migrate their system to a compact and energy-efficient 32-bit Raspberry Pi edge device and a Raspberry Pi camera. But it wasn’t clear how it would be possible to run multiple AI algorithms on a smaller edge device without overtaxing the processor.

Working with Intel, the TensorGo team overcame their engineering challenges. They used the Intel® OpenVINO™ toolkit to optimize and accelerate the AI algorithms to run efficiently on the compact Raspberry Pi device. Intel architects also suggested a strategy of processing fewer frames of camera video data than in the original prototype. This approach provided more than enough data for high-precision computer vision analysis—while also reducing the burden on the processor, thus improving ADAMS’ overall performance and stability.
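
The frame-reduction strategy itself is simple to picture: analyze every Nth frame rather than all of them. Here is a minimal sketch using OpenCV, where the stride value and the analyze() hook are assumptions:

    import cv2

    STRIDE = 3  # analyze every third frame; the actual ratio is an assumption

    def analyze(frame):
        pass  # stand-in for the drowsiness/head-pose/object detectors

    cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed as a video device
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % STRIDE:
            continue  # dropped frames cost nothing, easing the load on the processor
        analyze(frame)  # run the optimized detectors only on the kept frames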

Case Study Shows Improved Safety—and Cost Savings

TensorGo’s deployment with a large trucking and delivery company with operations in the Middle East demonstrates the capabilities of ADAS systems in real-world scenarios.

The company was facing an increasing number of accidents across their fleet of more than 500 trucks—with driver distraction and fatigue being identified as the main cause. Management could not accept the safety risk to drivers and the general public. They were also concerned about operational efficiency issues due to vehicle downtime and liability costs. Despite implementing driver training programs, the problem persisted.

Working with TensorGo, the company deployed an ADAMS system in every vehicle in their fleet. Within six months, the results were conclusive—the edge AI approach was a resounding success. The company saw a 32% reduction in distraction-related incidents and a 27% decrease in fatigue-related accidents. The driver attention system had also helped improve on-time delivery rates by 18%, leading to an estimated cost savings of more than $1.5 million.

“ADAS systems like ADAMS are a game changer for enterprise safety officials,” says Chilukuri. “They improve safety outcomes and positively impact the bottom line, solving key safety challenges and helping to overcome adoption barriers.”

By combining powerful #safety and cost savings benefits, #ADAS solutions are an attractive option for #FleetManagement companies. TensorGo via @insightdottech

The Future of Transportation Safety and Beyond

By combining powerful safety and cost savings benefits, ADAS solutions are an attractive option for fleet management companies, and uptake of these systems is likely to increase over the coming years.

TensorGo is preparing for this future with plans to introduce more features to its existing solution. The company is looking at ways to add a GSM module to ADAMS so that alerts can be sent directly from the edge device rather than the driver’s phone. The engineering team is also exploring how to incorporate AI collision detection models into the solution to alert drivers to potential road hazards.

Beyond ADAS systems, the solution’s underlying technology can support other use cases. The core software and computer vision technology used in ADAMS can be adapted to applications including workplace safety, assisted living monitoring, and industrial operations.

“AI and computer vision at the edge will play a transformative role in transportation, logistics, and other sectors over the coming years,” says Chilukuri. “Real-time monitoring and analysis will improve safety and efficiency across the board, and we aim to be a key player in that transformation.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Digitizing Physical Retail with Autonomous Satellite Stores

Physical retailers have long lagged behind their digital counterparts in areas like customer experience and cost efficiency. Where e-commerce can offer personalized experiences, frictionless purchasing, and automated inventory management, physical stores often struggle with inefficient manual processes as well as disconnected customer experiences.

But thanks to today’s advanced AI and computer vision capabilities, physical retail is now becoming more digital. For example, autonomous retail store solution provider Cloudpick bridges the gap between physical and digital retail with its autonomous satellite stores, offering a cashier-less micro-retail experience.

“Retailers are eager to bring their online business intelligence into the offline world, but traditional store formats with high rental costs and inflexible layouts make that very difficult,” explains Mark Perry, Cloudpick’s Head of International Business Development. “Our satellite stores let them expand sales channels at minimal risk.”

Cloudpick bridges the gap between physical and #digital #retail with its #autonomous satellite stores, offering a cashier-less micro-retail experience. @CloudpickTech via @insightdottech

Expanding Retail’s Reach with Satellite Stores

Unlike traditional brick-and-mortar stores or supermarkets, Cloudpick’s satellite stores focus on the “micro market”: They are small, flexible, and movable. These AI-powered micro-markets allow brands to tap into the burgeoning “micro-retail” or “pop-up” trend in a cost-effective manner.

These tiny stores are gaining popularity because they can be deployed in unconventional locations like corporate office lobbies, hotel entrances, and university campuses. This introduces new potential revenue streams for retailers in high-footfall areas while providing convenience to customers, according to Perry.

But finding a good location for these small stores can be risky. Despite their diminutive dimensions, setting up a traditional pop-up store is an expensive, time-consuming process—one that often suffers from unexpected delays and costs. Worse, traditional retail setups require lengthy leases and substantial upfront capital, so if sales are disappointing, relocating the store is difficult if not impossible.

The Plug-and-Play Autonomous Store

That’s where Cloudpick’s off-the-shelf model comes in. Cloudpick provisions a complete, pre-integrated hardware and software package that includes everything from the shelving infrastructure and refrigeration units to the cameras and edge AI systems. It operates as a plug-and-play solution that retailers can customize with their branding and product assortment. Everything is standardized and pre-configured, keeping customers’ total costs predictable.

Customers simply select their desired satellite store dimensions and Cloudpick handles the rest through an on-site installation team. Thanks to modular construction, a satellite store can be set up in less than eight hours and redeployed in a new location within half a day, according to Perry.

Moreover, the ability to rapidly disassemble and redeploy satellite stores reduces the risk of selecting a poor location. If a particular spot underperforms, Cloudpick can move the satellite store to another area, almost like relocating a food truck.

This unique flexibility allows retailers to experiment with locations in a low-risk manner while capitalizing on emerging customer micro-markets and high-traffic zones.

This format also offers a strategic benefit to traditional retailers: not only convenience store operators but also large franchises like Walmart or Les Mousquetaires that want to penetrate new markets and create brand awareness in urban areas.

The Cloudpick solution’s pre-configured format is built around standardization, which allows both new market entrants and existing retailers to capture previously untouched locations. “An example of an existing convenience chain playing this game is Zabka in Poland, which has rapidly launched 60 Nano stores in the course of two years,” says Perry. The retailer aims to rapidly roll out its stores in high-traffic urban locations. This additional density of stores within a small radius also makes supply chain management more efficient.

AI Delivers an Enjoyable, Cost-Effective Consumer Experience

Once deployed, these satellite micro-retail stores provide an AI-powered user experience. Customers enter the store by scanning a QR code or swiping a card. While they shop, Cloudpick keeps track of the items they’ve picked up, using a combination of cameras and weight sensors in the shelves.

Perry explains that this multimodal sensing approach increases accuracy and can determine whether a customer picked up three candy bars or just one. Additionally, it gives retailers virtually unlimited flexibility in the stock they can carry, allowing shoppers to enjoy a broad selection of items that can be easily updated to keep up with their shopping preferences.
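
As a rough illustration of how the two modalities can cross-check each other, consider a shelf's weight change reconciled against a camera-based count. The catalog weights and tolerance below are invented for the example and are not Cloudpick's values:

    CATALOG_G = {"candy_bar": 45.0}  # grams per unit; illustrative values
    TOLERANCE = 0.15                 # accept 15% deviation from an exact multiple

    def infer_quantity(product: str, weight_delta_g: float, vision_count: int) -> int:
        unit = CATALOG_G[product]
        weight_count = round(weight_delta_g / unit)
        if weight_count == vision_count:
            return weight_count  # both modalities agree: high confidence
        # Otherwise, trust the scale if the delta sits close to a whole multiple
        if abs(weight_delta_g - weight_count * unit) <= TOLERANCE * unit:
            return weight_count
        return vision_count  # fall back to the camera estimate

    print(infer_quantity("candy_bar", 135.2, 3))  # three bars lifted -> 3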

To check out, the customer simply walks out of the store—no cashier required, and no need to scan items. This is possible through Cloudpick’s AI system, which processes unified data to map product movements and ownership to specific customers, automatically checking out those individuals through an app as they exit.

With built-in mechanisms for coping with occlusions, crowd detection, and multi-camera syncing, Perry says Cloudpick’s satellite stores maintain a 98.5% accuracy rate for checkout recognition and billing despite the complexity of the autonomous shopping experience.

Maximizing ROI for a Satellite Store

Because the shopping experience is cashier-less, retailers need only hire staff to visit the store and resupply stock. These on-site visits can be optimized by a smart inventory management system that helps minimize product waste, overstock, and out-of-stock situations.

The computer vision and AI back end also analyzes shopping patterns, demographic details like age and gender, and customer traffic flows. This provides retailers with data insights similar to online retail’s user analytics and remarketing capabilities—but in physical locations.

The platform is designed to bring the data-driven profiling and marketing precision of e-commerce into brick-and-mortar retail. “Retailers can integrate our APIs to optimize product assortments, layouts, pricing strategies, and promotions based on real-world shopper behaviors,” says Perry.

All of this is made possible by Intel technology. Perry explains that high-performance, power-efficient Intel® processors are key to running Cloudpick’s computer vision models for object recognition, customer tracking, and checkout automation. What’s more, tools like the Intel® Distribution of OpenVINO™ toolkit enable Cloudpick to constantly evolve its offerings.

The Future of AI-powered Satellite Stores

Between autonomous operations, data-driven inventory optimization, and minimal real estate footprint, Cloudpick’s satellite stores provide retailers with an affordable, future-proof roadmap to micro-retail. Future integrations could include interactive digital signage for personalized promotions and immersive product storytelling, Perry envisions.

Satellite stores are just the beginning of AI’s transformation of how we shop in the physical retail world.

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Edge AI: Addressing Industrial Cybersecurity Challenges

Cyber threats in the industrial sector are a growing problem—and there are no quick fixes.

Several factors contribute to this challenge. The rise of the Industrial Internet of Things (IIoT) has connected all kinds of manufacturing equipment, control systems, and sensors to the network for the first time—greatly expanding the attack surface available to malicious actors. In addition, operational technology (OT) assets often rely on proprietary data transfer protocols and unpatched legacy operating systems, making them harder to secure than standard IT systems. And, like businesses in almost every other sector, manufacturers face a shortage of skilled security personnel, making it difficult for their IT and cybersecurity teams to cope with the increasing volume of threats.

In this difficult landscape, manufacturers require innovative solutions to address their ongoing OT security issues—and the application of artificial intelligence (AI) shows promise. But AI-based solutions can be challenging in themselves to implement in industrial settings.

“To apply AI effectively to industrial cybersecurity, you need high-performance edge computing capabilities to manage the intensive inferencing workloads,” says Tiana Shao, Product Marketing at AEWIN Technologies, a networking and edge computing provider with a wide range of solutions for the industrial sector. “Industrial environments also have unusually demanding requirements for scalability, flexibility, and ruggedness.”

The good news for the sector is that companies like AEWIN have now begun to offer edge hardware appliances that make it far easier for system integrators (SIs) and manufacturers to deploy AI-enabled cybersecurity solutions in factories. Based on next-generation processors and advanced software technologies, these solutions help security teams wield AI more effectively in the fight against cyber threat actors.

Beyond Automation: AI in Industrial Cybersecurity

While AI is not a “magic bullet” for industrial cybersecurity, it does introduce a new element to cybersecurity solutions: the ability to learn.

“AI in cybersecurity goes beyond mere security automation, because over time it can develop an understanding of what constitutes ‘normal’ user behavior and network activity,” says Shao. “AI can be used to analyze massive data sets in order to identify trends, flag risks, and detect anomalous events more effectively.”

That unique capability offers security teams some significant advantages. It gives them a better chance of detecting certain kinds of malicious activity that a legacy approach might miss. Establishing a baseline of “normal” activity also makes it possible to reduce the number of time-consuming false positive alerts.

“To apply #AI effectively to industrial #cybersecurity, you need high-performance #edge computing capabilities to manage the intensive inferencing workloads.” – Tiana Shao, @IPC_aewin via @insightdottech

Perhaps most important, through the methodology of searching for threats by identifying deviations from expected behavior—rather than by relying solely on rule-based approaches that attempt to match system activity or files to known threats—AI-assisted security tools can help security teams detect new and emerging cyber threats with greater accuracy.
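
To make the baseline-and-deviation idea concrete, here is a generic sketch using an off-the-shelf anomaly detector. The traffic features and numbers are synthetic and are not drawn from the SI's deployment:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy per-flow features: [bytes/s, packets/s, distinct destination ports]
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=[500.0, 40.0, 3.0], scale=[50.0, 5.0, 1.0], size=(1000, 3))

    # Learn what "normal" OT network activity looks like
    detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    burst = np.array([[5200.0, 400.0, 45.0]])  # activity far outside the baseline
    print(detector.predict(burst))  # [-1] flags an anomaly; [1] would mean normal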

Industrial Cybersecurity: It Takes a Team

AEWIN’s experience with an OT system integrator in the United States is a good demonstration of this.

The SI wanted to offer manufacturers a better way to detect sophisticated cybercriminal activity and speed response times, but this was difficult to accomplish using traditional methodologies. Newer threats, especially those that work by abusing or mimicking legitimate system operations, were simply getting lost in the “noise” of routine system activity, and thus overlooked.

Working with AEWIN, the SI developed a security solution that leveraged AI to analyze system behavior and learn what constituted “normal” so that deviations could be spotted more easily. The SI also used AI to help orchestrate the response across multiple controls and integrate new threat intelligence dynamically to improve defenses.

The result was an enhanced cybersecurity solution that could learn from historical data, identify patterns of activity, and detect cyberattacks that were being missed by traditional tools—while also responding to threats more quickly and becoming even more effective over time.

AEWIN’s experience highlights the benefits of partnerships between cybersecurity specialists and hardware providers—a phenomenon mirrored by AEWIN’s own experience with Intel as a technology partner.

In developing its SCB-1942 edge hardware appliance, the company worked with Intel to develop a powerful, flexible computing platform capable of handling the rigorous demands of AI in industrial cybersecurity. The device was constructed atop Intel® Xeon® Scalable processors, which offer up to 64 CPU cores and increased PCIe lanes for greater expandability.

The underlying hardware is further augmented by Intel’s range of AI accelerators. This includes Intel® Advanced Matrix Extensions (Intel® AMX), which improve deep-learning training and inferencing, and Intel® Advanced Vector Extensions 512 (Intel® AVX-512), a set of instructions that helps boost the performance of the machine learning workloads used for intelligent cyber threat detection.
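
On a Linux host, it is possible to check whether a processor exposes these accelerators by reading /proc/cpuinfo. A small sketch, using the flag names the kernel commonly reports:

    def cpu_flags() -> set[str]:
        with open("/proc/cpuinfo") as f:  # Linux-specific
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("AVX-512:", "avx512f" in flags)  # foundation subset of AVX-512
    print("AMX:", "amx_tile" in flags)     # tile registers used by Intel AMX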

“Our relationship with Intel gave us extensive technical support and early access to advanced processors, helping us bring a scalable, high-performance edge computing solution to market faster,” says Shao. “Intel processors deliver remarkable performance and can meet the demanding workloads required to use AI to analyze network traffic in real time, perform deep packet inspection, and apply security policies automatically.”

Toward Secure Digital Transformation in Manufacturing

As more and more manufacturers embrace digital transformation, cyber threats in industry are expected to increase—and cybercriminals will develop new attacks as well. Luckily, AI can help skilled security practitioners respond to evolving threats more quickly and effectively than ever before—while purpose-built hardware appliances help security teams deploy their AI tools in manufacturing settings more easily.

“We believe that the use of AI in industrial cybersecurity is only going to increase in the coming years,” says Shao. “Our mission is to support our customers by providing reliable, scalable, cutting-edge systems for this fast-growing market.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Secure Access Service Edge Protects the Network Edge

Enterprises can’t protect their assets if they don’t know what or where they are. This problem is becoming even more pressing with the growing number of IoT devices. When devices connect to an enterprise’s network, it is hard to tell the good ones from the bad and quickly sort out authorized users from intruders.

Fortunately, companies increasingly understand the importance of the network edge. Realizing that the point of entry can be varied—from an industrial IoT-based sensor to an employee’s mobile phone—they sift through points of contact to classify and fingerprint all devices trying to gain access.

Devices need to be identified and classified by type, by risk, and as sanctioned versus unsanctioned. “Sanctioned devices must pass through risk and security posture assessments,” says Dogu Narin, Vice President of Versa, a leading Secure Access Service Edge (SASE) provider. “Such a slice-and-dice methodology of granting access simplifies security while also keeping it agile.”

Unified Platform for Secure Access Service Edge (SASE)

“The SASE framework for data security accounts for the way we work today, especially with the growth of SaaS programs resulting in the ‘cloudification’ of everything,” Narin says. “Whether you’re working from home, the office, or traveling, you should be able to use the networking and security functions in a consistent way and as a service, which is the primary driver for SASE.”

Too often, checking for security robustness involves a piecemeal approach with separate operating systems for SD-WAN products, firewalls, switches, routers, and more. In many cases, these functionalities are separate and work in isolation. “It’s like needing to speak multiple languages. If one moment you need to speak English, another moment German, French, Spanish…it can get pretty complicated,” Narin says.

Worse, a lack of industry standards for device classification makes the problem even more challenging. A firewall device might label something as a social media application, whereas an SD-WAN device might find it to be something else. Such complications mean security protocols must be repeated over and over again, leading to bottlenecks in network traffic.

The Versa Universal SASE Platform stands on the SASE framework and consolidates multiple security and networking capabilities like fingerprinting, classification, risk assessment, and security posture assessment into a single solution.

Because the Versa SASE solution natively supports all protocols, it provides key advantages, among them single-pass packet processing, which decreases latency and complexity. “With the Versa OS, all the protocols and device policies are baked in, and popular IoT protocols are recognized,” Narin says.

The network administrator can focus on setting and applying policies to devices instead of having to start from scratch in identifying every entry point into the network. And administrators can carry over the Versa software to different environments. “You can deploy across the network and use only one language, one classification method, one policy engine, and one management console to achieve what you want to achieve,” Narin says.
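
To see why a single pass matters, compare it with running a separate parser for every function. Below is a toy model of the consolidated approach, with a stubbed classification step and two example policy engines; none of this is Versa's actual code:

    def classify(packet: bytes) -> dict:
        # One shared fingerprinting/classification step (stubbed for illustration)
        return {"app": "social_media", "device": "ip_camera", "risk": "low"}

    def firewall(verdict: dict) -> str:
        return "drop" if verdict["risk"] == "high" else "allow"

    def sdwan_steering(verdict: dict) -> str:
        return "best_effort_path" if verdict["app"] == "social_media" else "premium_path"

    def single_pass(packet: bytes) -> list[str]:
        verdict = classify(packet)  # the packet is parsed and labeled exactly once
        return [engine(verdict) for engine in (firewall, sdwan_steering)]

    print(single_pass(b"example packet bytes"))  # ['allow', 'best_effort_path']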

AI in the SASE Framework

The glut of data flowing into enterprise systems makes infosec especially suited to AI. Versa uses AI to isolate sophisticated zero-day malware attacks, where threat actors take advantage of vulnerabilities before developers have had a chance to identify and address them. Its malware analysis and detection mechanisms also scan for data leakage to ensure that sensitive data does not get routed to the cloud.

AI is also useful for User and Entity Behavior Analytics (UEBA), which develops a baseline for an individual’s or application’s data usage to find behavioral anomalies. When IoT devices come into play, threat actors can masquerade as legitimate devices by taking on different identities or by having unauthorized IoT sensors talk to one another. “AI helps us find these base patterns in mountains of data,” Narin says.
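
A stripped-down version of the UEBA idea keeps a rolling usage history per user or device and flags samples that sit far from the established baseline. The window size and threshold here are arbitrary illustrations:

    from collections import defaultdict, deque
    import statistics

    WINDOW = 96        # e.g., 24 hours of 15-minute samples; illustrative
    Z_THRESHOLD = 4.0  # standard deviations from the baseline that count as anomalous

    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def observe(entity: str, bytes_sent: float) -> bool:
        """Record one usage sample; return True if it deviates from the baseline."""
        samples = history[entity]
        anomalous = False
        if len(samples) >= 10:  # wait for enough history before judging
            mean = statistics.fmean(samples)
            stdev = statistics.pstdev(samples) or 1.0
            anomalous = abs(bytes_sent - mean) / stdev > Z_THRESHOLD
        samples.append(bytes_sent)
        return anomalous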

“You can deploy across the #network and use only one language, one classification method, one policy engine, and one #management console to achieve what you want to achieve” — Dogu Narin, @versanetworks via @insightdottech

Underlying Tech and Partnership

Versa uses processors and hardware offload engines from leading chip vendors. Its software is based on Intel’s open source DPDK (Data Plane Development Kit) to optimize data packet processing.

“DPDK technology uses different low-level and pattern-matching libraries and other software functions to accelerate processing of security and packet forwarding, to extract maximum processing power and achieve the lowest latency on a given hardware platform, like a branch appliance or data center device. It enables us to onboard and offer new appliances quickly, without per-appliance custom software development,” Narin says. “And we also use Intel’s high-level software libraries for a variety of reasons, including regex and other pattern matching. It’s a broad scope of partnership and leverage between the two companies.”

Versa leverages the “force multiplier” effect that service providers deliver to scale their base of customers. A good partner network with companies that understand the sophisticated technologies that Versa delivers has been a key go-to-market strategy.

The Evolution of Data Security

As adoption of the cloud increases, and with the growing use of proprietary generative-AI models, Narin expects data sovereignty to play a greater role in data security.

“You’re going to see wider use of AI-based solutions, whether it’s in the detection of problems, analyzing large data, or how we apply tools and systems,” Narin says.

Operating and deploying networks are becoming more complex, and hackers also use AI to increase the sophistication of their attacks. In turn, the infosec community will respond by developing more complex mechanisms to detect and eliminate AI-originated attacks.

The future is about improving the customer experience, which demands a solution that interconnects applications and data through a “traffic engineered cloud fabric” for seamless quality without congestion. Such a fabric runs across the globe and connects SASE gateways to sites and users and cloud-based applications. It’s the best of both worlds: SASE-based security and a stellar user experience.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Radiology Transforms Cancer Diagnostics

Radiologists spend an inordinate amount of time looking at scans to diagnose conditions, but pioneering solutions are ushering in a new frontier in cancer diagnostic imaging. AI in radiology is emerging at a particularly critical time as health systems face a shortage of radiologists, leading to higher workloads that increase the risk of errors.

The growing volume of scans that doctors must interpret only compounds these challenges. One study demonstrates the risks this shortfall poses as scans grow in volume: When doctors have 50% less time to read radiology exams, the error rate goes up 16%.

Siemens Healthineers, a leading innovator in the healthcare tech industry, developed its AI-Rad Companion platform to increase diagnostic accuracy and reduce operational burdens for radiologists. The solution demonstrates the impact AI can have across the healthcare continuum, and how this transformative technology can serve as a second set of eyes and ears for doctors to support better healthcare outcomes.

The company uses AI-powered, cloud-based augmented workflows to optimize repetitive tasks for radiologists. AI-Rad Companion leverages deep-learning algorithms to deliver insights that support clinical decision-making—acting as an assistant to help radiologists make a more accurate diagnosis.

Harnessing the Full Power of AI in Radiology

Though it will take time to address workforce shortages in radiology, AI can help close this gap, says Ivo Driesser, Global Marketing Manager for Artificial Intelligence at Siemens Healthineers.

“That’s why we said at Siemens Healthineers, ‘Why don’t we start using AI to take away the burden for radiologists of repetitive tasks like measuring lesions, the time-consuming process of looking for lesions in the lung for cancer, or measuring the amount of calcification in the heart?’ All these manual steps that doctors are doing can more easily be done by AI,” Driesser says.

AI-Rad Companion is designed to balance #automation and accuracy for doctors, while offering powerful decision support. @SiemensHealth via @insightdottech

AI-Rad Companion is designed to balance automation and accuracy for doctors, while offering powerful decision support. The solution is also unobtrusive: It integrates seamlessly into radiologists’ standard workflow, connecting to a hospital’s existing system virtually via the cloud or physically using an edge device. The solution—powered by Intel® Core™ processors and the Intel® OpenVINO™ toolkit—deploys deep-learning models that improve image recognition and processes anonymized DICOM data from CT devices. It then uses AI-driven algorithms to surface clinical insights for radiologists. AI-Rad Companion highlights lesions on medical images, streamlines the measurement of lesions to save doctors time, and in some cases helps radiologists uncover secondary conditions or pathologies the naked eye may have missed.
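
Siemens Healthineers’ models and pipeline code are not public, but the flow described above (anonymize DICOM data, run an optimized model, surface the results) might be sketched like this. The file paths, model, and tag list are illustrative:

    import numpy as np
    import pydicom
    import openvino as ov

    def anonymize(ds: pydicom.Dataset) -> pydicom.Dataset:
        # Blank direct identifiers before the image is processed
        for tag in ("PatientName", "PatientID", "PatientBirthDate"):
            if tag in ds:
                setattr(ds, tag, "")
        return ds

    core = ov.Core()
    model = core.compile_model("lesion_detector.xml", device_name="CPU")  # hypothetical model

    ds = anonymize(pydicom.dcmread("chest_ct_slice.dcm"))  # illustrative input
    image = ds.pixel_array.astype(np.float32)[np.newaxis, np.newaxis]  # NCHW layout
    findings = model(image)  # results are pushed to the radiologist's reading environment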

“We cannot say, ‘This patient has lung cancer and needs that treatment.’ It’s always a doctor who needs to do this, but we can guide the eyes of the radiologist,” Driesser says. 

Modernizing Diagnostic Imaging Delivers Better Outcomes

AI-Rad Companion has five powerful extensions: interpreting images from chest CTs, chest X-rays, and brain scans; aiding prostate assessments; and contouring organs for radiation therapy planning.

With the heart and large vessels, for example, AI-Rad Companion Chest CT can help doctors measure the diameter of the aorta. Using clinical guidelines, the tool then can alert doctors if there’s an abnormality on the scan that warrants further investigation. For chest CTs, AI-Rad Companion examines lung lesions and delivers AI-enhanced results next to standard CT data to help doctors diagnose conditions such as emphysema and lung cancer.
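
As a simplified illustration of such a measurement: the widest circle that fits inside a segmented vessel cross-section approximates its diameter. The pixel spacing and alert threshold below are placeholders, not clinical guidance:

    import numpy as np
    from scipy import ndimage

    PIXEL_SPACING_MM = 0.7    # normally read from the DICOM header; placeholder
    ALERT_DIAMETER_MM = 45.0  # placeholder; real alerts follow clinical guidelines

    def max_diameter_mm(mask: np.ndarray) -> float:
        """Estimate vessel diameter from a binary axial segmentation mask."""
        # Distance from each interior pixel to the vessel wall; twice the maximum
        # gives the diameter of the largest inscribed circle.
        dist = ndimage.distance_transform_edt(mask)
        return 2.0 * float(dist.max()) * PIXEL_SPACING_MM

    def needs_review(mask: np.ndarray) -> bool:
        return max_diameter_mm(mask) >= ALERT_DIAMETER_MM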

Some healthcare providers use AI-Rad Companion to increase their efficiency and diagnostic accuracy. Diagnostikum Linz, a radiology and imaging clinic in Austria, has leveraged the solution for chest CTs. AI-Rad Companion Chest CT is embedded within the image value chain. It applies deep-learning algorithms to DICOM data to calculate results that are then pushed to the radiologist’s reading environment for interpretation. The solution also has specific deep-learning algorithms that healthcare institutions can use for aorta assessments, so patients who need to undergo both heart and chest examinations can do so at one time.

AI-Rad Companion offers powerful 3D images and visualizations to advance the diagnostic process and reduce manual work for radiologists. With the solution’s AI-enhanced workflows, radiologists at Diagnostikum Linz have increased their efficiency by 50%, since it now takes fewer mouse clicks to access and interpret scans. They no longer have to manually measure lesions. The AI-enabled method used to calculate the diameter of lesions is the same every time, which not only saves time but also facilitates standardization that drives greater accuracy.

The Medical University of South Carolina (MUSC) has also used AI-Rad Companion Chest CT to reduce interpretation times for scans by 22%. MUSC has increased provider efficiency thanks to the solution’s AI-enhanced, post-processing, automated quantification of structures in the chest, and automated segmentation of the heart and coronary arteries. Having AI at the fingertips of radiologists allows for faster outcomes.

The Future of AI in Radiology

Radiologists are dedicated to giving patients the answers they need. Their work informs subsequent treatment, potentially enabling health systems to save more lives and deliver better outcomes. They currently grapple with manual processes that slow down interpretation times, but AI can help them optimize their workflow without compromising accuracy.

AI-Rad Companion demonstrates how AI can be a powerful enabler for healthcare providers, serving as an attuned clinical assistant rather than the final decision-maker in the diagnostic imaging process. In this way, AI-Rad Companion allows radiologists to focus less on tedious tasks, and instead use their deep clinical knowledge to drive impact where it matters most—delivering the best possible patient care.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.