Spatial Intelligence + AI = New Solutions for Retail VARs

As a graduate student at MIT, George Shaw worked on the Human Speechome Project—a study of how infants learn language. The research team discovered that the approach used to study language acquisition could also be used to understand people’s behavior.

“Even though from what we know it’s mathematically impossible, just about every child learns to speak. Clearly there was a gap in our knowledge, and the goal of the Human Speechome Project was to begin to fill that gap,” says Shaw, founder and CEO of Pathr.ai, a provider of retail spatial intelligence solutions. Who knew that this research would one day lead Shaw to help retailers better understand their customers?

Connecting the Dots with Retail Technology

By studying a baby’s babble, Shaw discovered that the regularities and interactions within an environment could be mined for valuable retail analytics. The Pathr.ai Spatial Intelligence solution uses machine learning models to track the movement of people inside stores. Spatial intelligence is a cognitive layer that sits on top of AI. “It provides higher-level reasoning and is the business intelligence layer that says, ‘Here’s what this tracking actually means,’” says Shaw.

Pathr.ai’s solution leverages existing video cameras, whose feeds are routed to a local server. There, a machine learning model detects people anonymously, producing dots moving around a map, and those dots flow into the Pathr.ai Behavior Engine.

“It’s where the playbook lives,” says Shaw. “We extract business intelligence from the movement of those dots in real time to make decisions.”

Since the AI runs on the local server, that is where Pathr.ai’s solution needs the most compute horsepower. “We’re able to run in various environments, but the most efficient and cost-effective is with systems built on Intel processors and the Intel® OpenVINO toolkit for our computer vision,” says Shaw. “With Intel, we have the best technical solution.”
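
To make the pipeline concrete, here is a minimal sketch of how a single camera frame might be turned into anonymous “dots” with the OpenVINO runtime. It is illustrative only: the model file, input layout, and detection output format are assumptions, not Pathr.ai’s actual implementation.

```python
# Hypothetical sketch: turn camera frames into anonymous "dots" with OpenVINO.
# Assumes a public-style person-detection model that outputs rows of
# [image_id, label, confidence, x_min, y_min, x_max, y_max] in normalized coordinates.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")   # placeholder model file
compiled = core.compile_model(model, "CPU")       # Intel CPU via OpenVINO
output_layer = compiled.output(0)
_, _, in_h, in_w = compiled.input(0).shape        # expected NCHW input shape

cap = cv2.VideoCapture(0)                         # existing in-store camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Resize and reorder the frame to match the model's input layout.
    blob = cv2.resize(frame, (in_w, in_h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[output_layer].reshape(-1, 7)

    dots = []  # anonymous (x, y) centroids; no identity or imagery is retained
    for _, _, conf, x0, y0, x1, y1 in detections:
        if conf > 0.6:
            dots.append(((x0 + x1) / 2 * w, (y0 + y1) / 2 * h))
    # `dots` would be handed to a behavior engine; the frame itself is discarded.
cap.release()
```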

Pathr.ai also relies on Global Solutions Distributor BlueStar, Inc., to bring its Spatial Intelligence solution to market through VARs and SIs specializing in the retail space. BlueStar works with an ecosystem of hardware, software, and other providers to build edge-to-cloud solutions designed to help retailers streamline operations and grow profitable sales. In addition to offering ready-to-deploy retail solutions, BlueStar backs them with service, support, logistics, and technology expertise.

Retail Analytics Address Today’s Challenges

Tracking dots allows Pathr.ai to address retailers’ biggest pain points. Lower foot traffic in stores due to an increase in eCommerce makes customers who enter a location more valuable, but staffing shortages can make it challenging to properly serve them.

“We’re able to optimize each customer’s experience,” says Shaw. “If they have a more enjoyable shopping experience, they may buy more things. And we can make more efficient use of each staff member’s time and ultimately require fewer staff hours, which have become scarcer and more expensive.”

For example, a jewelry counter inside a store may see only 10 customers a day. Instead of dedicating an employee to serving that small percentage of customers, the retailer can task the person with other work. When a customer needs help at the jewelry counter, Pathr.ai detects them and sends a notification to the employee.

“It’s zone defense instead of person-to-person coverage,” says Shaw. “It’s a more efficient use of the people you have available through dynamic staff allocation.”
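
The behavior-engine logic behind that kind of alert can be as simple as a dwell rule. The sketch below is a hypothetical illustration rather than Pathr.ai’s playbook: the zone coordinates, dwell threshold, and notify() function are placeholders.

```python
# Hypothetical "zone defense" rule: if an anonymous dot lingers in a service
# zone (here, the jewelry counter) longer than a threshold, alert an associate.
import time

JEWELRY_ZONE = (120.0, 40.0, 220.0, 110.0)   # x0, y0, x1, y1 in floor-map units (assumed)
DWELL_SECONDS = 20.0                         # assumed threshold

def in_zone(dot, zone):
    x, y = dot
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def notify(message):
    print(message)  # stand-in for a push alert to an associate's handheld

entered_at = {}   # track_id -> time the dot entered the zone
alerted = set()   # track_ids already alerted on

def update(tracked_dots):
    """tracked_dots: {track_id: (x, y)} from the tracking layer, refreshed each frame."""
    now = time.time()
    for track_id, dot in tracked_dots.items():
        if in_zone(dot, JEWELRY_ZONE):
            entered_at.setdefault(track_id, now)
            if track_id not in alerted and now - entered_at[track_id] > DWELL_SECONDS:
                notify("Customer waiting at the jewelry counter")
                alerted.add(track_id)
        else:
            entered_at.pop(track_id, None)
            alerted.discard(track_id)
```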

A major grocery store chain in the United States uses the solution’s real-time data to measure queue lengths and adjust the number of open checkouts. “In a grocery store, the checkout experience is a huge part of how grocers differentiate themselves,” says Shaw. “Many of them have similar products, store layouts, and promotions. So the experience you have at the checkout matters a lot.”

Pathr.ai measures queue lengths and understands wait times. The system can predict how long a customer with a full cart of groceries will wait. If the expected wait time rises above a threshold set by the grocer, the system notifies a staff member to open another checkout. If all the staffed checkouts are already open, the grocer can open more self-checkouts.
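
In code terms, the checkout logic Shaw describes might look like the sketch below. The scan rate, threshold, and queue estimates are made-up numbers for illustration, not the grocer’s actual parameters.

```python
# Hypothetical checkout rule: estimate wait time per lane from queue length and
# basket size, compare against the grocer's threshold, and escalate accordingly.
AVG_SECONDS_PER_ITEM = 3.0        # assumed scan rate
FIXED_SECONDS_PER_CUSTOMER = 30   # assumed payment and bagging overhead

def predicted_wait(queue):
    """queue: estimated item counts for the carts already in line at one lane."""
    return sum(FIXED_SECONDS_PER_CUSTOMER + AVG_SECONDS_PER_ITEM * items
               for items in queue)

def recommend(queues, open_lanes, total_lanes, threshold_seconds=240):
    worst = max(predicted_wait(q) for q in queues)
    if worst <= threshold_seconds:
        return "no action"
    if open_lanes < total_lanes:
        return "notify staff: open another checkout"
    return "open more self-checkouts"

# Example: three open lanes out of five, with one lane badly backed up.
print(recommend([[12, 40, 35], [8], [20, 25]], open_lanes=3, total_lanes=5))
```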

When expanding from smaller installations to much larger ones, such as shopping malls, Pathr.ai also had to address the unique processing, deployment, and infrastructure challenges those spaces present.

The Future of Retail Technology Is Driven by Consumers

Going even further, Shaw sees spatial intelligence eventually being used to help retailers understand other pressing issues, such as shoplifting. “Discerning that behavior in real time in an anonymous and unbiased way can help end shoplifting,” he says. “That’s a benefit not just to our enterprise customers but to all of society.”

Meanwhile, retailers need an ability to understand changing consumer expectations and act on them to stay relevant, says Shaw. “It’s up to technology providers and retailers to align with consumer desires and expectations,” he says. “We need a way to understand what consumers want when they come into a physical location and act on that. To understand behavior, we need more and better data.”

 


About BlueStar

BlueStar is the leading global distributor of solutions-based Digital Identification, Mobility, Point-of-Sale, RFID, IoT, AI, AR, M2M, Digital Signage, Networking, Blockchain, and Security technology solutions. BlueStar works exclusively with Value-Added Resellers (VARs) to provide complete solutions, custom configuration offerings, business development, and marketing support. The company brings unequaled expertise to the market, offers award-winning technical support, and is an authorized service center for a growing number of manufacturers. BlueStar is the exclusive distributor for the In-a-Box® Solutions Series, delivering hardware, software, and critical accessories all in one bundle with technology solutions across all verticals, as well as BlueStar’s Hybrid SaaS finance program to provide OPEX/subscription services for hardware, software, and service bundles. For more information, please contact BlueStar at 1-800-354-9776 or visit www.bluestarinc.com.

Interactive Touchscreens Offer New Possibilities

Businesses around the world and across every industry have a lot in common. They want to streamline operations, lower costs, increase their competitiveness, and grow revenue. Remarkably, interactive touchscreens that incorporate next-generation technologies like computer vision, voice recognition, and AI play a key role.

From shopping centers to hospitals to the factory floor and beyond, the versatility of interactive touchscreens brings new opportunities—both to these organizations and the system integrators and ISVs that serve them. And when leading touchscreen manufacturers like Elo Touch Solutions partner with Global Solutions Distributor BlueStar, Inc., SIs and ISVs can get to market faster with the integrated systems their customers need. BlueStar takes it one step further with custom solutions—backed by service, support, logistics, and marketing.

Elo Touch in Action on the Shop Floor

In industries with traditional workflows, interactive touchscreens can offer significant opportunities for optimization.

This was the experience of the manufacturer Magnum Piering. The company has been very successful—and its success means taking on larger orders. But in a production facility where space was already limited, that wasn’t easy.

Its operations followed a standard machining process, but one not optimized to save space. Designs were created and configured in one area of the building and then loaded into a machine on the shop floor. The company wanted to consolidate these functions into a single system so its machinists could do everything in one place.

To accomplish this, it invested in a Hypertherm plasma cutting system, which is controlled through direct human interaction. But a harsh manufacturing environment brings safety risks: there are sparks, dust, and metal fragments in the air, and workers wear heavy protective gloves. To solve these problems, Elo Touch deployed open-frame interactive touchscreens as the input devices for the Hypertherm machines. Because the platform allows a high degree of customization, engineers were able to optimize the device drivers and input settings so the system could be used with gloves on.

The result is a turnkey solution that dramatically improves Magnum Piering’s machining workflow. With more than 6,000 touchscreens installed throughout its facilities, the company can process orders of up to 500 tons—more than double its previous capacity.

Self-Service Shines at the Cinema

Touchscreens also help businesses respond to changing customer preferences. The Kerasotes Showplace Icon Theatre is a prime example. Kerasotes Theatres has been in business since 1909, but like many companies in the hospitality and entertainment sector today, it’s having to adapt to new customer expectations.

“In the ‘new normal,’ customers are demanding convenience above all else,” says Rick Smith, Director of Business Development at Elo Touch. “There’s a lot of research about what makes customers decide to remain at a site and make a purchase or not. Overwhelmingly, we’re now seeing that customers will leave if the ordering process becomes inconvenient to them.”

At the movies, this means that if customers are running late, or if it’s a busy night, they may decide to skip the candy and popcorn altogether: an unacceptable loss for a business that makes so much of its profits from concessions.

Working with Kerasotes, Elo Touch set up self-service ticketing and concession kiosks based on its all-in-one I-Series interactive touchscreens. Now moviegoers enter the theater and buy tickets and concessions at one of the many kiosks in the lobby—each unit capable of serving up to 350 customers per day.

The result has been an improvement across the board. Wait times are down and customer satisfaction is up—as are concession sales.

Modular Design: The Key to Versatility

It’s pretty clear why businesses want to use touchscreens. But the main reason they can be deployed in so many different settings is their inherently modular design.

To begin with, the touchscreens themselves are diverse. The Elo Touch lineup, for example, ranges from handheld models with the form factor of a smartphone all the way up to large-format models with 65-inch displays.

A touchscreen unit with Elo Edge Connect built in can be customized with peripherals: barcode scanners, webcams, NFC and RFID readers, status lights, biometric scanners, and so on. Touchscreen providers also extend this basic modularity with compute devices that control more complex solutions. Elo Touch, for example, offers the EloPOS Pack, a point-of-sale controller that supports custom configurations with up to 15 different peripherals.

Intel technology makes this modular design possible: “Intel processors give us an extraordinarily flexible and stable platform for building custom touchscreen solutions,” says Smith. “They provide the high performance required by the intensive workloads of advanced applications.”

The Future of Interactive Touchscreens

For Smith, the continued evolution of interactive touchscreens is inevitable. One clear opportunity for transformation lies in self-service applications currently handled through touch alone. Computer vision, for instance, could be used to identify the products a customer intends to purchase and speed up self-checkout.
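
As a rough illustration of the idea, a kiosk could classify a camera snapshot of the item a shopper presents and add the top match to the order, falling back to the touchscreen when confidence is low. The sketch below uses a Hugging Face image-classification pipeline with a hypothetical, store-specific model name; it is not a feature of any Elo Touch product.

```python
# Hypothetical vision-assisted self-checkout: classify a kiosk camera snapshot
# and add the best match to the order, or fall back to touch input.
from transformers import pipeline

# "store/product-classifier" is a placeholder for a model fine-tuned on the
# retailer's own product catalog.
classifier = pipeline("image-classification", model="store/product-classifier")

def add_item_from_camera(image_path, order, min_confidence=0.8):
    predictions = classifier(image_path)   # list of {"label": ..., "score": ...}
    best = predictions[0]
    if best["score"] >= min_confidence:
        order.append(best["label"])        # no touch entry needed
    else:
        print("Low confidence: fall back to touchscreen product search")
    return order

order = add_item_from_camera("kiosk_snapshot.jpg", order=[])
```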

As Smith sees it, this integration of technologies will be a good thing for both businesses and customers: “People want to make their own choices. Your customers want to choose their own journey. The perfect system will be the one that offers touch, voice recognition, computer vision, and AI in combination—and lets the user drive the interaction.”

In a sense, then, the changes coming to interactive touchscreens are really a way of fulfilling their original purpose: to put control back in the customer’s hands.



Industrial Edge Computing: Strategies That Scale

Edge computing is quickly becoming known for its ability to improve business operations by reducing latency, delivering high performance, and providing real-time insights. But despite its promise, many businesses—especially in the industrial space—struggle to successfully adopt edge computing.

One challenge is that industrial edge computing can be complex to scale. While it is relatively easy to get started with a few edge devices, managing a large-scale edge computing infrastructure can be daunting. Additionally, the lack of a single edge computing standard can make it difficult to integrate edge devices and applications from different vendors.

To overcome these challenges, businesses need to take a strategic approach to edge computing adoption. This includes carefully planning their edge computing architecture, selecting the right edge devices and applications, and implementing a robust management and orchestration framework.

In this podcast, we discuss the state of edge computing across different environments, how businesses can successfully approach edge computing, key challenges and how to solve them, and what the future holds for edge computing in this space.

Listen Here


Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: CCS Insight and Intel

Our guests this episode are Martin Garner, Head of IoT Research at CCS Insight, and Dan Rodriguez, Vice President and General Manager of the Network and Edge Solutions Group at Intel.

In his role, Martin focuses on industrial IoT use cases, and recently helped put together the IoT Initiatives to Scale Industrial Edge Computing research paper.

In his more than 25 years at Intel, Dan has spent a majority of his time in the network and edge space helping to lead various industry transformations.

Podcast Topics

Martin and Dan answer our questions about:

  • (2:49) The state of edge computing
  • (7:04) Different edge opportunities for businesses
  • (11:14) Why adoption is more challenging for manufacturers
  • (15:02) How to successfully approach edge computing
  • (20:52) Lessons learned from industry examples
  • (24:08) The edge computing ecosystem and partnerships
  • (26:36) Future proofing investments and efforts
  • (28:00) What’s next for edge computing

Related Content

To learn more about adopting edge computing, read IoT Initiatives to Scale Industrial Edge Computing. For the latest innovations from CCS Insight and Intel, follow them on Twitter @ccsinsight and @intel, and on LinkedIn at CCS Insight and Intel Corporation.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about the state of edge computing with Martin Garner from CCS Insight and Dan Rodriguez from Intel. But before we jump into the conversation, let’s get to know our guests a bit more.

Martin, welcome back to the show. For those audience members who haven’t listened to any of the other great episodes you’ve been on, please tell us more about yourself and what you do at CCS Insight.

Martin Garner: Well, thank you, Christina. So, I’m Head of IoT Research at CCS Insight, and we’re one of the leading analyst firms in the tech sector. And my focus is mostly on the industrial use of IoT across quite a range of sectors and technologies and things. And I’ve worked with Intel and Dan for quite a long time, including doing a report on edge computing, which is an accompaniment to this podcast.

Christina Cardoza: Yeah, absolutely. Excited to dig more into that report that you guys did. But before we get into it, Dan, please tell us more about yourself and what you do at Intel.

Dan Rodriguez: So, first it’s truly great to be here with both you, Christina, and Martin. Really excited about this conversation and talking about the future of edge computing. And in my role at Intel I am the VP and General Manager of the Network and Edge Solutions Group. And I’ve been at Intel for over 25 years, and during that time I spent most of my time working in both a network and edge space.

And in that 25 years, I will say the bulk of that time was within the telecommunications industry, but I also spent some time in other industry sectors. And I will say that it’s been very exciting for me to see many different industry transformations through the years and have the opportunity to truly participate and help lead the shift to NFV in telecommunications. And with that we’re starting to see additional shifts to more of a software-defined infrastructure in areas like manufacturing. And I know that we’re going to spend a bit of time today talking about that.

Christina Cardoza: Yeah, absolutely. And it’s great that you guys have such an in-depth knowledge and background in edge computing, since that’s exactly what the conversation is going to be around today. I think by now edge computing is pretty well known in the industry for being able to bring computation and data closer to where it’s generated, which allows for the real-time insights, high performance, and low latency that the businesses really need to succeed in today’s modern world.

So, we know what it is, but companies and organizations don’t really always know how to get there. So that’s exactly where I want to start off this conversation today. And Dan, I’ll start with you—just, with your vast knowledge in this space, 25 years at Intel, where do you see edge computing today? What are the trends and challenges you’re seeing, and what has been the uptake in its adoption recently?

Dan Rodriguez: Yeah, I mean, first off, we are truly seeing how both edge compute, as well as use of AI, is really driving an incredible amount of change across all sorts of industries and really fueling digital transformation. And when you just take a step back and you think about it, digital transformation is really all around us as companies are looking to truly automate their infrastructure to improve everything from operational efficiencies to enabling new operating models, but also provide them new monetization opportunities.

So when you think through this, it’s really about how companies are looking to save money or manage their TCO, but also make money of course. And with the advent of AI plus the advent of 5G, I believe those two things will only accelerate this trend. And if you think about just one industry for a second—manufacturing—we’re already seeing customers start their journey on AI. And where they start their journey, it shouldn’t be too surprising, because obviously when you’re running a manufacturing plant you want to take—you got to take—measured risks.

So when they’re thinking about AI to start, they’re doing simple things like utilizing it for supply chain management by maybe having autonomous robots help them stock or pull inventory. But I will say they’re quickly advancing, and they’re looking at how to utilize computer vision and use AI to assist with things like defect detection to help them with their overall product-quality assurance.

Christina Cardoza: Yeah, it’s amazing to talk about digital transformation. We’ve been talking about it for quite a bit, but it keeps changing. How you digitally transform, like you said, it’s a journey, and first it started with the cloud and AI and now with edge computing. So it’s interesting to really learn how companies are really taking hold of their digital transformations, and the new and exciting technologies they can always leverage.

Martin, you mentioned in your intro that CCS Insight just did a report on the state of edge computing, and of course that’s available on insight.tech—we’ll link that out for any of the listeners that want to learn more about that. But I’m wondering, from that report, what you’ve learned about edge computing: where we are, what the challenges are, so to speak.

Martin Garner: Well, yeah, thank you, Christina. I think the first thing I’d say is that, as Dan has already said, there’s a huge amount of it already out existing across all industries, including quite a lot that you might not even think of as edge computing. Things like industrial controllers and what have you are increasingly inside what we now think of as edge computing. And that highlights a couple of things about the whole edge computing space.

So, one is that it’s very broad: it runs from sensors all the way out to local-area data centers. It’s also quite deep: it goes from the infrastructure at the bottom up through the networking, through applications to AI—which Dan has already mentioned and we’ll come back to. And because of those two things, it’s quite a complicated area. There are lots and lots of technology areas, lots of individual technologies within that, and quite a lot of change on the supply side.

Most of those are good changes, making it easier to use and more manageable. So there’s quite a lot of progress there. In adoption terms, we think there are three big drivers. One is IoT. It has always been one of the big drivers; it still is. And what we’re finding there is that people are generating such high volumes of new data that they need to analyze on this—analyze and kind of deal with this in near real time using analytics, machine learning, or AI.

Recently though, telecoms guys have become very interested as a supplier into edge computing with multi-access edge computing and private networks. And, lastly, the economic climate—we’re all in it at the moment. Many companies are kind of reviewing their cloud spend, and that is a bit of a spur to do more with edge computing, because although they’re reviewing their cloud spend they’re still generating more data; they still want to do more with it and edge computing helps with that.

Christina Cardoza: Great. I want to dig into something that you both mentioned: how edge computing—it’s really spanning across all these different industries. Every business can take advantage of edge computing to be more successful in their operations and their business. So, Dan, I’m wondering if you can talk a little bit deeper into what those opportunities are for businesses across industries.

Dan Rodriguez: Absolutely, Christina. But maybe before I dive in to specific examples in industries, I’d like to just come back to what I mentioned earlier around making money and saving money. Because I will say, as a general manager, that’s what I think a lot about when I drive my own business, but also as I approach customers. That’s how I think about those two angles when I help them solve different problems or different challenges.

So, first, when you think about how to save money—companies, they want to have more control. They want to find ways to optimize their operations, their costs, their data. And we think about this current environment we’re in, we’re all—we’re seeing lots of macroeconomic challenges. It’s very volatile out there. You’re seeing supply chain challenges, you’re seeing unstable energy production, as well as there’s just challenges and sometimes labor force shortages today. So, lots of opportunity here.

And then when companies think about making money, of course edge AI can help here too. And you think about computer vision for a second here—it can provide all sorts of valuable insights and help improve the overall customer’s experience, as well as help ensure that stores can do many different great things, including even helping them with their merchandising strategy.

So let’s talk a little bit more about retail. And when you think about retail, to save costs—one of the biggest costs that retailers have is theft. And believe it or not, it’s a $500 billion-a-year problem. And through the use of computer vision with AI you can help attack this problem by utilizing techniques and technology to help you prevent theft at the front of the store—so at the checkout area—the middle of the store where you sometimes you get in-aisle shoplifting, or even in the back of the store where sometimes you have theft in warehousing and distribution centers.

And then when you think about how retailers make money, they can utilize AI in all sorts of new and interesting ways. First up, they can—AI can help them with shopping experience and driving overall more sales. It can also help with insights to provide the effectiveness and provide feedback on different merchandising-display strategies. It can quickly identify when there’s out-of-stock items on store shelves, and it can also just help keep stores more clean. So sometimes it’s very simple things that can really lead to better results for retailers.

And then one more example that I love to hit, just kind of quickly. So, first, when you think about manufacturing, which I did mention earlier, it is going through a massive transformation. And when you think about the massive transformation that manufacturing is going through, it’s really looking at the types of infrastructure that gets deployed. And then, generally speaking, they’re moving away from what I would call fixed-function appliances—or an appliance that’s doing one thing very, very well—to more software-defined systems that are easier to manage, upgrade, as well as to control different elements on a manufacturing floor.

And through this process you’re seeing these diverse kinds of manufacturing processes get streamlined onto fewer and fewer software-defined platforms, which of course increases the overall efficiency and reduces the infrastructure’s complexity. And with that, once you have this software-defined infrastructure in place, then you start combining with the use of robots, with sensing, with 5G and AI. And then you can do all sorts of magic across a factory floor to help you with everything from inventory management to defect detection. So, truly a ton of opportunity out there across many different vertical markets in edge computing.

Christina Cardoza: Yeah. And I think you hit on the biggest benefits that businesses are really trying to get, which is to save money, make money, have better control, better optimization, better operations. So I think we hit on—we know why manufacturers or why all industries want to move towards this, but, like you mentioned, manufacturing is going through a massive transformation, probably one of the biggest transformations out of all the other industries and one of the hardest, because manufacturers have it a little bit more difficult with their infrastructure in place, and they can’t always have downtime: they can’t make these changes and then stop the entire factory, because then that’s going to stop the whole entire production lines.

So, Martin, I’m curious to hear from you since you focus a lot in this industrial space and we’re talking about manufacturing, let’s dig a little bit deeper into that and what challenges do you see manufacturers face as they try to transform their operations with this edge computing approach?

Martin Garner: Sure. And I don’t mean—in saying this I don’t mean to knock the opportunities at all—they are huge—but honestly there are a few challenges. Now, some of those are faced by everybody who’s trying to use edge computing. The first one is scale. Edge computing is one of those technologies where it’s quite easy to get started and do a few bits and pieces, but as soon as you scale it up in the way that some of the manufacturers need to do, then it all becomes a bit more tricky. So now the larger players are going to have thousands of computers on tens of sites across, say, seven or eight geographic regions, and they have to keep all of that working, updated, and secure, and synchronized as if it was a single system to make sure that they’re getting what they need out of it.

Now, linked to that, with a large estate of edge computing you end up with a really big distributed-computing system, and then you have things like synchronization of clock signals, synchronization of machines, synchronization of data posts into databases. And all of those can be a bit tricky, and not everybody is just good at them to start with. On top of all of that we have different types of data going through the system, a different mix of application software, some cloud, some multi-cloud, some local data. All of that needs a proper architecture—that architectural complexity is also there.

But there are a couple of other challenges which are maybe specific to manufacturing and production industries. So, one is real-time working. This is a special set of demands that, by and large, IT doesn’t have. So, in the manufacturing industries you often have feedback loops which are measured in microseconds. You have to get the feedback there in a very, very short time. You have chemical mixes measured in parts per million. And so timeliness and accuracy are incredibly important here. And what’s really important is that that’s a system-level thing; it’s not just one component, it’s the whole system has to cope with that.

And then, Dan has already touched on this, the sort of robustness of the system. Many factories work three shifts per day, nonstop, 364 days a year. And an unplanned stoppage is a really expensive thing—millions of dollars per day in many cases. And so all of the computing has to support that. And so now we’re talking about things like system redundancy, hot standby, automatic failover, so that if something goes wrong the system doesn’t stop. Now what that means is that you have to be able to do software patches and security upgrades live without interrupting or rebooting the systems at all.

It also means that if you need to expand the hardware—say you want to do a new AI algorithm and test it out on the production line and so on, you’ve got to be able to put that in without stopping the production line. So hardware and software need to be self-configuring and cannot break other things down. And, again, those are constraints that IT doesn’t have. So in the industrial area we need to get used to those as things we have to work with.

Christina Cardoza: Yeah, absolutely. And a little bit of a downer, because manufacturers have all of this momentum to change and want to be successful, and then they hit all of these roadblocks that you mentioned, Martin, on their way to this digital transformation or change. So, since you mentioned all of these various problems and challenges that manufacturers are facing, I’m wondering what can they do about it? What have you seen? How have you seen manufacturers successfully approach this?

Martin Garner: Well, yes, the challenges I listed made it seem a little bit gloomy, but it’s not. So, first thing we would recommend, and have seen people doing, is don’t build your own infrastructure. It is tempting when you start designing a system to kind of design your own networking to go with it and so on. But it’s too slow, it’s too much resource, too expensive over time, and it’s a specialist area. And a bit like loading apps onto a mobile phone, there are good ways to do it and there are bad ways to do it, and everybody needs to think in a similar sort of way with edge computing.

Second thing is to design the system around modern IT and cloud-computing practices. Edge computing needs to work well with cloud services, and that should be almost seamless across the two. In practice a lot of edge computing architectures are very similar to the cloud at many levels of the stack. The main difference is that the machines are smaller and a bit more constrained. And of course in edge computing there are lots of good technology frameworks to choose from. So that what that means is most of the customer-design work can focus at the application level, and that’s where it should be; they shouldn’t be designing all of the stuff underneath.

Now the third one—in the operations-technology world typically we see that equipment and software lifetimes are 10 to 20 years in factories. We think with edge computing it’s sensible to plan for shorter lives, 5 to 10 years or so. And the reason is that the data volumes are going up and up and up and up, and the more data you get, the more you want to do with it, and the more you can do with it. So you’re going to need more AI, more edge computing capacity, and you’re going to have to expand what you have quite quickly.

And then the very last one—historically a lot of customers prefer open-source software to avoid vendor locking, but they actually then see it as something that they have to support as a specialist thing in-house. Now actually the supply side on that is changing a lot, and there is good commercial support for open-source systems, and so we can start to see those alongside the commercial-cloud offerings. So there are a number of significant changes in the way that people can approach this which make it more likely they’ll get a successful outcome as they build out edge computing.

Christina Cardoza: So, a lot of challenges but not impossible, just having a strategic approach to all of this. Dan, I’m wondering, from your perspective, you mentioned manufacturers now taking advantage of things like the Industrial Internet of Things, AI, data analytics to prepare their factories for the future and keep up with these changes. So, wondering how you’re seeing manufacturers approach or adopt this type of technology. Any lessons learned?

Dan Rodriguez: Yeah. So I think—look, I’m going to kind of frame it in terms of the journey that they’re on, and I think that the first part of this journey, it’s really what I mentioned earlier: it’s really the movement away from single-function devices or fixed-function appliances to more of a software-defined infrastructure. And once you move to a software-defined infrastructure then you can consolidate multiple workloads on fewer and fewer devices, and that can have a huge impact on both flexibility and agility, as well as just overall TCO.

So when you step back and you think about what that looks like in practice, historically you may have seen three or four different devices, all with their own computer systems—they all have the independent computer systems. With software-defined systems you can centralize those workloads into a single device and still meet the needs of time-sensitive applications. Now, Martin did mention the phones, so I can’t resist this analogy: can you imagine if you had a specific phone for each application that you had? That would be kind of difficult for you to manage. So you could think about the same thing on a factory floor. Yes, they know how to manage this today, but think about how much more easy it would be, how much complexity we would reduce, if you could actually load more applications onto fewer software-defined infrastructures.

So with that, let’s take kind of a simple example. Let’s think about adding machine vision into a robotic platform. Historically, again, you would just add a dedicated computer for this. However, if you think about where companies are going—and really the future is that you’ll have servers, and servers will host most of, many of these software workloads, and then you’ll be able to provide automated updates in a much more controlled fashion to make it much more easy and efficient to operate and maintain all these different robotic platforms that are going to be across your factory floor. And then when you think about this future, and you think about having this consolidated server infrastructure with this more software-defined layer there, you can also layer on all sorts of new capabilities—everything from quality control, defect detection, to situational awareness.

So when you think through this, and you think about really being a manufacturer, and you really obviously have to step back and you think about that’s how to solve that business outcome that you’re seeking. And when manufacturers go through this process they really think about what processes they have, what technology they have access to, and what technology they can deploy. And then thinking through the data sets that you can use to capture information to analyze and really help make the best possible decisions overall. Whether, again, it’s that quality assurance or it’s just managing inventory.

Christina Cardoza: Yeah. And building on that phone example, if you had to have a phone for each application that you have obviously that would limit you on the number of applications that you would have, or in the manufacturing space where we’re talking about, that would limit you in the changes and transformations that you are able to make. So, good to hear the different changes and approaches that are enabling manufacturers to really be a part of this movement.

Martin, I’m wondering if you can provide us with some examples. We’ve talked about the approaches, but do you have any specific examples you can share with us on exactly how these industries or businesses have used these approaches and what the outcomes were?

Martin Garner: Yes. So, we’ve found a couple in doing the research for the piece that’s available as a download, found a couple that were really quite instructive. One came from a large oil and gas company, and I was quite surprised but I saw the logic. They run three completely separate networks. So, they have their OT network, where all the machines link together. They have an analytics network for doing all of the detailed work on the data. And they have their IT network for everything else. And they, for security reasons, they insist on an air gap between those. And so we can’t just connect edge computing up to all of them and make good sense of the whole lot across the piece. They have to be separate in this company. And that clearly adds quite a layer of complexity to how you tackle things; it doesn’t prevent you doing anything, you just have to think about how to do it in a way that suits them.

The other one, the other example we found highlighted the scale issue. It actually wasn’t manufacturing; it was a hospital, and they were installing a mesh network to keep track of ventilators and other key equipment and gather information from sensors. And they did a trial with battery-powered nodes and sensors and a five-year battery life from the supplier. That all sounded great and they did the trial. That all went well and they loved it. But they realized that as they scaled it up across a very large university hospital they would have thousands of devices with batteries to monitor. They would always be changing batteries somewhere, and they’d risk a lawsuit if they hadn’t done it, which is really dangerous.

So they asked the supplier to produce mains-powered versions instead, because you can’t always get power and connectivity to all the places and the sensors that you need. So the lessons that came out of that for me were the suppliers have to design in the scale that they’re going to face and the security to support the computing from the start. And the customers need to think big at the design phase. They need to work out: “Well, how big could this be? And what would work in that scenario? And then let’s start from that premise and take it on from there.” But I think, as Dan mentioned, it is a journey and you learn a lot as you go through.

Christina Cardoza: Yeah, absolutely. And one thing that comes to mind as we’re talking about this is we’re talking about these big changes in these organizations, and some of the examples you just mentioned—manufacturing space or hospital—these are large bodies of businesses that it takes a lot to be able to, like you mentioned, scale and just do this successfully.

And a theme that we always have at insight.tech—we talk to a lot of different partners and companies—is it seems that nobody is doing it alone: you can’t go on this journey without help from others. And, Dan, I know Intel has a whole ecosystem of Intel Partner Alliance members that can help—bring some expertise, or help manufacturers and other businesses along their journey to be successful and to adopt edge computing. So I’m wondering if you can talk a little bit about the importance of partnerships and how you guys leverage them.

Dan Rodriguez: Absolutely. And when you think about the overall partnerships, I do think about creating an overall ecosystem that is truly diverse and utilizes open and standard platforms. And I think that that was really vital in the—you think about the transformation that happened in the telecommunications industry and the movement to NFV, but it’s also going to be incredibly important as we drive towards this change in manufacturing as well as other industries. So I’ll leave you with maybe one example here.

We’re involved in something called the Open Process Automation Forum, and it’s truly a great place to really democratize technology advancements in manufacturing. And as an example of this we’ve been working with companies like Schneider, Exxon, Dell, and VMware, who came together with us to pull together and drive new and open industry solutions and deliver these open technologies in a field trial to really showcase how you can utilize next-generation automation techniques across manufacturing.

And I will say it does take forums like this, like OPAF, to truly make something like this happen and also really scale it across the industry. So when I think about the ecosystem, it’s incredibly important for the overall health of the market to have a very vibrant ecosystem that utilizes standards and open-based technology so the community can not only have a lot of vendor choice but you’re also increasing that overall innovation spiral and advancements in technologies to really solve those business outcomes that manufacturers are seeking.

Martin Garner: If I could just add one thing onto that. I think I mentioned earlier that edge computing is broad, deep, and complicated. And from what we can tell, very few customers can take on all of that. Very few suppliers can take it on either because they tend to specialize in certain things. And so actually most of the systems we’re talking about will need to be designed with three to five players involved. And I think that’s the expectation we should all bring to this, that it’s going to be a team effort all round.

Christina Cardoza: Yeah, that’s a great point, Martin: it definitely takes a team to make all of these changes happen, and especially the theme that you guys mentioned is to not only make it happen but to be able to scale it. We’ve mentioned things like utilizing standards and not being locked into things.

So, Martin, I’m wondering if you can expand a little bit on that. As businesses start to make these changes, as they want to scale and look towards the future, how can they make sure that the investments or the changes that they’re making today really future-proof their efforts so that they’re able to be flexible as their approaches and their needs change in the future?

Martin Garner: Yeah, and I think we probably covered some of this a little bit already. So the core of it is to build a system that’s as flexible as possible. And in practice that means using commercial hardware and doing as much as you possibly can with the software on top. Anything that’s sort of locked between the software and the hardware is much less flexible and will—the age of that will tell.

The other thing is to emulate things that really work well in other tech domains. And we talked about mobile phones, but I think the app-store concept as a way of being able to download software—one of the great benefits for manufacturers is the flexibility that edge computing can bring. If you can just download some software, reconfigure your machines, and start producing something different fairly quickly, that is a huge benefit that they’ve just never had before. Now obviously an app store or a phone are a bit of a simplification when it comes to factories, but I think the concept is good and we need to kind of embrace that sort of idea.

Christina Cardoza: Yeah. And as we’re talking about future-proofing and looking towards what’s coming next, Dan, what do you see coming next? How will this space continue to evolve over the next couple of years, and how will the role of edge computing change in industrial environments as we move forward?

Dan Rodriguez: Absolutely. And I would think about this—as we all know, things need to go through phases and it is truly a journey. So the first phase, as I mentioned, it’s really about that migration towards software-defined infrastructure, with workload consolidation supporting multiple applications on fewer and fewer servers or devices. And once that’s established then all of a sudden you can do all sorts of cool things with AI and inferencing to really help you across your factory floor, improve the overall output of your production, but also do statistical analysis to even help with the health and preventative maintenance of all sorts of equipment on your factory floor.

And then I will say obviously generative AI is all the buzz across many industries today, and it will, over time, be incorporated in this strategy as well. And I’ll say that it’s going to be super exciting to see all the gains in production, reduction in defects, and also the use of new simulation and modeling techniques of that factory in the future.

And then, Christina, you did mention healthcare earlier today, so I do want to just maybe say a couple quick things on that as well. And when you think about just the broad role of AI and edge in healthcare, it’s going to do many different things including even helping physicians—really assist them—and helping to improve patient diagnosis as well as treatment. Really enabling them to have more timely detection as well as decision-making.

In addition to that, across healthcare we are starting to see the broad use of distributed computing to enable all sorts of AI-based use cases, including drug discovery and diagnostic tools to truly power connected digital hospitals and labs, and also support real-time data analytics across that entire kind of hospital footprint. So, lots of exciting times across manufacturing, healthcare, and then obviously I talked a little bit about retail earlier.

Christina Cardoza: Yeah, absolutely. Martin, is there anything you wanted to add or that you found from the State of Edge Computing report that you guys did? How the role of edge computing is going to change or evolve, not only in manufacturing but as we get into some of these other industries as well, like retail and healthcare.

Martin Garner: Yeah, sure. Thank you, Christina. There were a couple of things that came out that—not so big right now, but you can kind of see them coming, and enough people that we talked to sort of waved a small flag to say, “Keep an eye on this one, because it’s coming.”

First one is around the fact that manufacturing processes for the company—those are mission critical, and any unplanned downtime, as we said, is really, really expensive. So there’s a key question about how do you learn from things that have gone wrong and ensure that those mistakes are not repeated. Now, the aircraft and the processing industries have always been quite good at this. They have this concept called functional safety, and their aim is to make systems more and more resilient when things go wrong by making sure that the failure modes are understood and mitigated, and progressively they build in new scenarios so that they can kind of cope better under fault conditions. That to us looks like an important area for more general use across manufacturing, although just today it’s not so large.

And another one is linked to the industrial robustness that we mentioned, and I really like this one. If an application can run on one machine and automatically switch over to another one if there’s a failure, well then you get a question about, well, which is the best machine for it to run on normally? What’s the optimum setup for these thousands of computers and all the applications? What’s the right way to do it? And as soon as you think about that, you realize that optimum could mean fastest, it could mean the lowest latency, it could mean the highest uptime, cheapest on capital costs, cheapest on operating.

There are lots of different parameters that you could optimize for here, but really it’s all about optimizing the system in different ways for different things going on. And customers won’t want that to be a complex process; they’d like it to be automatic if possible. We haven’t found anybody who’s actively exploring this yet, but we do expect it to become a thing fairly soon in edge computing, and to see some sort of software tools come into the market that allow you to improve the setup of your edge computing estate over the next few years.

Christina Cardoza: That sounds great, and we’ve been talking for a little bit now. Such a large conversation—I always feel like we only scratch the surface in some of these conversations. I know there’s still a lot more to learn in this space and a lot more for edge computing to go, but unfortunately we are running out of time. So, before we go, I just want to throw it back to each of you—any final thoughts or key takeaways you want to leave our listeners with today? Dan, I’ll start with you.

Dan Rodriguez: Yeah, no—I appreciate the conversation today, and I will say there are a few final thoughts that really come to mind for me. First, edge computing is really fundamentally changing nearly every industry, from retail to manufacturing to education to health as well as transportation. And second, when you combine edge computing alongside AI and 5G it’s driving a lot of transformation. It’s allowing IT, OT, and CT to truly converge and really accelerate digital transformation, and this truly creating a massive opportunity and really the opportunities are endless. Everything from precision agriculture to robots that sense and cities that intelligently coordinate across vehicles, people, and roads.

And third, I do strongly believe that industry collaboration and open ecosystem are fundamental and key to all of this. As Martin mentioned, it is going to be a team sport, and you’re going to need multiple players to drive these solutions and implement them in a way that’s easy for customers to consume the technology and easy for them to be able to scale the technology.

And with that, Intel, along with the rest of the industry, is truly investing to drive this unified ecosystem that really understands the pain points across many different industries and helps them solve them, and, again, in a way that’s easy to deploy and scale. So I look forward to continuing to working with all sorts of customers and partners on this journey, and of course working with Martin as he analyzes this journey and provides guidance to all of us in the industry.

Christina Cardoza: Absolutely. And Martin, any final thoughts or key takeaways?

Martin Garner: Yeah, thank you, Christina. And Dan, thank you; that was a great job of describing the vision. And it’s a little bit of an analyst cliché to say, “Oh yes, but it’s complicated,” but it actually is complicated, and I think for many companies who are involved, whether in manufacturing or other sectors, that vision—they can kind of see it and they get it, but it feels quite a long way away for them. And so, from our point of view, from the research we’ve done, we think it’s quite key for customers to get started, to do something, even if it’s quite small. And when you do that, pick out a few carefully chosen partners and work with them.

We think at the start you should be fairly ambitious in how you think about all of it—what scale could this all get to—and you won’t get there all in one go. You’ll probably find though that it’s not the technology that’s the limiting factor in how far or fast you can go; it’s probably the organization, and that often is quite a limiting factor—whether that’s budget or other organizational factors. So you will need to invest at least as much time and effort into bringing the organization along with you as you do in working out what technology to use and how much of it and where and so on. But that is the journey, I think. But I don’t think anymore that the technology is the limiting factor, and that plays to Dan’s vision, I think, really quite nicely.

Christina Cardoza: Yeah, I can’t wait to see how, like you guys mentioned, edge computing with a combination of things like AI, computer vision, and IoT not only continues to improve business operations but changes people’s lives. So I encourage all of you to keep up with Intel, follow them, see how they can help—their ecosystem and their partnerships—how they can help you along this journey, as well as all the technological advancements that come out to, like we said, make things a little bit uncomplicated.

And also take a look at the report from CCS Insight on the state of edge computing on insight.tech. There’s a lot more information than what we just covered today on how industries can start tackling this and the technologies involved there. So, with that, I just want to thank you both again for joining the conversation; it’s been very insightful, informative. And I want to thank our listeners for tuning in also. Until next time, this has been the IoT chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Tools of the Trade: Empowering AI Developers to Innovate

Artificial intelligence is disrupting industries, creating opportunities, and enhancing customer experiences. AI developers are at the forefront of this revolution, building the solutions that will shape the future. That’s why it’s so important that they equip themselves with the right tools to bring their AI solutions and computer vision applications to life.

To learn all about the latest trends and technologies developers should keep up with, we talked to Intel’s Bill Pearson, VP, Network & Edge Group, General Manager Solutions Engineering; and Adam Burns, VP IoT, Director Edge Inference Products. Pearson and Burns discuss industry trends and the Intel technology, tools, and programs that make it easier to keep ahead of the game.

What are the industry trends driving the need for IoT, edge, and AI solutions?

Bill Pearson: There are four industry trends that come to mind:

  • The world is becoming more software defined. This is true of networks, applications, and infrastructure.
  • AI has become more pervasive across nearly every use case.
  • The rate of change is rapidly increasing; the way the world is evolving in this space keeps getting faster.
  • There is a need to move toward the simplicity and accessibility that modern AI developers expect.

Think about it as a cloud-native paradigm: All those learnings that developers gathered, they now expect to apply everywhere else.

Take what Apple’s done for the phone. They’ve shown us that any experience should just be delightful. It should be simple and straightforward. And now that type of expectation is entering the development space. When you bundle it all up, we basically need to build software-defined AI use cases that are super simple for people to apply in their daily lives.

Adam Burns: I wholly agree. If you apply those trends to the shift in the market, particularly in the edge IoT world, there’s been a slow burn that has rapidly accelerated over the last few years. In the embedded world that Intel started over 30 years ago, the focus was on reliability. Developers were looking for a combination of software and hardware that’s ultra-reliable and could be used in production for five to 10 years without having to worry about it. Now the shift is to “I want to know everything that’s happening with that device and the system it lives in. I want to know how to make it more efficient.”

This is enabled by all the things Bill talked about in terms of software-defined systems, AI, and how all this is coming together. And that shift from a developer and operator mindset fundamentally changes what people are asking for versus what we traditionally think about embedded computing.

“It’s an exciting time to be a #developer, an exciting time to be part of building these modern solutions that we’re all on this journey to help create” – Bill Pearson, @intel via @insightdottech

What are the challenges developers face when building edge AI applications?

Bill Pearson: The first one is just: How do I get started? There are so many options and a lot of noise in the industry. First, people are asking what the path is to get started on accomplishing their goals and KPIs. Next, they’re looking for the most effective way of achieving what they’re trying to do in their unique use case.

Third, developers want to identify the right solution that’s going to best meet that use case. For example, if they take something from a vendor and it offers a reference solution or a product, is that going to meet the need they’re intending? And for Intel it’s about helping developers and making sure that they not only accomplish their goal but that the solution they choose leads them there.

Part of the solution is the hardware that goes in it. I saved this for last because it’s not the first choice that the developer makes, but it is an important choice. And Intel wants to make it easier for a developer to use the right hardware that’s going to give them the best outcomes, so that they don’t build something that’s too big, consumes too much power, produces too much heat, or doesn’t fit in the physical space—particularly at the edge.

Adam Burns: So say I want to produce a computer vision application to do machine defect detection on an assembly line. There are lots of good classification models out there. For example, our partner Hugging Face has one of the largest model ecosystems in AI, with an array of models or transformers that people can apply to computer vision.

Now that they have a general model that works well, how do they fine-tune it for their specific application? A sophisticated data scientist may want to take a wealth of data and do that training themselves. But application developers may want specialized tools like Intel® Geti, which can take relatively small amounts of data and limited training compute and still produce a very accurate model.

Now how do they deploy it so it’s optimized for the right type of hardware? Developers can use something like Intel® DevCloud, Intel Geti, and the Intel® Distribution of OpenVINO toolkit to compress the model down to a size that’s suitable for the edge. And then they can use DevCloud to determine if it’s best to run on an Intel® Core processor with a GPU, or if it should run on an Intel Atom® processor. Or do they need to move up to Intel® Xeon® because it’s a little heavier workload? These are the types of decisions Bill talked about in terms of finding the right application, tuning it for purpose, and making sure it’s deployed on the right hardware.

We want to guide developers through that complete workflow. We find, especially in AI, that more than 50% of the ideas developers have with those models don’t make it to production. So, for us it’s about easing their path to production and helping them deploy the solution in the most cost-effective way possible.
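To make that compress-and-deploy step concrete, here is a rough sketch of post-training quantization with NNCF followed by compilation for a target device with the OpenVINO Python API. This is not Intel’s prescribed workflow: the model path, calibration data, and device name are placeholders, and Intel Geti and DevCloud are not shown.

```python
# Rough sketch (not Intel's prescribed workflow): compress a trained model with NNCF
# post-training quantization, then compile it for a target device with OpenVINO.
# The model path, calibration data, and device name are placeholders.
import numpy as np
import nncf
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_detector.xml")  # hypothetical FP32 IR from training

# Representative calibration samples; random arrays stand in for real images here.
calibration = nncf.Dataset(
    [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)]
)

quantized = nncf.quantize(model, calibration)    # INT8 model: smaller and edge-friendly
compiled = core.compile_model(quantized, "CPU")  # swap "CPU" for "GPU" or another target
print("Available device targets:", core.available_devices)
```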

What are some other Intel tools that can ease that path?

Bill Pearson: Adam did a nice job of setting this up. When you think about solutions, let’s look at the Intel® Edge Software Hub and all the reference implementations it has. For example, a developer wants to know how to put together something for frictionless checkout. The Edge Software Hub can show them how the different ingredients fit together and the code that helps them put it together, and then they can go play with it, if you will.

You’re seeing that increasingly. We offer Jupyter notebooks as part of the extended OpenVINO toolkit, with hands-on samples that developers can immediately apply and run on DevCloud. So right away they can say, “I’m interested in an AI solution, I can use OpenVINO, and I’ve got these Jupyter notebooks, let me try them right now.”

We put these things together, as Adam was saying, into this workflow where they can visualize the solution they want to create and use the samples and references we provide for how to do it. Then they can immediately go and use our tools to get a feel for how they’re going to apply it and what hardware they’ll need. And then of course they can always use Geti and OpenVINO to figure out how to build that into the product they’re ultimately trying to deploy.

Can you talk a little bit more about the OpenVINO toolkit?

Adam Burns: OpenVINO is about expanding its breadth from a model and network perspective. While we started with a focus on computer vision, we see more multimodal uses of AI. An industrial example is using computer vision applications to understand defects, and audio signatures to listen to a motor or bearing and determine whether a failure may happen on that system.

We see more and more customers interested in using generative AI, combining different types of AI, and we’re expanding OpenVINO to keep up with those types of models. For example, we publish blogs with Hugging Face on stable diffusion performance. We’re working on new open chatbot systems like Dolly and LLaMa to make sure we have the right performance for those. And we just keep focusing on breadth and developer efficiency.

So, we offer a diverse roadmap to meet a diverse set of developer needs. With the OpenVINO 23.0 release and the performance and efficiency cores we have in our CPU roadmap, we’ve automated the usage of those cores for what is most efficient for the system and the workloads running on it.
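As a concrete illustration of running one of the generative models mentioned above through OpenVINO, the sketch below uses the Hugging Face Optimum Intel integration that the two companies describe in their blogs. It is a minimal example rather than a reference implementation; the model ID is just one publicly available checkpoint, and the generation settings are arbitrary.

```python
# Minimal sketch, assuming the Hugging Face Optimum Intel integration described above.
# The model ID is just an example checkpoint; generation settings are arbitrary.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "databricks/dolly-v2-3b"  # example open chatbot model
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # convert to OpenVINO on the fly
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Explain edge AI in one sentence:", max_new_tokens=40)[0]["generated_text"])
```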

How is OpenVINO supporting new trends like generative AI?

Adam Burns: What’s happened from a market perspective is that generative AI is part of every conversation in every enterprise. We’re seeing tremendous demand and generative AI is starting those conversations.

And we’ve been focusing on optimizing OpenVINO through several techniques, starting out with popular NLP-style models and ChatGPT, for example. And we look at optimizations and portability within OpenVINO.

But it isn’t the answer to every problem. Where generative AI has a ton of power is when you start to look at not just the main application but all the integration work. It has the power to understand interfaces and help customers automate integration, system settings, and a number of different things. And it makes operators and developers incredibly effective.

Leading AI developers in the industry are saying things like, “I only write about 20% of my code now because generative AI is doing a lot of the code completion and the setup-type work. I can really focus on the algorithm and the unique places where I’m adding value.” So, it’s an amazing force multiplier to make developers more efficient. It’s been really interesting to see what applications enterprises are coming up with. And from an OpenVINO standpoint, it’s critical that we support that not only in the cloud, but also adapting and fine-tuning these models so they’re purpose-built for the edge.

Bill Pearson: Despite all the years of research, it’s early days and we’re just getting started. As generative AI has broken into public perception, it has created more awareness of AI. It has also spurred more experimentation, and it turns out it’s remarkably good for that. There are a lot of interesting use cases being explored, but I don’t think the story’s been written yet.

What’s interesting for me is that we have two things going on. One is generative AI creates this art of the possible. That story is just one for the imagination, and we’re going to be amazed by where that goes. Practically, many customers today can use that as an opportunity to explore what they really need: the KPIs they’re trying to achieve, the use case they’re trying to implement. But in many cases, we can do that without generative AI, and frankly there are great solutions that are more focused and more cost-effective to help with that. The key is to help our customers find the right solution to the problem they are trying to solve.

For developers who want to learn more, how do they get started?

Bill Pearson: If you’re looking to build solutions, the Intel® Developer Zone is the place to start. You’ll find all the tools that Intel provides, like the Edge Software Hub and OpenVINO. If you’re specifically interested in building edge AI applications, you can go directly to OpenVINO.ai, which is another great starting place.

Adam Burns: I think we live in a world where people want to get hands on and tinker with things. That’s where people can use the Edge Software Hub to really dig into the solutions and understand them.

Is there anything else either of you would like to add to our conversation?

Bill Pearson: For me, there is no better time to be in this industry, with its exciting, rapid pace of change in the marketplace, software-defined everything, and AI becoming so pervasive. It’s an exciting time to be a developer, an exciting time to be part of building these modern solutions that we’re all on this journey to help create.

Adam Burns: Building on what Bill says, it’s incredibly rewarding and satisfying to see what developers and customers and partners are able to do with our technology. Just one example is Royal Brompton Hospital and pediatric lung disease detection. It so happens one of my cousin’s daughters has lung disease. You get these cases where we immediately can see tangible value, whether it’s making sure somebody gets the diagnosis they need faster or making a factory more efficient. Being able to be part of that and enable developers to create what they can is incredibly satisfying and rewarding.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Computer Vision: Retail Checkout Without Barcodes

AI computer vision is taking the retail sector by storm. It’s no wonder: Vision-based solutions benefit every type of stakeholder in the space.

“Retailers can use computer vision technology to solve numerous business problems, from streamlining operations and alleviating staffing shortages to preventing theft,” says Zhang Jiabo, Founder and CEO of Winmore Digit, a specialist in AI-enabled retail solutions. “And customers benefit since computer-vision solutions help reduce checkout wait times and provide engaging digital experiences.”

Best of all, intelligent retail solutions are built with robust AI software development kits (SDKs) and high-performance hardware designed for edge computing. This makes it easier for retailers and systems integrators (SIs) to tackle business challenges efficiently and cost-effectively. It also opens exciting possibilities for integrating multiple retail solutions to maximize the benefits of AI in retail.

AI Product Recognition Without Barcodes

Identifying products at checkout is a prime example of how computer vision can solve common retail pain points. Everyone knows the frustration of long waits at the grocery checkout. A main cause is shoppers who are purchasing items without barcodes—such as fresh produce or bulk dry goods. Clerks must memorize and manually key in product codes and then weigh the items to obtain the correct price, which is time-consuming and prone to error.

Self-checkout tends to be even more cumbersome, with buyers forced to navigate complicated menus or search for their product by name or image. If they make a mistake, such as an incorrectly weighed item, they must wait even longer for staff intervention.

AI-enabled product detection offers a better way to deal with items that don’t have barcodes at the point of sale (POS). The Winmore Barcodeless Goods Identification Kit, for example, pairs AI-powered kiosks and scales to automatically recognize, weigh, and price products—whether or not they have a barcode.

The solution’s AI visual recognition model can identify 2,000+ types of common barcodeless goods with more than 99% accuracy. Recognition is fast: less than 0.2 seconds when using a fully trained model. In addition, the AI computer vision model self-trains with machine learning after deployment, becoming more accurate over time in its image processing. The solution benefits everyone involved: reducing overhead and streamlining processes for retailers, improving customer experiences, and freeing up staff to do more meaningful and enjoyable work.
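To illustrate the general pattern of vision-based product identification (a hypothetical sketch, not Winmore’s code), the snippet below classifies a single frame from a kiosk camera with an OpenVINO image-classification model. The model file, image path, and 224×224 NCHW input layout are all assumptions.

```python
# Hypothetical sketch of the general pattern, not the Winmore kit itself:
# classify one frame from a kiosk camera with an OpenVINO image-classification model.
# The model path, image path, and 224x224 NCHW input layout are assumptions.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("produce_classifier.xml"), "CPU")
output_node = compiled.output(0)

frame = cv2.imread("scale_camera_frame.jpg")             # frame captured at the scale
blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)  # HWC -> CHW
blob = np.expand_dims(blob, 0).astype(np.float32)        # add batch dimension

scores = compiled([blob])[output_node][0]                # assumes a softmax output
top = int(np.argmax(scores))
print(f"Predicted product class {top} ({scores[top]:.0%} confidence)")
```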

This sort of solution would have seemed like science fiction even a decade ago—but thanks to advancements in edge processing hardware and open-source AI SDKs, innovative solutions providers are building and deploying such systems all over the world.

Intel technology plays a central role in bringing Winmore’s solution to market. “The Intel® OpenVINO toolkit is a powerful resource for developing and optimizing AI visual recognition models,” says Zhang Jiabo. “Intel processors are also crucial since they are performant, stable, and especially well-suited to AI computer vision workloads.”

#AI-powered solutions built on robust, easy-to-integrate #technology can be deployed quickly and efficiently at scale. Winmore Digital via @insightdottech

Global Retailer Modernizes Operations

AI-powered solutions built on robust, easy-to-integrate technology can be deployed quickly and efficiently at scale. This offers retail businesses a golden opportunity to modernize brick-and-mortar locations without high capital expenditure.

Winmore’s deployment at a large retailer is a case in point. Its customer is a U.S. Fortune 500 company with more than 10,500 stores worldwide. The retailer was looking to modernize operations at its locations in China. It had taken steps toward digital transformation, but its equipment was already becoming outdated and couldn’t process the large amounts of data needed for vision-based applications.

Winmore worked with the retailer to implement an edge server solution that provides the additional computing power needed to support its advanced product recognition algorithm. To keep costs down, Winmore looked for opportunities to use existing infrastructure whenever possible—for example, by transforming compatible in-store equipment into AI-recognition electronic scales. The result was a modernized POS infrastructure and improved efficiency across multiple retail locations.

“A big advantage of computer vision-based solutions is that they tend to be quite flexible and modular,” says Zhang Jiabo. “Our experience shows that it isn’t always necessary to replace in-store infrastructure completely. Retailers can modernize operations by starting with their existing equipment and add to it as needed.”

Solution Integration Enables Autonomous Operations

Another example of the flexibility and modularity of AI-enabled retail solutions is the way they can be combined with one another. This allows retailers and SIs to deliver integrated solutions that are greater than the sum of their parts.

Winmore, for instance, offers a loss prevention solution that can be combined with its Barcodeless Goods Identification Kit. The platform works in concert with the product ID solution’s intelligent weighing capabilities—and collects video data through in-store cameras, performs behavior analysis at self-checkout kiosks, and watches for abnormal events like missed scans and incorrect barcodes.

When the vision-powered self-checkout and AI loss prevention solutions are deployed together, retail locations can move much closer to autonomous operations than they could with either solution in isolation. This kind of “compound benefit” from stacked AI deployments is typical of digital transformation in the retail space.

“AI-enabled solutions are mutually reinforcing,” says Zhang Jiabo. “The more intelligence one has in a retail location, the more efficient operations become. In addition, when businesses begin to acquire, centralize, and analyze a greater amount of data, they are able to make smarter decisions going forward.”

Growing the AI Ecosystem

There are many benefits of computer vision-based technology for retail stakeholders. And as more businesses adopt AI-powered solutions, the benefits will extend beyond retailers and shoppers.

In the future, the widespread adoption of smart retail will also create valuable opportunities for hardware manufacturers, independent software vendors (ISVs), and SIs.

Winmore provides its solution as an application to hardware partners that want to develop a complete AI recognition solution without getting involved in AI development. The company also offers its computer vision algorithm to ISVs via an SDK—allowing development shops specializing in POS or weighing software to incorporate AI functionality with ease.

This is why computer vision in retail is a win for everyone—both inside the retail sector and beyond.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Continuous Data Protection with Zero Trust

Conducting business in the digital economy incurs significant risks, with would-be cyberattackers and data thieves lurking everywhere. The possibility of a network breach is ever present, and organizations increasingly look to a Zero Trust architecture to shore up defenses.

The concept of Zero Trust represents a fundamental rethinking of cybersecurity: How do you operate safely in IT environments you can’t verify to be secure? Even if intruders penetrate the network, once inside, they find obstacles at every turn through multilevel validation requirements. Zero Trust subjects users, systems, and devices to a validation process with each attempt to access an asset.

“The idea is to trust nothing, verify everything, don’t assume. Re-verify as often as you feel you need to,” says Ken Urquhart, Global Vice President, 5G, at Zscaler, a Zero Trust security solutions vendor.

Cybersecurity teams traditionally have operated on a “siege mentality” where “our IT stuff” is separated from cyberattackers by one or more firewalls, Urquhart says. But the proliferation of devices on-premises, in the cloud, at the edge, and across IoT systems has widened and blurred the idea of a simple perimeter, creating increasing challenges for data security management.

When attackers succeed, they often lurk undetected within an organization’s network for an average of nine months, stealing data and causing disruptions. Companies are left guessing how long intruders creep around their systems or how much data they siphon off. “Today’s organizations have to deflect 100% of those attacks successfully in order to be safe, whereas the attacker only has to be successful once,” Urquhart says. “With a firewall approach, which has been with us since the 1980s, the basic assumption is that if you get inside, you’re pretty much treated as a trusted user.”

Zscaler provides continuous data protection, using encrypted communications, monitoring, and analytics to prevent attackers from seeing what a company does—or even seeing the organization at all. Devices and applications protected by Zscaler technologies are rendered undetectable to other devices on a network. “You can’t attack what you can’t see,” says Urquhart.

The idea is to deliver a seamless experience across the environment, no matter how far it spans across the globe. Zscaler enables clients to focus on their core business rather than constantly working to fend off threats.

Global organizations such as FedEx, British Petroleum, Siemens, and General Electric have turned to Zscaler for years to #secure their sprawling #GlobalNetworks. @zscaler via @insightdottech

Zero Trust Tackles Cybersecurity Challenges

Global organizations such as FedEx, British Petroleum, Siemens, and General Electric have turned to Zscaler for years to secure their sprawling global networks. For example, Siemens reduced its infrastructure costs by 70%, and 80% of employees at General Electric said in a survey that Zscaler makes it easier to do their jobs. And one oil and gas customer facing persistent ransomware issues saw a whopping 3,500% reduction in attacks after implementing Zscaler technology.

Zscaler was formed in 2008 at the very start of the Zero Trust movement. “We operate a secure global communications network that scans over 18 petabytes of data a day, handling 320+ billion transactions—over 20 times the number of Google searches per day—while handling 9+ billion daily incidents and policy violations and interpreting 500+ terabytes of metadata and signals daily using AI/ML,” says Urquhart.

Organizations often struggle to secure their environments against a dynamic threat landscape. As new threats emerge and old ones morph like mutating viruses, cybersecurity teams keep adding tools and protocols to fight them. They also must secure new applications and systems that organizations add to leverage new functionality.

“And over time, this builds up this set of solutions that need different configurations, different patch levels, different patch frequencies, different administration interfaces,” Urquhart says. Before long, the process gets overly complex, potentially creating still more vulnerabilities.

Further complicating things, organizations rely on systems they don’t (or can’t) control, Urquhart says, adding: “We must operate over telco systems we don’t own. We must operate over networks we don’t own. We put data in public clouds we don’t own. We’re given assurances of the security, but very seldom are you invited to do a complete security audit and review line by line every piece of code for every vulnerability—an undertaking no organization is really in any position to carry out completely even if invited to do so. You have to take someone’s word for it.”

With Zero Trust, anytime users, devices, networks, apps, and data attempt to make a connection, they are subjected to multiple, ongoing levels of validation such as multifactor identification, biometrics, and hardware keys. The process also recognizes when users try to log in from different devices, from different locations, or at irregular times, any of which triggers extra validation steps.

Partnerships Are Essential to Data Security Management

Zscaler’s Zero Trust architecture relies on automation and orchestration to monitor and analyze traffic in real time. Data is encrypted and monitored as it traverses multiple clouds and networks across countries and continents. To make it all happen, Zscaler works with multiple partners such as Supermicro, CrowdStrike, and Intel, which deliver different pieces of the technology solution. And Zscaler collects, shares, and receives threat intelligence from 40 partners to isolate, analyze, and create blocking rules.

Powered by Intel® Xeon® Scalable processors, Supermicro hardware supports Zscaler’s edge-to-cloud Security Service Edge (SSE) technology, which inspects all edge and remote worker traffic before routing it to its destination.

Zscaler integrates with CrowdStrike in multiple ways to reduce attack surface, minimize lateral movement of threats, and ensure only trusted and protected devices access authorized applications and data. Zscaler intercepts unknown and malicious files before they reach the end user and can trigger cross-platform containment actions via CrowdStrike.

By leveraging the CrowdStrike device posture score, a Zscaler admin can configure policies to block access from devices with low trust scores, or allow access only via remote browser isolation, preventing data exfiltration while enabling high user productivity. This prevents valuable intellectual property and personally identifiable information from getting out, while stopping ransomware and other malicious payloads from getting in.
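As a purely conceptual sketch of this kind of policy logic (not Zscaler’s implementation), the snippet below shows how a device posture score and contextual signals might feed an allow, isolate, step-up, or deny decision. The field names and thresholds are invented for illustration.

```python
# Conceptual sketch of risk-based access evaluation; field names and thresholds are
# invented for illustration and do not represent Zscaler's implementation.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    device_posture_score: int      # e.g., supplied by an endpoint posture service
    location: str
    mfa_verified: bool
    usual_locations: tuple

def evaluate(req: AccessRequest) -> str:
    """Re-evaluate every request; nothing is implicitly trusted."""
    if not req.mfa_verified:
        return "deny"                        # fail closed without strong identity proof
    if req.device_posture_score < 60:        # hypothetical low-trust threshold
        return "isolate"                     # e.g., allow only via remote browser isolation
    if req.location not in req.usual_locations:
        return "step-up"                     # trigger additional validation
    return "allow"

print(evaluate(AccessRequest("jdoe", 85, "Berlin", True, ("Boston", "Berlin"))))
```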

As an added benefit, Zscaler’s approach reduces network and communications infrastructure, Urquhart says, noting: “All a customer needs to do is connect their offices, remote workers, or data centers to the local internet and Zscaler can take it from there.”

Technology from partners like Intel is key to Zscaler’s Zero Trust approach. As Intel optimizes its hardware, Zscaler is often one of the first companies to adopt it in its ongoing attempts “to find efficiencies everywhere all the time,” Urquhart says.

Zero Trust is currently the most efficient and effective approach to cybersecurity, even if the concept isn’t always well understood, given its defense-in-depth paradigm compared to the simpler “us on the inside, attacker on the outside” firewall metaphor. “Zscaler has been at this for 15 years, and we keep refining our approach by adopting new technology and absorbing customer feedback,” Urquhart says. “We’re trying to tell the world there’s a different way of approaching cybersecurity.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Edge AI and CV Power Supply Chain Transformation

Until recently the idea of a “supply chain” was an unknown for most people. The pandemic changed all that when essential goods became hard to find. Now the term is part of our daily lexicon. But for organizations that need to move goods from point A to point B, the Covid crisis exposed longtime systemic efficiency challenges. Fortunately, the past few years have also seen the rise of advanced technologies like AI and machine learning. These tools are making all the difference, not just to supply chain organizations but also to the expert SIs and VARs that serve them.

With technology advancing so quickly, companies need solution integrators that understand supply chain operations and how to apply the right technologies. One example is Global Solutions Distributor BlueStar, Inc., which partners with companies like supply chain AI and image-recognition solution provider Siena Analytics. Together they bring the software, hardware, services—and, of course, logistics—that VARs need to deploy edge-to-cloud solutions for their supply chain customers.

John Dwinell, Founder and CEO of Siena Analytics, joins us to discuss challenges and opportunities in the supply chain space (Video 1). He talks about the importance of real-time data to smart logistics and tracking, the far-reaching benefits of system visibility, and how a no-code solution can bring the esoteric art of AI right to the level of the domain users who really understand the issues at stake.

Explore how companies are taking supply chain transformations to the next level with John Dwinell, Founder and CEO of Siena Analytics. (Source: insight.tech)

What is the state of supply chain today and what are the current challenges?

As e-commerce has grown, supply chain organizations are under so much pressure to achieve higher throughput, better efficiency, and the ability to scale. Visibility has really been critical to understanding where the bottlenecks are and how to deal with them so that businesses can realize greater performance and precision, and can have better quality overall. Quality and visibility are really big pressure points in supply chain today.

Another common challenge in supply chain is vendor compliance—the incoming quality of product. So having a real, deep understanding of the supply chain is important—again, having the visibility to identify at scale which packages are compliant and why, which packages are not compliant and what’s wrong, and to be able to provide that feedback to the suppliers so they can make improvements.

We started out with IoT capturing data and images in the supply chain, and along the way came the capability to bring AI and AI vision into that IoT solution, which has really helped transform visibility.

Tell us more about those recent technology advancements addressing these challenges.

IoT has really flipped the problem on its head in a lot of ways. Traditionally, there’s enterprise data telling us, for example, “This is the size of a case, and so X number of cases is going to fill a trailer.” And IoT is looking at the cases and saying, “Well, actually this is the size of the case.” It’s real data flowing up. The accuracy and precision of the information is critical to being able to make those adjustments at a reasonable cost.

IoT is feeding back very precise information about the good and the bad as product comes in. And that’s critical to being able to quickly adjust to changes in volume in the supply chain, and to still have the capacity and the throughput to move those through. That real data in real time allows you to make adjustments so that you can allocate resources correctly. There’s a lot of benefit and sustainability there: Obviously, getting those numbers exactly right allows you to plan your supply chain more efficiently.

How are you using artificial intelligence to make efficiencies happen?

AI is a big, big factor here. The volumes are very high, and the speeds are also very high. We are looking at over 50 million cases every day. That’s just a tremendous amount of effort. AI changes the formula for that completely, because we can literally look at all six sides of every case flowing into and out of a warehouse. We can see what condition a case is in, how it’s packaged, how it’s labeled, what’s there and what’s not there. And we can answer the question of how does it meet the standards? How does it meet the supplier requirements? And doing that at scale in real time was just not possible in the past. AI and the platforms that we work on really made it possible.

What are some best practices for implementing these complex technologies like AI?

It’s true that there’s a certain intimidation factor with AI. If you go back only a few years, it was kind of a dark art; you needed a real specialist. There have been a lot of advancements there.

We have a very friendly, no-code environment that takes away the mystique of the training. We’ve simplified things so that we can capture the images, label that data, train new models using the platform, and engage the customer’s domain experts to help with that themselves. They really see these models come together, which is very exciting. And we also train them to recognize that what’s really critical are small variations from one customer to another—that’s exactly what they need to see. The AI model is very adaptable to that, but you need the platform, and you need the tools to make it approachable.

And we talk a lot about the tools, but connecting the domain knowledge with the technology is also really important. So one thing I want to make sure I point out is that Siena is now part of the Peak Technologies family. Peak has really broad experience in supply chain, and really understands customers’ challenges in that realm. So it’s not just the tools, but the breadth of experience that Peak has that we can bring to the customer base to help solve their problems.

How can businesses in this space ensure the privacy and the security of their customers?

Security is really important, especially with IoT. You’re capturing data in real time right there at the edge, but it needs to be brought to the enterprise, or sometimes to the cloud. And those connections from edge to cloud or edge to enterprise need to be secure. So we work very closely with information-security teams. We leverage the technologies and platforms from partners like Intel and Red Hat to be sure that we have a very secure environment.

What are some other partnerships Siena Analytics has, and what has their value been to you?

I think IoT, as exciting as it is, is still evolving. So getting the right solutions, the right technology pulled together is extremely important to us. We work very closely with Intel, we work very closely with Red Hat. We work closely with other partners like Lenovo on the hardware. Splunk is an important partner for us in terms of analytics.

We’ve been able to watch the technology as it evolves, but also to be a part of the conversation to help guide the technology that’s needed. And I can’t thank our partners enough. They’re really critical to making this all work.

What comes next for the supply chain space?

I’ve been in this business for a long time, and I see this as the very beginning. AI in supply chain—really intelligent supply chain—is just beginning, and there are tremendous opportunities for growth. Edge-to-cloud is something else that’s also really bursting onto the scene, and it still has tremendous opportunity to grow.

Any sophisticated supply chain organization needs real-time visibility, and I think that will continue to grow, too. I think we see a lot happening in standards and collaboration as well. Companies work very closely with a vast array of suppliers, so standards are really critical to making the whole supply chain work together and work efficiently.

Are there any final thoughts or key takeaways you’d like to leave us with?

I’d say, be open to the technology. It’s moving quickly, but it can bring a lot of efficiencies. Find partners who understand supply chain and understand the technology—that’s really critical. Someone who can work closely with you on this journey and help bring in the best solution, so that you can have the most intelligent supply chain possible.

Related Content

To learn more about AI-powered supply chain logistics, listen to AI-Powered Supply Chain Logistics: With Siena Analytics and read AI Unlocks Supply Chain Logistics.

 


About BlueStar

BlueStar is the leading global distributor of solutions-based Digital Identification, Mobility, Point-of-Sale, RFID, IoT, AI, AR, M2M, Digital Signage, Networking, Blockchain, and Security technology solutions. BlueStar works exclusively with Value-Added Resellers (VARs) to provide complete solutions, custom configuration offerings, business development, and marketing support. The company brings unequaled expertise to the market, offers award-winning technical support, and is an authorized service center for a growing number of manufacturers. BlueStar is the exclusive distributor for the In-a-Box® Solutions Series, delivering hardware, software, and critical accessories all in one bundle with technology solutions across all verticals, as well as BlueStar’s Hybrid SaaS finance program to provide OPEX/subscription services for hardware, software, and service bundles. For more information, please contact BlueStar at 1-800-354-9776 or visit www.bluestarinc.com.

The Promise of Smart Retail in Today’s Digital World

Despite conventional wisdom, the brick-and-mortar retail space is growing – not shrinking. But to be competitive in an online world, retailers must deliver the shopping experiences consumers have come to expect. This may include frictionless self-checkout, omnichannel purchase power, personalized product information, and informational kiosks that make for an attractive shopping environment.

Deploying these transformative retail use cases depends on a wide range of new technologies and edge-to-cloud solutions. But no single supplier can deliver these solutions alone. This is why companies like VSBLTY, a retail-technology provider, partner with Global Solutions Distributor BlueStar, Inc.

BlueStar works with an ecosystem of hardware, software, and other developers to build solutions designed to help retailers—and the integrators that serve them—streamline operations and grow profitable sales. In addition to offering ready-to-deploy retail solutions, BlueStar backs them with service, support, logistics, and technology expertise—enabling VARs, systems integrators, and ISVs to get to market faster.

Jay Hutton, Co-Founder and CEO of VSBLTY, believes that computer vision could be the answer to the online vs. on-the-ground dilemma. He talks about the future of brick-and-mortar shopping in a digital world, retail’s place in the omnichannel experience, and the benefits of digital signage to the consumer—a.k.a. you and me.

How have physical stores had to compete with online shopping over the past few years?

Physical stores are not dead, nor are they dying. Consumer behavior has changed, for sure, resulting in a certain amount of commerce being fulfilled online. But that doesn’t mean the store is dying; it’s evolving. The pandemic has caused retail to really look at the consumer experience, and to modify it in a way that—I don’t want to say it’s more like online—but it’s more like online. It delivers immediate response and immediate engagement in a way that brands value and consumers value.

There’s this merging of online with offline that requires the store to reinvent itself by embracing digital more, having more consumer engagement, and being more consumer-centric. I think this is a challenge to a lot of traditional retailers, but I’m delighted to report that they’re really stepping up to that challenge.

What do we mean by “Store as a Medium”?

The store has always been a medium for messaging. In the past, that took the form of poster boards or stickers on the floor in front of the Tide detergent. And that was meaningful in the way it redirected brand spend: Brands spend money to drive impressions at the point of sale—at the moment of truth when you are most likely to be influenced by a message.

What’s different in the past two or three years is how all that’s becoming digital. We’re talking about stores embracing digital surfaces: It could be a digital cooler; it could be an endcap that’s got a digital screen embedded in it; it could be shelf strips that are interactive and drive attention, gaze, and engagement at the point of sale.

These are all ways in which stores invest in and embrace turning the store into an advertising medium. We know that the internet is an advertising medium. We know that a billboard on the side of the highway is an advertising medium. We’re now at a point where the store itself is an advertising medium, or channel. When the big brands, like Unilever, Coca-Cola, and PepsiCo, make decisions on which channel to invest in, now the store is a legitimate option because it’s where the consumer makes decisions. It’s where the brand can deliver its narrative and the consumer can be impacted, which is really valuable. This is exactly what Store as a Medium is: intimate engagement with the consumer.

How does Store as a Medium fit into the retail omnichannel experience?

As so often happens, we were waiting for the technology to catch up to the demands of the marketplace. But now we’ve got computer vision and the ability to draw inferences; we’re looking at audiences and deriving meaningful data. How many men, how many women, how many 25-year-olds, how many 35-year-olds? (And this is not privacy data—not data that would make any of us feel creepy—but data that is relevant to a brand.) We all knew that once we cracked that code it would realistically open up the store as a valuable medium, as one of the channels in “omnichannel.” It wasn’t before, and now it is.

Now we’ve got this opportunity to drive really meaningful insights—it’s the data dividend. Not only are brands interested in delivering advertising at the point of sale, they’re interested in lift; they want to sell more stuff. And they’re interested in this unbelievably complex and robust data set that they’ve never had before, one that allows them to segment, to laser-focus, and to understand their customer engagement much more acutely than ever before.

What benefits might customers get from this situation?

If consent is not secured from the customer, there’s still a lot of very focused marketing that can be delivered to that person as a member of a group—a gender group or an age group, for example.

But when there is consent, now maybe there can be a loyalty app aligned with what’s going on in the digital display. If that consenting customer gets personalized advertising, gets choices on brands that they already have a preference for, now it can be more meaningful to them as a consumer. That’s what’s in it for the customer. Now it’s not just a general broadcast—shotgun advertising; now it’s laser focused: “Jay likes Coca-Cola more than he likes Pepsi, so I’m going to drive digital coupons.” Or “I’m going to drive a campaign promotion to him specifically because of his brand affiliation and because of his brand interests.”

What other kinds of retail use cases might there be for these digital-signage solutions?

If there’s one brand category that can afford the investment in digital infrastructure, it’s health and beauty. The margins are out of this world. It has a problem right now getting enough skilled labor to perform the educational role at the point of sale, so health and beauty can invest in the digital infrastructure and the ROI is almost immediate. The adoption that’s happening there is outpacing everything else, because of that ROI. This doesn’t necessarily mean a conflict with a grocery deployment or a big-box deployment; health and beauty can be co-resident, and they can do it together.

How is VSBLTY actually making this happen?

That is perhaps the most complex part of the business model. Generally, retail business runs on a 3%-4% gross margin. So what is the probability that most retailers are interested in a multimillion-dollar capital infrastructure investment for digital overlay? Almost zero—unless you’re Target or Walmart or one of the really big players.

The hypothesis was that if a group of us—called the Store as a Medium Consortium—could get together and solve that problem on behalf of retailers, therefore creating a media infrastructure—capitalizing it, deploying it, managing it, even doing brand-demand creation for the media network—it simplifies the retailer’s value proposition. We said, “You don’t have to do anything. We’ll open up the doors.” VSBLTY relieves the responsibility of investing in the infrastructure from the store.

Our largest deployment is in Latin America, along with Intel and Anheuser-Busch. Together, we’re building a network that will reach 50,000 stores by the end of year four. If we reach that objective—and I firmly believe we will—it will be the largest deployment of a retail-media network on the planet. And if we can do it there—in a 10-square-meter convenience store on the side of a dirt road in Guadalajara—it gives us a leg up on doing it in places with a less challenging environment.

Boston Consulting Group says this will be a $100 billion market by 2025—it’s under $5 billion today. Even if that statement is hyperbolic, we know it’s exploding. This is no longer a whiteboard exercise; it’s “We’re doing this now.”

How has your work with Intel made Store as a Medium possible?

Intel has enormous global reach. If we’re having a particularly difficult time reaching the C-suite of a retailer, Intel can get there because they have a team dedicated to ensuring thought leadership. Of course, at the end of the day, Intel wants to move silicon—and it has proven leadership in delivering powerful, high-capacity processors at the edge. But you would be surprised at the level of expertise there—subject-matter expertise, vertical expertise—and we lean on Intel all the time.

There’s also the legitimacy they give to us. We’re a side-by-side partner, and proud to be the 2022 Intel Channel Partner of the Year. Intel also has a track record of putting its money where its mouth is: When it comes time to really drive that thought leadership, Intel will always be there with us, assisting us wherever we need it. We’re enormously gratified to be in that position.

What types of technology investments does Store as a Medium require?

Everyone has a fantasy that existing infrastructure can be leveraged, therefore lowering the total capital expenditure. But generally speaking, that’s not the case. The Wi-Fi in a Target or a Kohl’s or a Walmart usually sucks. But if you’re driving new content, you need internet access, and we would have to deploy on top of the in-store Wi-Fi to get the bandwidth we’d need. Cameras and networks obviously also exist in retail for loss-prevention purposes. But those are generally up in a ceiling and looking down on heads, not directly at faces.

So for the most part, this is new build. But new build, I should hasten to add, for which we’re removing the capital-expense responsibility from the retailers. So if they deliver us a large enough number of stores, we’ll go and assemble the capital necessary to make it happen.

What can we expect from Store as a Medium going forward?

It’s no longer conjecture; we’re now looking at large-scale deployments. If you ever doubt the veracity of this category, just look to Amazon and Walmart. And if you’re in retail and you’re not afraid of what Amazon and Walmart are doing, then you’re just not paying attention. The challenge now is speed—the speed with which adoption can be secured, deployment can be secured, and revenue can start to happen. It’s a land grab at the moment.

Anything further you’d like to leave us with?

Strap in, because your retail experience is about to change. There’s going to be more for you on your customer journey. If you decide to opt into a loyalty program, it will become profoundly more personalized. And that experience will extend to your home, if you wish it to.

The whole customer journey, that whole engagement modality, begins at bricks and mortar; it cannot begin in an online experience. So the entire experience will change, but brick and mortar is not going anywhere.

Related Content

To learn more about ongoing retail transformations, listen to the podcast Reinventing Smart Stores as a Medium: With VSBLTY and read Retail Digital Signage Gets an Upgrade with Computer Vision.

 


About BlueStar

BlueStar is the leading global distributor of solutions-based Digital Identification, Mobility, Point-of-Sale, RFID, IoT, AI, AR, M2M, Digital Signage, Networking, Blockchain, and Security technology solutions. BlueStar works exclusively with Value-Added Resellers (VARs) to provide complete solutions, custom configuration offerings, business development, and marketing support. The company brings unequaled expertise to the market, offers award-winning technical support, and is an authorized service center for a growing number of manufacturers. BlueStar is the exclusive distributor for the In-a-Box® Solutions Series, delivering hardware, software, and critical accessories all in one bundle with technology solutions across all verticals, as well as BlueStar’s Hybrid SaaS finance program to provide OPEX/subscription services for hardware, software, and service bundles. For more information, please contact BlueStar at 1-800-354-9776 or visit www.bluestarinc.com.

VARs Discover New Opportunities with Edge AI in Retail

Retailers that take advantage of transformative technologies in their physical stores gain a competitive edge. Computer vision and edge AI-backed solutions support a wide range of use cases, from self-service checkout and loss prevention to automated restocking and personalized shopping. And behind the scenes, automated data analysis means streamlined operations and better supply chain management.

But on the other hand, many businesses are still wary of AI solutions—even though they recognize the potential benefits. And retail VARs and systems integrators are often challenged in implementing these solutions.

“There are several reasons why retailers are hesitant to adopt AI solutions, but the biggest factors by far are the lack of in-house technical skills needed to implement them—as well as plain old fear of the unknown,” says Liangyan Li, Head of Global Sales at Hanshow, a provider of digital store solutions for the retail sector.

There is justification for such concerns, because implementing AI in a retail setting entails significant technological hurdles. And this is where Global Solutions Distributor BlueStar, Inc. comes in, offering ready-to-deploy retail solutions backed by service, support, logistics, and AI expertise.

It’s good news for retailers—and for retail systems integrators—as a new era of ready-to-deploy edge AI solutions has already begun. Built atop next-generation processors and using software tools designed for edge computing, BlueStar-backed solutions offer simple, effective implementations to would-be adopters and the SIs that serve them.

Edge AI Solutions Engineered for Retail

What is the key to building solutions that meet the needs of retail businesses? The combination of industry-specific AI know-how and enterprise-tier technologies designed for ease of deployment and performance at the edge.

Hanshow’s hardware and software technology stack, combined with its experience in developing AI applications for retail, enables a flexible, user-friendly solution—one that addresses the traditional concerns of business decision-makers in the sector. Here, Li credits Intel with helping to bring Hanshow’s solution to market.

“Intel is unmatched as a platform for stable, reliable edge computing—particularly when attempting to develop a comprehensive, seamless solution for the end user,” says Li.

Hanshow’s solution incorporates a number of different Intel and partner technologies:

  • Intel® Core Processors handle heavy edge workloads and image processing tasks
  • Intel® Media SDK gives developers access to media workflows and video processing technologies—shortening time to market
  • The Intel® OpenVINO Toolkit speeds AI application development and helps optimize visual processing algorithms
  • Microsoft Azure Cognitive Services allows developers to build sophisticated AI algorithms even if they don’t have machine learning experience

On a practical level, Hanshow’s Intel technology-based solutions have the added benefit of being relatively easy to implement in a working environment—and can thus bring about dramatic improvements to operational efficiency in a very short time.

Smarter Shelves from Europe to Japan

Hanshow’s smart shelf management deployments in Europe and Japan are a case in point.

Despite the geographical distance, both of Hanshow’s retail customers faced similar challenges: a need to gain greater insight into what was going on in their stores to improve efficiency and boost sales.

The European business, a large supermarket chain with a global footprint, was facing frequent shortages of fresh food in its stores. The main cause of this problem was the inability of employees to identify out-of-stock (OOS) products and take steps to replenish them in a timely fashion.

The Japanese company, a large chain of department stores, was having difficulty identifying the habits and preferences of its shoppers, hampering the business’s marketing efforts.

Hanshow implemented a comprehensive AI solution at both companies. In the supermarkets, it used computer vision cameras to take images of fresh food stacks to provide near real-time data on stock. In the department stores, the company implemented a digital shelf solution that encompassed marketing, OOS management, human-product interaction, customer demand analysis, and smart advertising.

The results were dramatic. The supermarkets saw their average OOS duration drop from 2.5 hours to 1.5 hours—a 40% improvement—while also eliminating the need for employees to perform daily manual inspections. The department store chain, for its part, saw an immediate effect on sales: an increase of nearly 20% in sales of active products when single-product recommendations were implemented in digital shelf areas.

The Transformation of Global Retail

The promise of AI in the retail sector is not new. But the emergence of comprehensive, easy-to-deploy solutions will turn that promise into a reality.

It’s hard to overstate the effect this will have—especially as adoption increases, and systems integrators and technology companies begin to develop the retail AI ecosystem in earnest. Expect to see more complex computing workloads, multi-architecture applications, and new benchmarks for operational efficiency and consumer experience.

This is why Li talks in terms of the “transformation of the global retail market.”

“AI helps retailers provide consumers with more personalized services, accelerates business operations and commodity circulation, and delivers more valuable data insights,” he says. “It will allow retailers to reshape the relationship between people, products, and markets.”

And BlueStar is the key for VARs and systems integrators to grow their businesses, as part of this global retail market transformation.

 


About BlueStar

BlueStar is the leading global distributor of solutions-based Digital Identification, Mobility, Point-of-Sale, RFID, IoT, AI, AR, M2M, Digital Signage, Networking, Blockchain, and Security technology solutions. BlueStar works exclusively with Value-Added Resellers (VARs) to provide complete solutions, custom configuration offerings, business development, and marketing support. The company brings unequaled expertise to the market, offers award-winning technical support, and is an authorized service center for a growing number of manufacturers. BlueStar is the exclusive distributor for the In-a-Box® Solutions Series, delivering hardware, software, and critical accessories all in one bundle with technology solutions across all verticals, as well as BlueStar’s Hybrid SaaS finance program to provide OPEX/subscription services for hardware, software, and service bundles. For more information, please contact BlueStar at 1-800-354-9776 or visit www.bluestarinc.com.

Low-Code AI Eases Computer Vision Application Development

Identifying potholes along thousands of miles of roadway. Stocking shelves and rearranging inventory. Spotting minuscule product defects that a factory inspector might miss. These are just a few of the things today’s AI and computer vision systems can do. As capabilities improve and costs decrease, adoption is rapidly expanding across industries.

Once in place, a computer vision system can save humans countless hours of toil, as well as reduce errors and improve safety. But developing a solution can be painstaking and time-consuming. Humans often play an outsize role in training AI algorithms to distinguish a Coke can from a water bottle, or a shadow from a break in the asphalt. But as the technology evolves, solution providers are finding new ways to make training more efficient and creating systems easier for nontechnical users to operate.

Solving Problems with Computer Vision and Edge AI Technology

Computer vision applications are as varied as the industries and organizations they serve, but they share two common goals. The first is saving time and money by automating tedious manual tasks with machine learning. The second is creating a growing repository of knowledge from large amounts of data that will shed light on operations and lead to further improvements over time.

“We start with a base system, then we work with our clients to specialize it for their needs,” says Paul Baclace, Chief AI Architect at ICURO, a company that builds AI and computer vision solutions for deployment on robots, drones, and in the cloud.

For example, for the U.S. Department of Transportation, ICURO created a successful proof-of-concept drone that uses computer vision cameras to detect and relay information about road cracks and other highway defects in real time. Normally, a drone’s camera images aren’t processed until after the flight.

“When you check the images later, some may be blurry, or the contrast might be terrible. Then you have to go back and redo them, and that’s very expensive. By processing them in real time, you have fewer errors,” Baclace says.

To save warehouse and retail workers time and labor, ICURO developed the Mobile Robot AI Platform. It navigates to specified objects, grabs them, and loads them onto transport robots for packing and shipping—all without human intervention. The robot can also integrate with factory machines and sensors to detect and resolve production problems. “It has a lower error rate than humans, who can get tired and injured,” Baclace explains.

The robot uses Intel® RealSense cameras and lidar—light detection and ranging—to navigate. Another RealSense camera, enclosed in its “hand,” enables it to grasp the correct item and load it into a basket before heading off to its next job (Video 1).

Video 1. The ICURO mobile picking robot uses Intel® RealSense cameras and lidar to navigate to specified items, grasp them, and deliver them to a transport robot for packing and shipping. (Source: ICURO)
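For developers curious what driving a RealSense camera looks like in practice, here is a minimal sketch using the pyrealsense2 SDK. The stream settings are placeholders, and this is illustrative only, not ICURO’s software.

```python
# Minimal sketch of reading depth and color frames from an Intel RealSense camera
# with the pyrealsense2 SDK; stream settings are placeholders, not ICURO's code.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    # Distance in meters to the center pixel: the kind of signal a grasping
    # controller might combine with object detections from the color stream.
    print(f"Center-pixel distance: {depth.get_distance(320, 240):.2f} m")
finally:
    pipeline.stop()
```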

As companies become more comfortable using automation, computer vision solutions are expanding—and becoming more visible. For example, ICURO created a picking robot for a cashierless retail store that gathers customers’ shopping list items from a storeroom and delivers them to the front counter.

As companies become more comfortable using automation, #ComputerVision solutions are expanding—and becoming more visible. @icuro_ai via @insightdottech

Creating Cutting-Edge Computer Vision Solutions

To develop its robot-controlling computer vision applications, ICURO programs and tests them in the Intel® Developer Cloud and uses the Intel® OpenVINO toolkit to optimize them for best performance.

“Without Intel’s tools, we could look at the specs we need and estimate, but there would be some guesswork involved. This way, we can check the performance and say, ‘OK, that’s what we need to put on this robot,’” says Baclace.

ICURO doesn’t make hardware, but Intel software tools help the company determine which devices would work best for its mobile software applications. Most can run on compact and lightweight edge CPUs, such as the Intel® NUC.
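The sketch below shows one rough way to run that kind of performance check with the OpenVINO Python API, timing a model on whichever device targets are present. The model path, input shape, and device list are placeholders; OpenVINO’s benchmark_app tool performs this measurement far more thoroughly.

```python
# Rough sketch of comparing per-inference latency across OpenVINO device targets.
# The model path, the 1x3x224x224 input shape, and the device list are placeholders;
# OpenVINO's benchmark_app tool performs this kind of measurement far more thoroughly.
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("robot_perception.xml")
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape

for device in ["CPU", "GPU"]:
    if device not in core.available_devices:
        continue                                            # skip targets not present
    compiled = core.compile_model(model, device)
    start = time.perf_counter()
    for _ in range(100):
        compiled([dummy])
    print(f"{device}: {(time.perf_counter() - start) / 100 * 1000:.1f} ms per inference")
```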

Faster Deployment and No-Code Operation

Before computer vision solutions can be implemented, their algorithms must be trained to recognize customer images, which can range from stop signs, vehicles, and pedestrians, to different goods with similar-sized packaging. Usually, much of the training is done by humans, who use online tools to outline and label images of all of the objects a robot might encounter. After all the images have been annotated, they are fed to the algorithms, whose performance is tested, corrected, and validated before deployment.

To speed up this painstaking process, ICURO experiments with a newer method known as active learning, in which each image is annotated and fed to algorithms right away. If they interpret it correctly, a domain expert can mark the image as validated, which adds to a growing database that guides the algorithms in making future decisions. The learn-as-you-go method speeds training and saves personnel from doing annotations that may be unnecessary. “With the push of a button, you increase the dataset. Training and feedback go from days to minutes,” Baclace says.
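The loop below is a hypothetical sketch of that learn-as-you-go idea. The model, expert_review, and fine_tune callables stand in for whatever tooling a team actually uses; it is meant to show the shape of the feedback cycle, not a specific product.

```python
# Hypothetical sketch of the learn-as-you-go (active learning) cycle described above.
# `model`, `expert_review`, and `fine_tune` are placeholders, not a specific toolkit.

def active_learning_round(model, unlabeled_images, expert_review, fine_tune, dataset):
    """One round: the model proposes labels, a domain expert validates or corrects
    them, validated samples grow the dataset, and the model is fine-tuned."""
    for image in unlabeled_images:
        proposal = model.predict(image)          # model proposes an annotation
        label = expert_review(image, proposal)   # expert accepts or corrects it
        dataset.append((image, label))           # validated sample joins the dataset
    fine_tune(model, dataset)                    # feedback in minutes instead of days
    return model, dataset
```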

In addition, ICURO is working on solutions that will allow its customers to make changes to their computer vision models, training the software to recognize new products or new locations without having to write code. The company also regularly hones its algorithms to maintain a competitive edge in the fast-moving world of AI and computer vision.

“Neural networks keep changing and improving their accuracy every six months to a year, and we like to use the latest ones,” Baclace says. “This is a very exciting time for deep learning systems.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.