AI Unlocks Supply Chain Logistics

Most of us think of the supply chain as a single colossal entity—the Ever Given cargo ship stuck in the Suez Canal comes readily to mind—when it is really made up of many elements. Logistics, the minutiae of moving products, is a significant component of the supply chain.

Headaches in shipping logistics abound. Packages might be delayed or never reach their final destination for a variety of reasons. In 1990, more than 20 containers of Nike shoes fell off a cargo ship traveling from South Korea to the United States. But in the real-life trenches of shipping logistics, packages get delayed for reasons that are a lot less splashy. A smudged barcode or a label that simply falls off a package can hold up delivery—and these seemingly trivial problems add up to a major headache for enterprises.

The Automation Challenge in Supply Chain Analytics

“The biggest strains on today’s logistics are related to package volume and velocity,” says John Dwinell, Founder and CEO of Siena Analytics, a company that delivers supply chain AI and image recognition for high-volume logistics.

A surge in the number of packages passing through distribution centers and warehouses is especially challenging, because it comes at a time when consumer demand for speedy delivery has also increased. Nearly a third of U.S. consumers have higher expectations for faster shipping since the beginning of the pandemic, according to a 2021 survey.

These dual factors, coupled with ongoing labor shortages, make the case for automation and digital transformation in distribution centers and warehouses. “Unfortunately, attempts at automation are hampered by package quality problems,” Dwinell says.

For example, barcodes that need to be read might be hidden under plastic or missing altogether. Poor quality leads to inconsistencies, which make automation more difficult. In an automated distribution center, problem packages get routed to a “hospital lane” where workers have to diagnose and troubleshoot the issue. These hiccups tie up worker resources and waste valuable time, neither of which enterprises can afford.


Solving Automation Challenges With AI

Siena Analytics tackles everything from parcel quality-related obstacles to logistics automation by using sensors in scanning tunnels. Cameras capture images of parcels as they enter and travel around the distribution center. By using AI models to analyze the pictures, the platform troubleshoots problems in real time and delivers long-term parcel intelligence that enterprises can act on.

At the edge, the Siena Analytics solution automates troubleshooting tasks that might otherwise have been moved to the hospital lane. For example, an oversized package might need a specific type of shipping label. Sensors can identify the product size and alert a machine to print the right label. Similarly, if a label falls off, cameras can identify the package through some other distinguishing feature, track the parcel in its cache of earlier photographs, and generate a new label.

AI can also deliver pattern intelligence to detect inconsistencies faster. For example, a vendor found to consistently mislabel packages can be enrolled in a vendor compliance program. Package intelligence can feed a digital twin of the distribution center, which gives visibility at scale. Enterprises can pinpoint bottlenecks in specific machines and sorting tunnels more easily and set up the system to issue alerts when necessary. “You have smarter insights into the good, the bad, and the ugly of what’s happening in the building,” Dwinell says.

Low-Code Platform Eases AI Training

Siena helps companies set up their own AI-driven package intelligence solutions on the back of its low-code Siena Insights platform. Enterprises, Dwinell points out, understand their domain well but lack the tools and know-how to capture the right data for insights. Siena has taken the really intimidating parts of AI and automated those workflows.

Company experts simply lean on their domain expertise to label the data from images that the Siena solution captures, and train and build custom AI solutions. “We have a platform that allows you to train AI models without having to be an expert data scientist yourself,” Dwinell says.

Siena Analytics relies on the Intel® Edge industrial platform to orchestrate the volumes of data, and on the Intel® OpenVINO Toolkit, which “can adapt to whatever hardware is available,” says Dwinell. “OpenVINO allows us to have a common and scalable efficient platform for inferences at the edge.”
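
For readers who want a concrete picture, here is a minimal sketch of what OpenVINO inference at the edge can look like in Python (2022.x runtime API). The model file and image are hypothetical placeholders, not Siena’s production assets:

```python
# A minimal OpenVINO edge-inference sketch. "parcel_detector.xml" and
# "parcel.jpg" are hypothetical placeholders, not Siena's actual models.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("parcel_detector.xml")   # pre-converted IR model
compiled = core.compile_model(model, "AUTO")     # "AUTO" lets the runtime pick
                                                 # whatever hardware is available

frame = cv2.imread("parcel.jpg")
blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)  # HWC -> CHW layout
blob = np.expand_dims(blob, 0).astype(np.float32)        # add a batch dimension

request = compiled.create_infer_request()
request.infer([blob])                            # synchronous inference
scores = request.get_output_tensor(0).data       # e.g. per-class confidence
print(scores.shape)
```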

By starting with simple data analytics solutions, Siena helps customers bridge the OT-IT divide. Because its solution delivers immediate results that boost the bottom line, systems integrators can make the case for its use to the C-suite. “SIs are particularly impressed by data velocity and our ability to show real results,” Dwinell says.

The Future of Supply Chain Logistics

Expect more standardization of processes—labeling, information in barcodes—in the near future. Robotics-based solutions are also going to mature and become more usable in warehouses, Dwinell predicts. AI is a transformative technology that will continue to reshape logistics. Already, it is streamlining processes and ironing out inefficiencies in the wider supply chain.

Too often, package information comes from the shippers and does not align with reality. An AI-based solution like Siena Insights flips that approach on its head. Discrepancies between what shippers say they have done and what the packages actually show can easily be found and fixed. When data comes from the sensors, it’s live and uncontested; it’s real. It can be matched and corrected.

“Siena is leveraging the strengths that AI algorithms bring to a common set of logistics problems,” Dwinell says. And when every second in the distribution center counts, every bottleneck resolved is a logistics win.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Intel® Vision 2022: Your Roadmap for Edge AI Innovation

From healthcare to retail, factories to utilities, ubiquitous compute connected from edge to cloud has every industry on “the precipice of a digital renaissance.” Speaking from Intel® Vision 2022, CEO Pat Gelsinger explained that the renaissance has commenced with the proliferation of AI.

“Every business is becoming a technology business,” he said. “If you’re not applying AI to every one of your business processes, you’re falling behind.”

While Vision 2022 concluded on May 11, there’s still time to join thousands of business and technology stakeholders staying ahead of the digital transformation curve: Select content from the event is available on demand now for free when you register for a digital pass.

Registering for a digital pass gets you access to each day’s keynote session and its product unveilings.

In addition, you’ll see conference sessions that reveal how Intel and its partners use pervasive connectivity, computing, and edge-to-cloud infrastructure to help users improve business outcomes through AI at the edge. Full of technical strategies and hands-on demonstrations, they offer practical implementations of what Gelsinger describes as an “insatiable demand for compute and this increasing drive to deploy… AI inference solutions.”

Here’s a session guide to help make the most of your streaming experience.

Enable Ubiquitous Compute for AI at the Edge with OpenVINO

Computer vision applications have undoubtedly been the first killer apps for AI at the edge. So you shouldn’t be surprised to find the Intel® OpenVINO toolkit at the center of many of the on-demand presentations.


For instance, in “Accelerating the Deployment of Visual AI at the Edge,” experts from Splunk and Scalers AI show how the toolkit is leveraged in the new Intel Video AI Box to bring real-time video analytics to traffic analysis, quality assurance, retail digital signage, and other use cases.

You can also watch how TIBCO paired the AI model optimizer with other open-source components like the EdgeX interoperability framework in Project Air—which uses the stack to provision IoT devices like multi-stream CV cameras with minimal code.

Plenty of Intel partner demos are also available for viewing in the virtual Ecosystem Technology Showcase. One worth checking out is CR2O’s demonstration of ENTERA, a hyper-scalable, “privacy-aware” video analytics AI-SaaS platform rooted in OpenVINO.

Scale Machine Intelligence from Cloud to Edge and Back

But AI at the edge is only half of the OpenVINO story. In fact, it’s only half of the AI story. Distributing AI across the edge-to-cloud continuum requires a complement of enabling technologies that transcend what any one company or technology can achieve on its own.

Discover collaborations that deliver verticalized end-to-end infrastructure in sessions like “Breaking Down the Data Deluge in Healthcare Using the Intelligent Edge,” where a panel from Intel, Medtronic, and Caresyntax discusses how 5G, IoT, and AI are converging to redefine the medical field.

Another, “Unlock the Potential of Robotics in Retail, Industrial & Hospitality,” highlights a compute and connectivity partnership between AAEON Electronics and Siasun Robotics that’s driving rapid deployment of fixed, mobile, and collaborative robots.

Other can’t-miss sessions on edge-clouds for industry include digital transformation leader Capgemini’s “How Will You Build Smart Cities Infrastructure?” and “Grid Modernization and Sustainability,” presented by Prith Khajuria, Intel’s Director of Energy and Sustainability, who introduces an AI-powered foundation for bidirectional power grids of the future.

You can use the filter function when browsing the session catalog to find industry-specific content that fits your business needs.

Tie It All Together with Pervasive Connectivity

Of course, none of this is possible without persistent and pervasive connectivity that ties the training and intelligence of the cloud to endpoints and inferencing platforms at the edge.

Leading Industrial IoT organizations Red Hat, Audi, and Georgia Pacific share their experiences successfully implementing secure private networks in “The Industrial Edge: Digital Transformation Journeys at the Nexus of Compute & Connectivity.”

And Nokia VP Christopher Jones continues the trend by discussing how private, on-premises wireless edge networks help accelerate the deployment of connected operational technology across enterprises, government, and municipalities in “Creating Business Value Through Industry 4.0 Digitalization and 5G.”

Four Superpowers Drive Innovation: What’s Yours?

Together, Gelsinger sees ubiquitous compute, pervasive connectivity, cloud-to-edge infrastructure, and paradigm-shifting AI as the “four superpowers” of innovation. And they come at a “strategic inflection point, a moment in time where things can go incredibly well or incredibly poorly.”

The sessions and demonstrations outlined above are just a fraction of the Vision 2022 content available free on demand to help you chart the right course for your digital transformation journey. And you better get going, because as Gelsinger noted, “Transformation is inevitable; it applies to all.”

Start shaping the inflection curve today by registering for the Intel® Vision 2022 digital pass.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

The Future of Access Control with IoT Security

Protection from internal and external security threats can feel like a never-ending challenge. Deploying safety measures to prevent unauthorized access across dynamic environments with a diversity of platforms can be complex, costly, and labor-intensive.

As a result, organizations look to install stronger access control that can seamlessly integrate with existing systems and processes, and support greater operational efficiency. Deploying IoT security solutions makes that possible.

One example is Vienna International Airport—the biggest aviation hub for travel between Central and Eastern Europe—serving up to 31 million passengers annually. Like most airports, Vienna International goes through continual expansion, construction, and maintenance. And with approximately 80,000 employees, subcontractors, and annual visitors, secure and centralized access control becomes essential.

The airport needed to upgrade its outdated system over a one-year transition period—without disrupting operations and workflows—not an easy undertaking. To address these challenges, the airport operations team turned to FAST Systems, a pioneer in the field of digital safety and security technologies.

“The task was to provide a solution that could manage the access control for 30,000 active identities, including internal airport employees and external personnel doing construction work or services,” says Carsten Tschritter, CTO at FAST Systems. “Our automation methodology involved aligning with processes and workflows, and facilitating the automation of most access security elements without interrupting airport operations.”

To accomplish this, the company deployed its Flow4Secure Process Automation and Workflow Management platform. This provided a customized and fully integrated Identity and Access Management Solution (IDMS) that complies with EU General Data Protection Regulation (GDPR) and Aviation Security Regulations.

AI-Powered Access Control

The AI-powered platform Flow4Secure was developed after FAST Systems saw organizations struggling with a range of security issues. Most prominently: bringing multiple access control operating systems and sensitive personal data together.

Managing identities for an airport requires insights into tens of thousands of personal data records and control of thousands of doors across the facilities. This challenge can overwhelm users and make it extremely difficult to maintain regulatory compliance and internal Standard Operating Procedures (SOPs).

The ability to manage three different Access Control Systems (ACS) through one middleware application enables security operators to use workflow-driven applications to process ID cards and vehicle badges with automated assignment to external companies and internal departments. The solution automatically allocates access areas during the application process and activates access rights when issuing ID cards in all connected ACS.

“It was clear we wanted to have a vendor-agnostic integration platform that connects all the unconnected systems,” Tschritter explains. “By merging all Access Control Systems into one platform, Vienna airport is able to use and analyze personal information to provide better security by applying efficient and well-defined workflows.”


The core idea behind the solution is to simplify operations for the end user, external subcontractors, and visitors by offering intuitive web portals whose control elements are adapted to the tasks of the respective user groups. With its simplified user interface, Flow4Secure makes it easy to see all the relevant information about identities and ACS data points in one place to quickly detect and address any issues.

The Flow4Secure system collects, combines, and correlates data from various disparate systems and devices such as card readers, printers, scanners, cameras, and more. The open and scalable system architecture allows continuous development of new applications supporting Vienna airport’s desire to aggregate and digitize all operations going forward.

“It’s a tough job to manage companies, organization structures, orders, personal data, and vehicles in a hierarchy, which requires defining the allocation of access rights,” says Tschritter. “Our task was to provide a user-friendly and process-driven interface that included all necessary information to ID office staff for 100% controlled operations.”

In the case of the airport, the Flow4Secure solution provides a high-availability platform with staging for production, testing, training, and integration purposes. The integration of Access Control and all relevant systems can run in a test environment before being promoted to production systems.

The Power of Partnerships Uplevels Security Standards

All of this is made possible through the company’s powerful partnerships with Intel® and Dell. “The relationship with Dell started when Intel was conducting a trial with a railway station in Berlin,” says Bernd Drescher, Vice President of Sales at FAST Systems. “The Dell team was quite enthusiastic to see how we were able to address this customer’s unique requirements. Now we are in the final process of setting up new Dell appliances that integrate Flow4Secure for process-driven solutions like Yard Management, Asset Tracking, and Visitor Management.”

On the technical side, of course, it’s essential to have a reliable hardware platform. “For us, it comes from this constellation of working together with Dell on the graphics side, our 3D GIS map system,” adds Tschritter. “And on the CPU side, it’s Intel, where we have access to R&D and software performance testing resources as needed.”

The Future of Access Security

Having integrated security systems like Flow4Secure—built on Dell, powered by Intel—will become more important as IoT adoption continues, Drescher explains: “The needs for integration platforms are growing at the same pace as the number of connected devices.”

Organizations need to move toward access control system integration and cross-platform interoperability if they want to be able to monitor and improve their operations easily and effectively.

Tschritter predicts that with the rise of 5G, higher bandwidth, and the edge, this will only become a bigger advantage in the future.

 

This article was edited by Leila Escandar, Editorial Strategist for insight.tech.

This article was originally published on June 2, 2022.

The AI Journey: Why You Should Pack OpenShift and OpenVINO™

AI can be an intimidating field to get into, and there is a lot that goes into deploying an AI application. But if you don’t choose the right tools, it can be even more difficult than it needs to be. Luckily, the work that Intel® and Red Hat are doing is easing the burden for businesses and developers.

We’ll talk about some of the right ways to deploy AI apps with experts Audrey Reznik, Senior Principal Software Engineer for the enterprise open-source software solution provider Red Hat, and Ryan Loney, Product Manager for OpenVINO Developer Tools at Intel®. They’ll discuss machine learning and natural language processing; using the OpenVINO AI toolkit with Red Hat OpenShift; and the life cycle of an intelligent application.

Why are AI and machine learning becoming vital tools for businesses?

Ryan Loney: Everything today has some intelligence embedded into it. So AI is being integrated into every industry—industrial, healthcare, agriculture, retail. They’re all starting to leverage the software and the algorithms for improving efficiency. And we’re only at the beginning of this era of using automation and intelligence in applications.

We’re also seeing a lot of companies—Intel partners—who are starting to leverage these tools to assist humans in doing their jobs. For example, a technician analyzing an X-ray scan or an ultrasound. And, in factories, using cameras to detect if there’s something wrong, then flagging it and having somebody review it.

And we’ve started to even optimize workloads for speech synthesis, for natural language processing, which is a new area for OpenVINO. If you go to an ATM and have it read your bank balance back to you out loud, that’s something that’s starting to leverage AI. It’s really embedded in everything we do.

How are AI and ML starting to be more broadly adopted across industries?

Audrey Reznik: When we look at how AI and ML can be deployed across the industry, we have to look at two scenarios.

Sometimes there’s a lot of data gravity involved in an environment and data cannot be moved off-prem into the cloud, such as with defense systems or government—they prefer to have their data on-prem. So we see a lot of AI/ML deployed that way. Typically, people are looking to a platform that will have MLOps capability, and they’re looking for something that’s going to help them with data engineering, with model development, training/testing the deployment, and then monitoring the model.

If there aren’t particular data security issues, they tend to move a lot of their MLOps creation and delivery/deployment to the cloud. In that case they’re going to look for a cloud service platform that has MLOps available so that they can look at, again, curating their data, creating models, training and testing them, deploying them, and monitoring and retraining those models.


In both instances what people are really looking for is something easy to use—a platform that’s easy for data scientists, data engineers, and application developers to use so that they can collaborate. And the collaboration then drives some of the innovation.

Increasingly, we’re seeing people use both scenarios, so we have what we call a hybrid cloud situation, or a hybrid platform.

What are some of the biggest challenges with deploying AI apps?

Ryan Loney: One of the biggest challenges is access to data. When you’re thinking about creating or training a model for an intelligent application you need a lot of data. And you have to factor in having a secure enclave where you can get that data and train that data. You can’t necessarily send it to a public cloud—or if you do, you need to do it in a way that’s secure.

One of the things I’m really impressed with from Red Hat and from OpenShift is their approach to the hybrid cloud. You can have on-prem managed OpenShift or you can run it in a public cloud—and still really give the customer the ability to keep their data where they want to keep it in order to address security and privacy concerns.

Another challenge for many businesses is that when they’re trying to scale, they have to have an infrastructure that can increase exponentially when it needs to. That’s really where I think Red Hat comes in—offering this managed service so that they can focus on getting the developers and data scientists access to the tools that they would use on their own outside of the enterprise environment, and making it just as easy to use inside the enterprise environment.

Let’s talk about the changes that went into the OpenVINO 2022.1 release.

Ryan Loney: This was the most substantial change of features since we started in 2018, and it was really driven by customer needs. One key change is that we added hardware plugins, or device plugins. We’ve also recently launched discrete graphics, so GPUs can be used for deep-learning inference. Customers can use features like automatic batching and just let OpenVINO automatically determine the batch size for them.

We’ve also started to expand to natural language processing, as I mentioned before. So if you ask a chatbot a question: “What is my bank balance?” And then you ask it a second question: “How do I open an account?” both of those questions have different sizes—the number of letters and number of words in the sentence. OpenVINO can handle that under the hood and automatically adjust the input.
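
As a rough sketch of how those capabilities surface in the 2022.1 Python API (the model files here are assumptions, and the dynamic-shape call presumes a single-input model):

```python
from openvino.runtime import Core

core = Core()

# Automatic batching: a THROUGHPUT performance hint lets the runtime group
# incoming requests and pick the batch size on its own.
vision = core.read_model("detector.xml")          # hypothetical vision model
vision_exec = core.compile_model(
    vision, "GPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

# Dynamic shapes: mark the sequence dimension as dynamic (-1) so questions
# with different word counts can run through one compiled model.
qa = core.read_model("question_answering.xml")    # hypothetical single-input model
qa.reshape([1, -1])                               # batch fixed at 1, length open
qa_exec = core.compile_model(qa, "CPU")
```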

What has been Red Hat’s experience using OpenVINO?

Audrey Reznik: Before OpenVINO came along, a lot of processing would have been done on hardware, which can be expensive. The advent of OpenVINO changed the paradigm in terms of optimizing a model, and in terms of quantization.

I’ll speak to optimization first. Why use a GPU if you can say, “You know what? I don’t need all the different frames in this video in order to get an idea of what my model may be looking at.” Maybe my model is looking at a pipe in the field and we’re just checking to make sure that nothing is wrong with it. Why not just reduce some of those frames without impacting the ability of your model to perform? With OpenVINO, you can add just a couple of little snippets of code to get this benefit, and not use the hardware.
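
A minimal sketch of that frame-reduction idea, assuming a fixed sampling stride and a stand-in for the actual model call:

```python
import cv2

STRIDE = 5  # assumption: inspecting every 5th frame is enough for this use case

def analyze(frame):
    """Stand-in for the real inference call; swap in an OpenVINO model here."""
    print("would run inference on a frame of shape", frame.shape)

cap = cv2.VideoCapture("pipe_inspection.mp4")   # hypothetical field footage
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:                 # end of stream
        break
    if idx % STRIDE == 0:      # drop the frames the model doesn't need to see
        analyze(frame)
    idx += 1
cap.release()
```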

The other thing is quantization. With machine learning models there may be a lot of numerics in the calculations. I’m going to take the most famous number that most people know about—pi. It’s not really 3.14; it’s 3.14 and many digits beyond that. Well, what if you don’t need all that precision? What if you can be just as happy with the one value that most people equate with pi—that 3.14?

You can gain a lot of benefit for your model, because you’re still getting the same results, but you don’t have to worry about cranking out all those digit points as you go along.

For customers, this is huge because, again, we’re just adding a couple of lines of code with OpenVINO. And if they don’t have to get a GPU, it’s a nice, easy way to save on that hardware expense but get the same benefits.
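
To make the pi analogy concrete, here is a toy sketch of the arithmetic behind 8-bit quantization. In practice OpenVINO tooling handles this step, and the weight values below are made up:

```python
import numpy as np

# Made-up FP32 "weights" to squeeze into 8-bit integer codes.
weights = np.array([0.42, -1.337, 3.14159, 0.001], dtype=np.float32)

scale = (weights.max() - weights.min()) / 255.0   # map the FP32 range to 256 levels
zero_point = np.round(-weights.min() / scale)     # integer code that represents 0.0

q = np.clip(np.round(weights / scale + zero_point), 0, 255).astype(np.uint8)
dq = (q.astype(np.float32) - zero_point) * scale  # dequantize to check the error

print(q)    # compact 8-bit codes, e.g. [100   0 255  76]
print(dq)   # values close to the originals, with only a tiny rounding error
```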

What does an AI journey really entail from start to finish?

Audrey Reznik: There are a couple of very important steps. First we want to gather and prepare the data. Then develop the model or models, and integrate the models in application development. Next, model monitoring and management. Finally, retraining the models.

On top of the basic infrastructure, we have our Red Hat managed cloud services, which are going to help take any machine learning model all the way from gathering and preparing data—where you could use our streaming services for time-series data—to developing a model—where we have the OpenShift data service application or platform—and then to deploying that model using source-to-image. And then model monitoring and management with Red Hat OpenShift API management.

We also added in some customer-managed software, and this is where OpenVINO comes in. Again, we can develop our model, but this time we may use Intel’s oneAPI AI analytics toolkit. And if we wanted to integrate the models in app development, we could use something like OpenVINO.

And at Red Hat, we want to be able to use services and applications that other companies have already created—we don’t want to reinvent everything. For each part of the model life cycle we’ve invited various independent service vendors to come in and join this platform—a lot of open source companies have created really fantastic applications and pieces of software that will fit each step of the cycle.

The idea is that we invite all these open-source products into our platform so that people have choice—they can pick whichever solution works better for them in order to solve the particular problem they’re working on.

Ryan, how does OpenVINO work with Red Hat OpenShift?

Ryan Loney: OpenShift provides this great operator framework for us to just directly integrate OpenVINO and make it accessible through this graphical interface. Once I have an OpenVINO operator installed, I can create what’s called a model server. It takes the model or models that my data scientists have trained and optimized with OpenVINO, and gives out an API endpoint that you can connect to from your applications in OpenShift.

The way the deployment works is to use what’s called a model repository. Once the data scientists and the developer have the model ready to deploy, they can just drop it into a storage bucket and create this repository. And then every time an instance or a pod is created, it can quickly pull the model down so you can scale up.
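
A rough sketch of that flow, with assumed names throughout: the repository is just versioned directories in a bucket, and applications call the serving endpoint over a TensorFlow-Serving-style REST API:

```python
# Hypothetical repository layout the model server watches (names assumed):
#   models/parcel-classifier/1/model.xml
#   models/parcel-classifier/1/model.bin
#   models/parcel-classifier/2/...   <- drop in a new version to roll forward
import requests

# Hypothetical in-cluster service URL, port, and model name.
URL = "http://model-server.demo.svc:8000/v1/models/parcel-classifier:predict"

payload = {"inputs": [[0.0] * (224 * 224 * 3)]}  # placeholder tensor, not real data
resp = requests.post(URL, json=payload, timeout=5.0)
resp.raise_for_status()
print(resp.json()["outputs"])                    # predictions returned by the server
```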

Even if you don’t perform the quantization that Audrey mentioned earlier, OpenVINO does some things under the hood—like operation fusion and convolution fusion—things that give you a performance boost, reduce the latency, and increase the throughput, but don’t impact accuracy. These are some of the reasons why our customers are using OpenVINO: to squeeze out a little bit more performance, and also to reduce resource consumption compared to deploying with a deep-learning framework alone.

What’s the best way to get started on a successful AI journey?

Audrey Reznik: One of my colleagues wrote an article that said the best data science environment to work on isn’t your laptop. He was alluding to the fact that when they first start out, usually what data scientists will do is put everything on their laptops. It’s very easy to access; they can load whatever they want to; they know that their environment isn’t going to change.

But they’re not looking towards the future: How do you scale something on a laptop? How do you share that something on the laptop? How do you upgrade?

But when you have a base environment, something everybody is using, it’s very easy to upgrade that environment, to increase the memory, increase the CPU resources being used, add another managed service. You also have something that’s reproducible. And that’s all key, because you want to be able to take whatever you’ve created and then be able to deploy it successfully.

So if you’re going to start your AI journey, please try to find a platform. Something that will allow you to explore your data, to develop, train, deploy, and retrain your model. Something that will allow you to work with your application engineers. You want to be able to do all those steps very easily—without using chewing gum and duct tape in order to get to production.

Related Content

To learn more about AI and the latest OpenVINO release, read AI Developers Innovate with Intel® OpenVINO 2022.1 and listen to Deploy AI Apps with Intel® OpenVINO and Red Hat. Keep up with the latest innovations from Intel and Red Hat by following them on Twitter at @Inteliot and @RedHat, and on LinkedIn at Intel-Internet-of-Things and Red-Hat.

This article was edited by Erin Noble, copy editor.

The Power of Omnichannel Experiences with meldCX and Intel®


Customer interactions have gone digital. Whether shopping online, ordering food, or checking into a hotel, people expect the same level of convenience online or in person. This creates pressure for retailers to implement new technologies and transform physical spaces. But if done correctly, the shift can have huge benefits beyond the customer experience.

For instance, imagine if retailers could use new digital solutions in stores to track and predict every touchpoint in the customer journey just as they would online? With companies like meldCX and Intel®, it’s becoming more and more possible. In this podcast, we talk about the evolution of customer experiences, what retailers can do to meld physical and virtual stores together, and what a successful omnichannel experience looks like.

Our Guests: meldCX and Intel®

Our guests this episode are Stephen Borg, Co-Founder and CEO of AI technology company meldCX, and Chris O’Malley, Director of Marketing for the Internet of Things Group at Intel®.

At meldCX, Stephen works with businesses to create premier customer experiences powered by AI at the edge. Previously, he was CEO of device manufacturer AOPEN, where he remains a board member.

Chris, who has been with Intel for more than 20 years, focuses on technology and solutions used in retail, banking, hospitality, and entertainment.

Podcast Topics

Stephen and Chris answer our questions about:

  • (2:54) The evolution of customer experiences
  • (5:28) How retailers are adapting to these changes
  • (7:33) Top retail pain points when working with new technologies
  • (12:51) What a successful retail omnichannel looks like
  • (14:47) How to gain more value from your business
  • (19:17) Making sense of available retail data
  • (21:22) The importance of a partner ecosystem
  • (26:54) Future-proofing your technology investments

Related Content

For the latest innovations from meldCX, follow them on LinkedIn.

 

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

 

Transcript

Christina Cardoza: Hello, and welcome to the IoT chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about omnichannel customer experiences with Stephen Borg from meldCX, and Chris O’Malley from Intel. Hey guys, thanks for joining the podcast today.

Chris O’Malley: Thank you. How are you doing today?

Christina Cardoza: Great. Great.

Stephen Borg: Thank you. Thanks for having me.

Christina Cardoza: Yeah, of course. Before we get started, I think our listeners would love to hear a little bit more about you, and who we’re about to speak to. So Stephen, I’ll start with you. What can you tell us about yourself and why you started meldCX?

Stephen Borg: I’m the Co-Founder and CEO of meldCX. At the time I was working for a few large groups consulting in this area, and we actually designed meldCX on a napkin seven years ago, but the tech was just not there to build it. So we decided around five years ago, when we saw some of the tech emerging that was relevant to us, to really start the process. We really built it—and that’s what the name stands for—to take legacy and meld it with current technology and create great experiences. And that’s really what we do. We realize our customers have some legacy debt that they have to deal with, and we take that information and bring it out and really create an experience for customers.

Christina Cardoza: And Chris, welcome back to the podcast. For our listeners, why don’t you refresh their memories about what you’re up to at Intel these days.

Chris O’Malley: Sure. So my name is Chris O’Malley. As Christina mentioned, I’m at the Intel Corporation. I am a Marketing Director for the Internet of Things group. I focus on essentially the technology used in retail, hospitality, banking, and entertainment segments. Primarily, you know, we’re thinking of, how does technology drive experiences in stores? And our whole goal is to support technologies like Stephen’s company, meldCX, works on.

Christina Cardoza: Great. And I should mention that insight.tech, the program and the IoT Chat podcast, are owned by Intel. So it’s great to have somebody from the company representing this conversation today.

Stephen, I love how you mentioned in your intro that when you started the company, the technology wasn’t there, but now we’re seeing the technology rapidly advance. And in addition to that, over the last couple of years these customer experiences in retail, hospitality—all of these different industries—have just completely changed, and have had to change. But, you know, it’s great that we have this technology to be able to do that. Not everybody knows how to utilize the technology or how to change—what’s the right change for them. So why don’t you start by telling us a little bit about what you’re seeing in this customer experience, how you’re seeing it evolve across different industries.

Stephen Borg: It’s interesting. I think the whole COVID situation, multiple lockdowns, has really accelerated the curve, right? We talk to our customers, and they still have the same issues: they need to increase the level of service without as many resources, whether that’s budgetary or because they can’t get access to the correct staff. So you’ve got this expectation of needs increasing, while you have less ability to facilitate those customers. So what we’re finding is that when customers do venture out—and this is feedback from our customers or our key enterprise customers—when their customers do venture out, they expect a higher degree of service. They expect the demonstration of cleanliness, right? That they’re respecting the situation and following process. And they also expect a higher degree of engagement.

So how do you do that while either reducing costs or with less resources? And we have customers that can’t even hire to the needs they have to facilitate. So what we’re seeing is that they’re starting to turn to technology that takes the transactional elements or these elements that use up resources that are not customer facing, and redirecting those resources to creating great experiences. And we’re seeing that in hotels—from seamless check-in and check-out; in grocery, transactional items that slow customers down in either self-checkout or in actual checkout—you know, things like processing fresh produce. And we’re seeing that really across the board: what are the opportunities to reduce friction, create automation, but increase engagement?

Christina Cardoza: You mentioned the pandemic, how that sort of accelerated things for businesses. And I imagine a lot of these businesses were forced into a digital transformation they weren’t quite prepared for. So they’ve had to make a lot of changes on the fly to stay competitive. And now that we’ve had some time to look back at those changes that were made, Chris, I’m wondering from your perspective how well have these industries, like Stephen just mentioned—retail, grocery stores, hospitality—how well have they been dealing with the changes that need to be happening and taking on this digital transformation?

Chris O’Malley: You know what? It’s a, it’s a mixed bag, to be honest. You know, all the trends or the challenges that retail was facing prior to COVID, they still exist. You know, three years ago we were talking about frictionless—the millennials, digital natives, have been growing. They don’t like to talk to people. So it’s been self-checkout, wayfinding key, that type of thing—engaging with technology as opposed to humans. There’s been inventory and supply challenges. There’s been this—you know, there’s been some increasing theft, there’s been the need for inventory. But the reality is they were, we were, kind of at the slow level of growth. You know, it was nice to have this technology, but it wasn’t absolutely necessary.

What we found is the people that, or the companies that, started to invest in this type of technology prior to COVID, now that COVID’s hit us the ones who invested previously are doing really well. The ones who don’t are trying to build this entire technology structure without any previous investment. And they’re struggling greatly with it. You know, the biggest thing Stephen mentioned—it’s amplified or accelerated the trends. That’s what we’re seeing. It’s absolutely accelerated. And I think it’s because of this worldwide labor shortage. There’s a lot of jobs that they literally cannot hire for, or if they can hire for it, they have to hire at a wage rate that quite frankly they can’t sustain profitably. So they’re looking at, how can I automate, how can I use insights of computer vision to give people the experience they want?

After, you know, during COVID, many companies went in with almost no one-to-one digital contact with their customers. You know, for two years we did online ordering, we did mobile ordering, we did curbside pickup. So companies now have this massive relationship with customers digitally. If they don’t know how to deal with that data, if they don’t know how to personalize with that data, they’re really struggling. So we’re really seeing the companies that invested prior to COVID taking off, and the other ones are playing come-from-behind, really struggling to put the IT infrastructure in place, how to use computer vision and things like that, to make it really valuable.

Christina Cardoza: Stephen, I imagine you’re working very closely with a lot of these businesses that are struggling with these things, like infrastructure or to adapt to these changes. What have been the top pain points or challenges that you’ve been seeing, and how can they address these now going forward? Or how can they implement these new technologies and work with meldCX to go down a better path?

Stephen Borg: Yeah, I think there’s a few areas. One, when we started out with meldCX, I mentioned earlier that we took a little bit of a pause and made sure the technology was available. The reason for that is when we went out with meldCX, we wanted to create a solution out of the box where you could simply plug and play. You don’t need data scientists, you don’t need a massive team to stand up what we see as the most common aspects of computer vision. So, analytics tracking, inventory, those types of things you can just plug and play out of the box and get going. So that was one of the first things we wanted to do.

And then, secondly, we wanted to create a method where you can take—and we found a lot of customers in this state, where they had to furlough some of their team members during the pandemic and couldn’t get them back—so we found a lot of customers that were, say, three-quarters through a project, where they had existing investment in some models. You can take those models and pump them down into meld, because we use OpenVINO, and mix them with other models we have to complement them. So we’ve got this thing called a mixer, and it blends models together and gives you an outcome.

And then, thirdly, we actually created a service that if you have a specific use case or a problem you’re trying to solve, we can go ahead and create that model for you. We have a synthetic data lab because we face the same issues of getting content or getting the right video to create these scenarios. So we have a synthetic data lab.

So what we’re finding is that now that the level of engagement is cross business, customers are very invested, and we’re finding that we have—unlike the past, where we might have an IT stakeholder or a marketing stakeholder—we have everyone at the table, because they see the benefit and they really drill down into understanding what they need. So we sort of advise customers to start with computer vision, start with out-of-the-box modules and then go from there, because they really don’t, most of them don’t comprehend the power of it. And what we try and explain to customers is that this is an amplification of your existing capability, right? So you put your machine vision in, or your models, and it either feeds you data, or automates a function, or creates a cause and effect to enable your staff to do more with less. So really we say, start at what’s out of the box, experiment with it. And then we have our team work with them to try and really drill down into that problem-solving phase for future growth.

Christina Cardoza: Chris, is there anything you wanted to add about challenges that you’re seeing, and how tools and technologies today can address those?

Chris O’Malley: You know, yeah. The big thing, and I think Stephen addressed it already, is especially with the name meldCX, melding the old with the new technology. So computer vision is great. It’s a great technology and it really can, you know, figure out how to deploy the people where you need them most. But what’s exciting about some of the technology that meld offers is, say you don’t want to do the full investment into a new camera setup right now. You may have security cameras already there. You know, meldCX could take those feeds right away, load some models on that, and get basic data just from the get-go. So you don’t have to do this massive investment to start to get data.

Now, what we find is once customers actually start to realize that, and they see what the computer vision can provide, then they’re interested in investing further. And they say, “Wow, you know, instead of just looking at operational-type stuff—is there a liquid on the floor that I need to clean up? As you know, 30 people enter a bathroom and now I need to go clean the bathroom. And that slot machine’s been used 150 times, I want to clean it now.” Or something like that. They start to add it to more and more. They see the power of it, so they start to add more and more and more. And what I find about that is it’s great, because this is, this type of technology, is not coming to an end.

We’re at the beginning right now. You know, I wouldn’t be Intel if I didn’t say Moore’s Law: we’re doubling the performance of our technology every two years. The corollary to that is that technology becomes cheaper every single year, too. I mean, the cost of technology is reduced by more than half every two years. So you start adding compute to everything. And when you start adding compute to everything, there’s an immense amount of data. So you need technology like this to start making insights, those actionable insights that are valuable for your company.

Christina Cardoza: Now, we’ve been discussing how these physical spaces can transform themselves: grocery stores, restaurants we’ve mentioned. But there’s really a bigger piece to this, which is the online aspect. I think sometimes we tend to think of e-commerce and physical retail as two separate things, but today they’re sort of merging and melding, like you explained. So Stephen, when we talk about a retail omnichannel experience, what does this really mean? And what is it touching?

Stephen Borg: Yeah. And I think it’s really touching on every aspect of your customer’s journey, right? I think there’s been a lot of focus on mobile, a lot of focus on web, but connecting mobile to web in a single, seamless experience has not been something that we’ve seen when it comes to connecting those two to an in-store or a physical contact point. And often we find they’re completely different experiences, right?

So what we’re finding is by using data or connecting those dots—one thing about meldCX, we not only have our base that gives you computer vision, but we have these modules that you can load on existing devices, legacy devices, and new, that allows you to connect those dots.

So for example, connecting that computer vision to an event that occurs locally—it could be providing access to a digital locker based on your token, or having that seamless experience where you’re doing it anonymously but your last order comes up on the screen. It doesn’t know who you are; it just knows what your last order is. Or when you go to a self-service device and it knows you’ve used it multiple times, it doesn’t go through all the instructions again. It just goes to your last left-off point. So all of these little, subtle things that are done anonymously but create convenience and context are what we’re starting to see. And it seems to be best practice, and we’re seeing some good results from it.

Christina Cardoza: So how would you suggest businesses get the best of those—both worlds, and really connect the dots? How do you start on this omnichannel journey and make sure that you’re providing the right value to each platform, and those platforms are all connected together so that you’re getting even more value into your business?

Stephen Borg: I think we start with, and as Chris was saying, we start with simple measurement—understanding your environment the best you can and trying to connect those contact points.

So we had a recent customer with a scenario that no one anticipated from our data. They’re a large electronics retailer, and they sell CDs and DVDs. And you’d think that’s a dwindling area, right? Netflix and online streaming. You’d think that would be something that a retailer wouldn’t give much credence to. But what we’re finding, especially during holiday season when people travel, is that in those travel locations they might not have the data that’s required, or they might not have the setup in an Airbnb that’s required to use their own Netflix or their own Hulu. And we’re finding that people will go into these destinations and look for CDs and DVDs. And typically they won’t buy them; they’ll go buy something like an Xbox or an Apple mini or gaming. So we found with this client that although all the online data indicated customers weren’t interested in these things, unless they had that destination or that base still there, they wouldn’t cross-sell to these other items, right? And when they reduced it, you’d think there wouldn’t be an impact because it’s not a high-selling area, but when they reduced it, their peripheral sales reduced. And without that data, they would never know that.

So they were relying on online data to dictate what behavior is in-store. And that simple measurement task indicated that if they don’t keep this area, they don’t get peripheral sales, because it’s actually a destination for browsing, especially when going to regional areas. Or those customers are destination shopping as they’re about to go away. And we actually increased the sales in one area, and it increased peripheral sales. So that type of data, you wouldn’t pick it up in sales data. You wouldn’t pick it up in online data, but you’re picking it up by using that anonymous tracking data, hotspot data, and associated sales data.

Chris O’Malley: If I could interject here, what he references is pretty interesting. So, in the last 10, 15 years, advertisement online has been really eating up a lot of the market share. And a lot of it’s been because you’ve been able to track behaviors. If you showed an advertisement to you, Christina, and you clicked on it, they’d know that it had some sort of an influence and they could pay for that.

When you go in-store, there was none of that information. There was no attribution. There was no success. Did my digital advertisements in-store do anything? But with the technology that meld is offering, or computer vision is offering, you now have that ability to figure out, is my campaign working? Was it actually influencing the people? Were they happy with it? Were they unhappy with it? Were they engaged with it? And you can change that. That’s never really existed before until you have the advent of computer vision.

And that’s pretty powerful, especially for a retailer, because you can now start to monetize some of those things as well, but you can also figure out, how do you change your display? How do you change the technology you’re using? How do you change associate activities? All those different things, because you’re going to pick up all this powerful data, which you never had access to before. You know, I’m in marketing, we always say 50% of the money that we spend is useless, 50% is valuable. We just don’t know which one is which. With the technology that Stephen has, we’re starting to be able to figure that in-store. What’s valuable? What’s not. And then you can really start to target these things that make them a lot better.

Stephen Borg: For example, we have another retailer that’s taken their front-end bay, or the bay that you see when you walk in, and they monitor it and monetize it based on you, the customer, touching the product that’s on the shelf. So instead of paying to be in that bay, now they pay for every click or every touch of individual product that’s in that bay. And then monetizing like a website.

Christina Cardoza: We’re talking about massive amounts of data that we can collect now—custom behavior, how they’re moving, and what they’re doing online—connecting those dots together. Now that we know how to collect all of this data and what we want to be collecting and looking at, how can we make sure that we’re making sense of the data? How do we analyze it and process this data and make sure that it’s accurate to make more informed decisions? Chris, I’ll start with you.

Chris O’Malley: Sure. Yeah. You know, that’s the mixed blessing of data. There’s a huge amount of data out there that goes unused, and it’s very valuable. So I think one of the first things to address with any retailer is that a lot of legacy retailers in particular have very siloed data. The POS data is here and it never gets shared; there’s kiosk data in-store here; there’s mobile data over here; there’s online data over here. The data is never shared between them all. That doesn’t deliver a lot of value when we desire personalization. You only know a little snippet about the customer in each of the separate activities.

When you move to a modern kind of an edge, or a modern microservices-based architecture, where you have kind of a shared data or a data lake, and every single one of these experiences can access that same data, that’s when you can start to make sense of all of that data. The other thing that’s incumbent is you’ve got to make sure that you standardize the data. You know, how do you store the data so that every app that you’re running on top can kind of access the same set of data, understands exactly the importance of that data, and then figure that out?

And then the other thing that frankly starts to happen—and Stephen’s already referenced this a little bit—with big data analytics, and we’re at the advent of that as well, you could start to look at pieces of data that outwardly to us make no sense. There’s no correlation, there’s no relation. But if you see it over and over on big data, you can actually make the correlation and figure out that actually, yes, this product A does influence product B. And you can start to set that up that way. Those are things that you don’t even know about, but it all, it all comes down to how you set your architecture up—make sure that data is shared by all the different apps.

Christina Cardoza: And I imagine this data we talked about coming from cameras, we’re doing online data, we’re watching customers in-store, tracking their movements, their patterns, seeing what attracts them, depending on where products are placed. So I can imagine that you’re not just using one solution, or you can’t just do it alone with one company. Stephen, are you working with partners like Intel? How do you use the ecosystem that’s available out there to make this possible for businesses?

Stephen Borg: Yeah, and I guess we see there are multiple types of data. So there’s some data that is immediately actionable. For example, we work with a large hotel chain, and when their room keys are out on their vending machines or on their kiosks, or there’s something that needs to be refilled, we actually push that data through an immediate alert. So they use something like a Salesforce communication app with their staff. We don’t necessarily store that data; we’ll just have an event recording that we executed that command. So sometimes we don’t store the data; it’s immediately actionable.

Or, as Chris mentioned earlier, we feel that the front desk has hit a threshold and it needs to be cleaned, right? So there’s that type of data. And there’s also the historical data or multisource data that you’re trying to get insights out of. In that case, yeah, we do. We work with Intel from an OpenVINO perspective to make sure that our models are optimized and can coexist with other applications. One thing that we found with OpenVINO in particular is that it means we need less heavy infrastructure at the edge, which significantly reduces cost. So that’s a great aspect.

And we work with partners such as Microsoft, Google, Snowflake, to provide customers the data set in the way they wish to consume them. In addition, we have a very comprehensive—and this is one of the things that we initially struggled with—we were providing the data, and customers did not have either the resources or the understanding of how to mine that data effectively.

So we have a comprehensive suite of dashboards that you can use depending on your role in retail. So if you are operational, you can use the operational dashboards; or if you’re marketing or product, you’ll use those bot dashboards. In addition, you can feed your existing data lake or data warehouse. So what we’re finding is customers have a hybrid. They use our reports, which are customizable, and they’ll feed their main data source and start to do integrating into their reporting system.

So one of the aspects that we found, or one of the blockers that we found, is that we didn’t want customers to need to make a big change to their data warehouse or data lake just to experiment with the technology; we found that to be a blocker because they’re really resource poor. So you sign on, we pull your persona, and we give you the data that’s relevant to your role.

Chris O’Malley: So, one thing I’d like to add—Stephen referenced the importance of real-time actionable insights. One of the things we’re seeing is that if you go to the cloud, you’re going to encounter some element of latency. For some things that’s perfectly fine; latency doesn’t matter, and those things are fine to put into the cloud. But for a lot of activities that happen at a retail store, you may want absolute real time, and you need to do that at the edge.

And the other thing that’s happening—we already referenced that compute is getting so cheap that stores are adding more and more of it. There’s smart-building technology, there’s IoT sensor data all around these stores, there’s computer vision technology. So your data is actually growing significantly faster than the cost of your connectivity to the cloud is falling. In theory you could run all of this video data and computer vision in the cloud; the problem is that the cost of connecting to the cloud to get those insights is going to explode, and it’s going to exceed the value. You have to do this type of work at the edge. And that’s where Intel with OpenVINO is very much optimized for efficiently using edge capability to do the inference and get the real-time analytics that you need. That’s where we’ve been focused, and that’s where we see it being really important.

I think some of the data we’re seeing is that data volumes are growing so large that 95%-plus of the data is actually going to be processed and disposed of at the edge. Only a certain amount, let’s say 5% or less, is ultimately going to go to the cloud for permanent storage, analytics, and things like that. But it’s the key metadata. The rest is going to be processed at the edge. So the edge is absolutely growing rapidly.

Stephen Borg: And that’s what we’re finding. We don’t send any video to the cloud; we extract everything we need at the edge. And we do that for two reasons. One, to reduce cost. And, most importantly, we do it because we anonymize all content at the edge for privacy reasons. That way there’s no instance of any private data going into the cloud or through our system. It’s all stripped out by that edge device and OpenVINO.

Chris O’Malley: Yeah. And with GDPR, that is really important. Analytics have to be anonymous, and anything that isn’t has to be deleted at the edge. You just gather the data that’s important; no images, nothing like that, is ever sent to the cloud. It’s not allowed to be sent to the cloud.
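
The pattern the speakers describe, inferring locally, discarding the frames, and forwarding only anonymous metadata, can be sketched in a few lines of Python. This is purely illustrative: the model file is a placeholder, and the SSD-style output layout is an assumption, not a description of any vendor’s production code.

    import cv2
    import numpy as np
    from openvino.runtime import Core  # OpenVINO 2022.x Python API

    core = Core()
    # Placeholder person-detection model in OpenVINO IR format.
    compiled = core.compile_model("person-detection.xml", "AUTO")
    out = compiled.output(0)

    def frame_to_metadata(frame: np.ndarray) -> dict:
        """Infer on-device and return only anonymous counts; the frame
        itself is never stored or transmitted."""
        n, c, h, w = compiled.input(0).shape
        blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis]
        detections = compiled([blob.astype(np.float32)])[out]
        people = int((detections[0, 0, :, 2] > 0.5).sum())  # confidence column
        return {"people_count": people}  # only this metadata goes upstream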

Christina Cardoza: Since you’ve both brought up cost as an aspect of this, Chris, I want to come back to something you mentioned earlier, which is that you have to look at your infrastructure, change things, and consider the legacy technology that you do have. But I know some businesses can be worried about introducing new technologies to their infrastructure, and whether it’s going to be a smart investment in the long run. So how can they ensure that the technologies they’re using, and the infrastructure they’re changing, are going to meet their needs today, but also be able to scale to meet their needs tomorrow?

Chris O’Malley: Got it. So, from an architecture standpoint, the first thing is that you have to be future-proof. Whatever you do, it has to be future-proof. And that’s why we think you need an open, what we call microservices-based, architecture. The siloed architecture, which worked well in the past, fails as you keep adding new technologies and integrating new data; it becomes so difficult that the cost of integration is just going to overwhelm you.

If you build that modern architecture, you can start, by the way, from your online channel, then add mobile, and build down from there. Most people go down to either the restaurant or the retail level. The last thing to be integrated into that modern architecture is probably the point of sale. That’s one of these sacrosanct things, but once you’ve built everything else, eventually the POS can become sort of an app on that architecture as well. And then, because you’ve got the data infrastructure set up, as soon as you add a new technology it’s much easier to just drop it in. It almost becomes a new app placed on top of an existing infrastructure. It’s very easy to launch those new apps, and you can get to market a lot quicker than if you had to integrate the old way with siloed technology.

Christina Cardoza: And we’ve been talking a lot about the business benefits that these organizations are getting by introducing these technologies and creating these omnichannel customer experiences. But Stephen, I’m wondering, how are the customers dealing with all of these changes? What are the benefits that they’re getting from digital signage or video analytics?

Stephen Borg: What we’re finding is that if it’s done with privacy in mind, customers respond to it quite well. Either they have a frictionless experience, getting through the checkout quicker, or the staff member has information that’s relevant to them at the time, maybe tailored. So they’re getting content that’s tailored to their persona type or based on their frequency of visits. All this can still be done anonymously, but it can create context or awareness. So we’re finding that if you’re providing a frictionless experience, the staff member is not just focused on the transaction. We work with some financial institutions as well, where we found those staff members could have more of a conversation rather than focusing on some of the transactional aspects. That increased customer engagement is welcomed, but customers still want to know it’s a clean and safe environment, physically contactless where possible, though there is still a rich need for some engagement. One of the things we’ve found from the pandemic is that some shopping has become even more social: some countries still have lockdown restrictions, and when people do get out, they want to be engaged.

Christina Cardoza: So, unfortunately we’re nearing the end of our time today, but Chris, I wanted to give you a chance to add any final thoughts or key takeaways to our listeners today as they go on this omnichannel customer-experience journey, and continue to refine it in the years to come.

Chris O’Malley: Got it. You know, I think the critical thing that we’ve mentioned already is that customers want this frictionless experience. They want the personalized experience that they can get online; they want that in-store. But they also still like the socialization of being in-store. That type of thing is still very important, especially in today’s environment. And it can be done with this technology. It can absolutely be done.

You can have the great parts of shopping that everybody still loves, but you can bring in that goodness of online through all of these tools. But as a retailer or a casino or hospitality venue, you also have the ability to replace human resources in some instances. The worldwide labor shortage we’ve referenced is real. These venues are struggling to hire people; they’re desperate to hire people. Many restaurants can’t fill all of their seats because they don’t have enough staff, and the same thing is happening in hospitality and entertainment venues. If you can take some of the things that were, or still could be, done by humans, and automate them or replace them with compute, then you can hold back your valuable human resources for the stuff people really like, which is the interaction. It’s the talking; it’s the setting up of your experience.

That’s what you really need to do: focus your human resources, your human talent, on that interaction people really like, to really drive experiences. And all of that stuff in the background, all the operational work, the inventory, the insights, set that up with computer vision and automation, which excels at it. That’s what it’s really good at.

Christina Cardoza: Yeah. That’s a great point. You know, as an end user of some of these things—grocery stores, checking into hotels, ordering food online to go—I’m already seeing such a huge benefit from the implementation of these technologies. And I can’t wait to see it advance in the years to come. So with that, I just want to thank you both for joining the podcast today.

Chris O’Malley: Alrighty. Thank you very much. Have a good day. We’ll see you, Stephen.

Christina Cardoza: And thanks to our listeners for joining us today. If you liked this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And, until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Omnichannel Shopping Twist: Bringing Online In-Store

Sometimes you just want to walk into a pharmacy, grab what you need, and leave. That’s not always possible when you have to rummage through shelves or stand in long lines at the counter. But thanks to digital innovation, a trip to the pharmacy has become a more satisfying, novel experience.

That’s just the case at CarePlus Pharmacies in Ireland, where a new digital retail solution combines interactive screens and backroom robotics to enhance the shopping experience while helping stores grow sales, minimize shrinkage, and optimize staff.

The solution, developed by ScreenVend, a global software and technology company, offers an omnichannel twist. It brings the online shopping model to physical stores. Instead of searching for products on the shelves, shoppers find what they need on interactive touchscreens. A retail robotics system picks the items and dispenses them at the POS station.

“What you would normally do at home, you can now do in-store,” says Simon Healy, chairman of ScreenVend. “Customers can use technology in an immersive way and have the benefit of instant fulfillment.” The company calls this hybrid approach of combining digital with physical shopping “ClicksiNBricks.”

Digital Retail Delivers Personalized Service

Unlike many technology solutions, ScreenVend was the brainchild of non-tech people. Healy, a retail veteran, saw a specific need in pharmacies: to offer customers the speed and convenience of online shopping while freeing up pharmacists and associates to provide personalized services. He studied the pharmacy ecosystem to figure out how to improve it, and noticed that pharmacies are highly transactional environments, even though at times customers need individual assistance.

Retailers were using digital screens in-store, but mostly for signage. Then it occurred to him to replace shelves with interactive displays paired with instant robotic delivery in-store. The displays don’t replace staff, but rather make pharmacists and associates more available to help customers. “What pharmacists all over the world are very good at is patient interaction and positioning themselves in the community as professional health advisors,” Healy says.

As doctors have gotten busier over the years, people rely more on pharmacists. “In our particular business, what we want to do is make greater use of consultation rooms to assure that pharmacist’s knowledge or the technician’s knowledge is used to its best.”

ScreenVend displays don’t replace staff, but rather make #pharmacists and associates more available to help #customers. ScreenVend via @insightdottech

Interactive Digital Displays Inform and Upsell

Now, when shoppers walk into a ScreenVend-equipped pharmacy, they come face to face with the digital displays. For the uninitiated, help is available from an associate. Otherwise, shoppers head to the screens to make their purchases with taps and swipes (Video 1).

https://vimeo.com/712065431/980481db17

Video 1. A ScreenVend-equipped pharmacy in action. (Source: ScreenVend)

They fill their virtual shopping carts and then tap a prompt to complete the transaction. A paper slip with a QR code comes out of a slot by the digital display. The QR code instructs the POS system to complete the order, which is pulled and fulfilled behind the scenes by a robot with capacity for 25,000 boxes.
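
As a sketch of how such a kiosk-to-POS handoff could work: the payload format, secret key, and item codes below are invented for illustration, and the open-source qrcode package is simply one convenient way to generate the code, not necessarily what ScreenVend uses.

    import hashlib, hmac, json
    import qrcode  # pip install qrcode[pil]

    SECRET = b"store-shared-secret"  # hypothetical key shared with the POS

    def make_order_slip(order_id: str, items: list) -> None:
        # Encode the cart as a signed token so the POS can trust the slip.
        payload = json.dumps({"order": order_id, "items": items})
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        qrcode.make(payload + "|" + sig).save("slip.png")
        # The POS scans the code, verifies the signature, takes payment,
        # and signals the robot to pick and dispense the listed items.

    make_order_slip("A1027", ["paracetamol-500", "vitamin-c-1000"])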

As shoppers pay for their purchases, the items are automatically dispensed through a conveyance at the POS. “We managed to create a complete new retail experience for the customer and for our pharmacists,” says Healy.

In the process of improving the shopping experience, the solution also helps pharmacies boost sales and reduce shrinkage. “Within the software itself, there are numerous upselling and cross-selling opportunities. The customer can see what products are complementary, and that is very important.”

The unique use of robot technology helps address shrinkage, a nagging problem in retail caused by theft and errors. In addition, centralized software management cuts down on errors when setting and updating prices.

ScreenVend can also add an element of theater. In store environments where the robots are visible behind glass, customers can enjoy watching their orders fulfilled. The solution makes it possible to operate in a smaller footprint than in a traditional store, but it can scale. More screens can be added as traffic increases in a store, Healy says.

Digital Retail Delivers New Opportunities

Although ScreenVend was developed for pharmacies, it is suited to other retail environments. For example, at shopping centers, the solution can enable pop-up stores. “With the flick of a button, the software can turn a gadget store in the morning into a bookstore in the afternoon,” Healy says. “ScreenVend would also work in small spaces within box stores in a store-within-a-store approach.”

The ScreenVend platform uses the Intel® NUC, a rugged PC with a small footprint, to power the digital displays, and Intel® processors for the dispensing robots, touchscreens, POS stations, and tablets.

The relationship with Intel is key. “We’re effectively a technology startup and I think we’ve been very well treated by Intel in a very personalized way,” says Healy.

That level of attention should prove valuable as ScreenVend expands into other areas of retail. As part of its pitch to retailers, ScreenVend will offer services around the technical aspects of implementing and running the retail robotics.

“We will also offer design consultation and customization of digital displays to ensure that each brand has a unique look and feel,” Healy says.

He hopes retailers will grasp the value of the solution, which lets customers keep that brand relationship through physical in-store experiences while using all the digital tools available online. That’s certainly a goal of ScreenVend, and it should be the goal of any retailer operating in both the online and physical retail space.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Self-Service Kiosks Transform Digital Banking

Are we headed for a cashless society? The question has fueled debate for years, but the majority of consumers and businesses don’t think it’s in the cards. M3 Technology Solutions (M3t), a turnkey kiosk and management system solutions provider, isn’t taking sides. Its self-service multifunction kiosks let consumers decide for themselves.

Think of M3t kiosks as ATMs on steroids. They provide basic ATM withdrawal and deposit functions, and add digital banking services such as bill breaking, loading cash into digital wallets, converting cashless payments to cash, and short-term credit lines.

“A rich set of functionality makes the technology appealing to users,” says M3t Chief Operating Officer Dylan Waddle. “It gives the consumer a significant amount of flexibility, if you will, in what they’re doing with their currency.”

The financial kiosks have been popping up in places like convenience stores, gas stations, and casinos. They are especially popular with retailers that run unattended convenience stores. “People can put cash in, either load a prepaid card, or get a QR code to go buy merchandise, come back and get their change,” says Waddle.

Banks, of course, are also interested. They’ve been placing kiosks in lobbies and other locations, and may eventually replace their ATMs with them. “Banks are looking at self-service technology in a bigger way,” says Waddle. “Larger financial institutions like JP Morgan are starting to push more and more functionality to the self-service kiosk as quickly as they can.”

Whether you’re a bank, casino, or retailer, a self-service kiosk is a tidy way to offer a digital experience to consumers looking for convenience, flexibility, and speed. The technology also helps businesses optimize staff utilization and improve cash management.

Digital Banking Services Drive New Opportunities

M3t got into the kiosk business a decade ago because it saw an opportunity in the trend toward cashless transactions. “Our core focus from inception was on back-office management systems, specifically businesses that manage a lot of physical currency across multiple sites,” Waddle says.

Rather than outsourcing manufacturing, the company decided to set up its own factories in the U.S. “We determined that in order to be successful we were going to have to become fully vertically integrated, so we started manufacturing kiosks.”

So now, when businesses buy M3t’s backroom management platform, they can get the kiosks and infrastructure that go with it. The platform allows businesses to automate their cash management. With its software, businesses can track the movement of cash at every step. A bank, for instance, can track cash as it flows in and out of the vault, cash dispensers, and teller stations.

The M3t kiosk gives the consumer a significant amount of flexibility in what they’re doing with their #currency. M3 Technology Solutions via @insightdottech

Self-Service Kiosks Everywhere

Waddle believes the self-service banking kiosks will become ubiquitous in the near future. For example, M3t is looking at deploying systems at entertainment venues and universities. “We’re getting ready to put kiosks in Busch Stadium (St. Louis Cardinals), American Family Field (Milwaukee Brewers), and Notre Dame.”

Outdoor spaces are a major focus. Until now, kiosks have predominantly resided indoors, but pretty much anywhere people need to access cash, whether in legal tender or digital form, is a good place for kiosks. And their appeal to consumers is sure to increase as more functionality is added.

Already, consumers can have a range of interactions with the machines. For instance, a kiosk will break a $100 or $50 bill into smaller denominations. Through a service called UltraCash, users can set up a six-day line of credit that taps their bank funds. Another feature, called UltraCard, lets users move cash to prepaid open-loop cards such as Mastercard, which can be used anywhere Mastercard is accepted.

“That makes us pretty unique because you can have more access to cash from our terminals than at a standard ATM,” Waddle says.

For consumers who still prefer cash, the kiosks let them use even cashless stores. “They come in and put cash into our kiosk. They get a QR code or they get the funds loaded to their phone. They go get what they want, come back, and get their change at the kiosk—and their receipt if they want a physical receipt,” Waddle says.

Secure Systems with Digital Technologies and Rugged Designs

With all the functionality they deliver, M3t’s self-service kiosks store a lot more cash than a typical ATM. “Because of that, they’re typically made of 12-gauge steel—heavy-duty, hardened solutions that get bolted to the ground, bolted to the wall. It may not even come out if you tried to drag it out with a car,” says Waddle.

Another security layer involves sirens and alarms that go off at the terminal while sending alerts through the cloud to a management console. Administrators can also access the management software from a mobile app. And the technology is compliant with PCI (Payment Card Industry) standards and follows cloud infrastructure security protocols.

To further enhance security and convenience, Waddle says, M3t is working with Intel® to add biometric recognition. Eventually, accountholders won’t need a card or phone to use a kiosk.

The new functionality will expand M3t’s relationship with Intel, which already provides high-performance processors and other technologies for the kiosks. “The overall relationship with Intel has been extremely strong and we continue to evolve what we’re doing with them,” Waddle says.

Intel will remain a key partner as M3t executes its ambitious vision for self-service kiosks, providing cash and cashless options to consumers wherever they may be. “I feel like you’re not just going to see a kiosk in every location, you’re going to see one in every single room,” says Waddle. “Companies see them as core providers that are supplementing employees in a really big way.”

For more on banking kiosks, listen to Self-Service Tech Trends in Retail, Banking, and Hospitality.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

What It Means to Be an AI Developer in 2022 and Beyond

Picture fast-food restaurants being able to tailor the kinds of food they keep on hand depending on which cars flow through the drive-thru line. Picture advanced cameras being able to detect problematic porosity in finished car parts. Or radiologists who work with a virtual assistant to sift through X-rays and surface troublesome ones for a second look.

The Importance of AI Development Today

These operational improvements all run on artificial intelligence (AI), which is seeing an explosion of use cases in practically every industry. Where there is data, there are opportunities for efficiency—and even more opportunities for AI.

A confluence of technology shifts—the growth of computing power and the development of better communication infrastructures like the 5G network—is fueling this AI revolution.

But while the AI transformation might be firmly underway, a shortage of AI developer talent could hinder its execution at scale.

“If AI is applicable to every industry, then that means we are going to need a whole lot more developers to acquire AI skills quickly,” says Bill Pearson, VP of the Network and Edge Group, and General Manager of Developer Enabling for the Internet of Things Group at Intel®.

A recent survey on the state of AI in the enterprise found that AI developers and engineers were among the top talent companies needed most. The pattern was uniform across all enterprises, whether they were seasoned, skilled, or just starting out with AI deployments. But unfortunately, much AI knowledge remains the purview of a small set of developers, according to Pearson.

This gap in AI talent might be because developers still have a few barriers to overcome before they can take up jobs in the field.

Top AI Developer Challenges and How to Overcome Them

1. Limited AI Knowledge

To create AI models, developers must first understand what AI is and what it can do. But all too often, existing documentation is geared toward experienced professionals, leaving out beginners. To infuse new AI developer talent into the pipeline quickly, the playing field needs to be leveled fast.

“Developers will be right at the center of #AI, pushing the envelope with #technology and creating some very interesting solutions that will probably blow all our minds.” – @billpearson, @intel via @insightdottech

As developers get started on an AI learning journey, they will need better documentation, hands-on training, and tools that are easy to use.

“In the past, AI has been the domain of experts. We really need to make materials more accessible, we need to democratize AI,” Pearson says. “We have to make it easy for developers to find the right materials at the right time, to make it easier for them to access the information they are looking for.”

For instance, Intel offers a variety of AI training videos and documentation for those interested in answers to specific questions about AI development. These materials are tailored to various levels of AI expertise so beginners can start exploring, while advanced developers can find answers for more detailed use cases of their applications. Intel also offers a list of prerequisites to help developers get started on their journey.

Developed by developers for developers, the Intel Edge AI certification course teaches core AI concepts and lets learners work through different use cases at their own pace. It features free tools and code samples, open-source resources, and a library of pre-trained AI models. Developers can study the code in these models to see how it can apply to their own work.

2. Too Many Choices

But with all the AI tools and resources available, it can be easy for developers to get overwhelmed. It is not always clear which tool is going to be the right one for the job. And then there are concerns about whether the tool will be a solid long-term investment. Developers have to figure out the right hardware, software, AI models, and algorithms to serve all their needs over time.

Part of democratizing AI and improving access to necessary tools is making them part of developers’ everyday workflows. “We’ve got to be developer-first in how we deliver these tools and do so with open and flexible platforms,” Pearson says.

Developers should pay particular attention to interoperability and open ecosystems that support the various tools they’ve already come to know and love.

For instance, the Intel® Distribution of OpenVINO Toolkit supports other popular AI frameworks such as TensorFlow, Caffe, PyTorch, and ONNX—so developers don’t feel locked into one choice.
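
In practice, that interoperability means a model exported from one of those frameworks can be loaded directly. Here is a minimal sketch using the OpenVINO Runtime Python API; the filename is a placeholder for any ONNX export:

    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.onnx")        # e.g. exported from PyTorch
    compiled = core.compile_model(model, "CPU")  # no separate conversion step

    # Run a dummy inference to confirm the pipeline end to end.
    dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
    result = compiled([dummy])[compiled.output(0)]
    print("output shape:", result.shape)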

In addition, the AI toolkit is easy enough for beginner developers to get started, but advanced enough to help developers scale their efforts and boost their AI skills. “It’s a toolkit that helps developers deliver faster and more accurate results using AI and computer vision inference,” says Pearson.

Since OpenVINO is open source, it has a strong developer community around it—enabling developers to get involved, take part in improving the platform, and leverage improvements that the community has made.

“That’s a great way for developers to decide not only what the best tools and frameworks are but participate in creating some of the best tools and frameworks in the industry,” says Pearson.

3. Building AI Models

Once developers have obtained the tools and resources necessary to get started on this journey, they face challenges related to developing and deploying AI models. For instance, do they have the right data to start building the model? Is it in a useful state or format? How are they going to apply it to their use case?

“Getting data to the right point where you can actually do something useful with it is one of the big challenges we have,” Pearson says. The origin of data sets and the hows of using them are additional concerns for data scientists, who need to ensure AI models are scrubbed clean of bias.

In keeping with the promise of a developer-first approach, under the hood of OpenVINO is the Open Model Zoo, a set of pre-trained models that developers can use. The set includes examples developed with industry-standard frameworks like TensorFlow, PyTorch, MXNet, and PaddlePaddle. Basing AI code on frameworks developers might already be using checks off the developer-first approach, so they don’t need to leave their workflow to benefit, Pearson explains.
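
For example, a zoo model can be fetched with the toolkit’s downloader and then loaded like any other model. In this brief sketch, the model name is one real example from the zoo, and the path reflects the downloader’s usual output layout:

    # Fetch a pre-trained model first, e.g. from a shell:
    #   omz_downloader --name person-detection-retail-0013
    from openvino.runtime import Core

    core = Core()
    model = core.read_model(
        "intel/person-detection-retail-0013/FP16/"
        "person-detection-retail-0013.xml"
    )
    compiled = core.compile_model(model, "CPU")
    print("input:", compiled.input(0).shape, "output:", compiled.output(0).shape)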

Data security is another concern that needs to be addressed for AI models that work in the cloud and at the edge. Developers and data scientists need to verify data sources and ensure ethical AI model development. “It’s not just about the application but the people and processes used to come up with the data, the algorithms. All that is part of building an ethical and equitable AI solution,” Pearson says. The OpenVINO toolkit offers an additional layer of data security with a data protection add-on.

“When you use the security add-on, it just gives you a way to have secure packaging for those models and then to execute them in a secure way. We’re able to have users who have appropriate access to the models, they’re running it within some assigned limits, and they can even run these in KVM-based virtual machines,” Pearson explains.

4. The Edge-Cloud Dilemma

Then there is the question of where developers will store, process, and analyze their data. According to Pearson, AI is a different ballgame from traditional software development, which is naturally leading to changes in how developers work.

Traditionally, most IoT devices have been proprietary, embedded fixed-function devices. But recently with the increased adoption and maturity of the cloud and cloud-native technologies, containers and orchestration have become more pervasive. As a result, developers are moving toward a software-defined, high compute development environment leveraging cloud-native technologies for IoT and AI development.

AI in the cloud is being used for heavy-duty crunching of cloud-based machine learning models, while the edge provides new opportunities to analyze AI models at the data source.

“If you’re an embedded developer building solutions in the past and now, all of a sudden, you’re trying to figure out how to capture and make sense of data using AI at the edge, that’s an entirely new paradigm,” Pearson points out. Cloud-native development is changing the landscape for developers, who have to understand the AI use cases for both cloud and edge and build models accordingly.

It’s all about understanding your business goals and objectives, according to Pearson. “Depending on the KPIs that developers have, and what they’re trying to achieve, we can help determine where the best place is to run their AI workloads,” he says.

Cloud computing offers advantages in terms of cost and scale. If what the business is trying to achieve doesn’t require secure data on-site or low latency, then cloud might be the right way to go. If there are concerns with bandwidth, security, and scale, then developers might want to consider the edge.

“I get to choose as a developer which location makes the most sense to do which task. And again, I get to scale the compute resources that I need from virtually infinite in the cloud, to perhaps much more limited, whether it’s power or performance at the edge, and I can still get the AI that I need to achieve the business goals that I’m trying to reach,” Pearson explains.

With scalable cloud-native development, workloads can easily extend to where intelligence is needed from edge to cloud.

5. The IT/OT Integration

The very nature of AI’s utility—an integration between IT and OT—presents another challenge. Developers need to figure out how operational insights at the edge can be integrated into business operations to deliver efficiencies.

Developers also have to work backward from the KPI to be fine-tuned and then figure out the right combination of hardware and software that will do the job. Depending on the KPI, teams might need different performance and power choices. “Developers have to ask, ‘What’s the right hardware to run my application that’s going to give me the results I need,’” Pearson says.

Assuming AI developers can access the know-how and get software development going, they still need to test-drive software on a variety of different hardware units. This process is not easily achievable, nor is it the most time-efficient way to get the job done. The ongoing global chip shortage compounds the problem, making hardware that uses these chips difficult to source and buy.

Intel’s DevCloud solves one of the biggest challenges for AI developers. It eliminates decision paralysis by enabling developers to test their AI solutions on hundreds of edge hardware devices.

“Developers can very quickly understand how their application is going to perform using each of these pieces of hardware and they can find out what’s going to be right for their particular solution,” Pearson says.

The latest version of the toolkit, OpenVINO™ 2022.1, also helps in this space with new hardware auto-discovery and automatic optimization, designed to make hardware-combination testing a breeze.
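
In code, that auto-discovery surfaces as the “AUTO” device and the performance hints introduced with the 2022.1 API, so one call site covers whatever hardware happens to be present (the model file below is a placeholder):

    from openvino.runtime import Core

    core = Core()
    print(core.available_devices)         # e.g. ['CPU', 'GPU'] on a given box

    model = core.read_model("model.xml")  # placeholder IR model
    compiled = core.compile_model(
        model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"}
    )
    # The same two calls run unchanged whether the target has only a CPU,
    # an integrated GPU, or another supported accelerator.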

Usually, AI development is complex because the software has to be customized for each and every end use case. In addition, the edge hardware to be used increases the number of permutations and combinations that need to be tested. The OpenVINO toolkit eliminates those complexities, Pearson says. “There’s no ‘I have to run this differently because there’s an FPGA (field programmable gate array) involved’ or ‘To take advantage of one particular hardware feature, I may have to use some different code.’”

A developer-first approach shows up in the cross-architecture, write-once-use-anywhere toolkit. “You can easily optimize, tune, and run your inference application with our model optimizer,” Pearson says. Even better, a developer who doesn’t understand the difference between a GPU and a CPU can make this work.

6. Scaling AI Efforts

Once developers get started, what comes next? The path forward is not always clear.

Intel offers advanced developers reference implementations from the Intel® Developer Catalog, a set of market-proven vertical software implementations. A developer looking to implement a defect-detection AI system or intelligent traffic management, for example, can use examples from the catalog. “You can see all the code, we’ll walk you through the implementation, and you can very quickly understand what’s going on there,” Pearson says.

AI development is not just about the software and the hardware, it is also about the environment in which it is deployed. An additional tool, Intel® Smart Edge Open, helps developers understand how to make AI applications part of an infrastructure that can be deployed in an environment. “It’s important for developers to test the AI application they are building, in the context of a brownfield or other environment,” Pearson says.

Just a few years ago, the thought of developers being able to access and make sense of data at the edge seemed like a pipe dream. But all that is changing. “The role of the [AI] developer is becoming more important than ever,” Pearson says. “We’ve got to make sure that they’re equipped to deal with this new environment through tools, products, and the information that’s going to help them build their solutions at scale.”

Preparing for the Future of AI Development

This is just the beginning of an era. As compute power and AI adoption increases, the use cases are going to expand to places that we never even imagined would be possible, Pearson explains, adding, “Developers will be right at the center of that, pushing the envelope with technology and creating some very interesting solutions that will probably blow all our minds.”

For developers, understanding the AI problems they’re trying to solve and equipping themselves with the skills to solve them is key to success.

“The future—today and tomorrow—is about open architectures and open ecosystems with much more flexibility, interoperability, and scalability that developers are going to need,” Pearson says. “AI is an opportunity for developers to be able to go and embrace a new world and do some exciting new things.”

AI is going to be the way of the future. And developers of all stripes can be a part of this exciting revolution by upskilling and using tools that streamline their workflows and unleash creativity.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Mobile Service Robots Built on Performance and Efficiency

In the classic TV series The Jetsons, the title characters employ a robotic housekeeper called “Rosey” who also serves as an impromptu home security system and companion. While The Jetsons takes place in a future that’s still four decades from now, forward-thinking technologies like Rosey are already becoming a reality.

A great example is RoomieBot, a mobile service robot that uses AI at the edge, machine vision, and natural language processing (NLP) to autonomously navigate and interact with humans in healthcare, retail, and hospitality settings. And it’s not a stretch to imagine these robotic AIs puttering around as home assistants in the next few years.

The future of ubiquitous mobile service robots requires modular hardware building blocks that blend performance, efficiency, and advanced software support.

The Anatomy of Modern Mobile Service Robots

Determining the best way to deliver these features begins with understanding the current state-of-the-art technology—both its advantages and limitations.

RoomieBot is designed around Intel® RealSense cameras, Intel® Movidius VPUs, and Intel® NUC platforms. This hardware suite provides a great foundation for early-stage mobile service robots with the vision and compute functionality required for:

  • Simultaneous Localization and Mapping (SLAM) to navigate autonomously
  • Visual detection algorithms to recognize people and objects (see the sketch after this list)
  • NLP for the voice user interface
  • Functions that control embedded motors and actuators
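
A minimal sketch of the visual-detection piece, capturing color frames from an Intel RealSense camera and running person detection through OpenVINO, might look like the following. The model file and its SSD-style output layout are assumptions, and SLAM, NLP, and motor control are out of scope here:

    import cv2
    import numpy as np
    import pyrealsense2 as rs            # Intel RealSense SDK Python bindings
    from openvino.runtime import Core

    # Start a 640x480 color stream from the RealSense camera.
    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(cfg)

    # Placeholder person-detection model in OpenVINO IR format.
    compiled = Core().compile_model("person-detection.xml", "AUTO")
    out = compiled.output(0)
    n, c, h, w = compiled.input(0).shape

    try:
        while True:
            frames = pipeline.wait_for_frames()
            frame = np.asanyarray(frames.get_color_frame().get_data())
            blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis]
            detections = compiled([blob.astype(np.float32)])[out]
            # Assumed SSD-style [1, 1, N, 7] output; boxes above a 0.5
            # confidence threshold would feed the navigation stack.
            people = detections[0, 0][detections[0, 0, :, 2] > 0.5]
    finally:
        pipeline.stop()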

But as organizations look to scale the production of these systems for mass market deployment, there are opportunities to upgrade the stack for improved performance per watt and streamlined integration.

Most notably, these can be achieved by adopting 12th generation Intel® Core processors, formerly known as “Alder Lake.”

The future of ubiquitous mobile service #robots requires modular #hardware building blocks that blend performance, efficiency, and advanced software support. @Advantech_USA via @insightdottech

High-Performance Processors Don’t Have to Break the (Power) Bank

These latest Intel Core processors deliver a significant performance improvement over the 8th generation processors found in earlier Intel NUCs.

The performance gains are the result of eight additional cores (12 total) on the new processors. But these aren’t just any cores: 12th gen Intel Core processors are the first to introduce a hybrid core architecture consisting of traditional Performance cores and a new class of Efficient cores. The Efficient cores are optimized for less computationally intensive workloads like system management and control tasks.

All that added horsepower comes at a minimal power tradeoff: the Intel® Core i7-12700TE processor features a base TDP of 35W, only modestly higher than the 28W TDP of the 8th generation mobile processor examined previously. For mobile service robots, this facilitates the execution of sophisticated edge AI stacks without instantly draining onboard batteries.

Smarter Integration Out of the Box

The ability to seamlessly integrate 12th gen Intel Core processors into a variety of different mobile service robot architectures is another crucial consideration for mass production and deployment.

For example, the MIO-4370 from Advantech, a leader in embedded and automation solutions, supports 35W 12th gen Intel Core Desktop processors with up to 16 hybrid cores and 24 execution threads. Designed to the 4” EPIC size of 165 x 115 mm (6.5” x 4.53”), the small form factor single-board computer provides OEMs and system integrators with a rugged edge intelligence module with all the I/O needed by modern mobile service robots, such as:

  • A variety of high-bandwidth I/O and serial ports that facilitate the integration of vision inputs, perception sensor suites, control signaling, programming, and debug
  • Support for 3x simultaneous interactive displays at up to 5K resolution
  • Networking and expansion that includes two 2.5 GbE interfaces with Time-Sensitive Networking (TSN) and Intel vPro® support
  • Three M.2 expansion sockets, including two M.2 2280 sockets (PCIe 4.0 and PCIe 5.0), supporting high-speed NVMe storage along with video transcoding, capture, or xPU acceleration cards
  • Additional components like a smart fan, discrete TPM 2.0 for security, and audio subsystem for voice communication

Because IoT edge use cases like mobile service robots consist of so many disparate applications and functions, the Advantech SBC has been pre-certified to work with Canonical’s distribution of Ubuntu Linux that enables containerized application development. Each container comes with its own system image, so mobile service robot programs can be coded free of dependencies or worries about other system requirements. This reduces development time and complexity, but also potentially speeds compliance efforts since changes to each container can often be certified individually once the whole system is approved.

Integration is further simplified by tools like Advantech iManager 3.0, which offers APIs for controlling I/O from the user OS. Advantech’s Edge AI Suite and WISE-DeviceOn go even further, providing a user-friendly SDK based on the Intel® OpenVINO Toolkit that lets engineers optimize and deploy deep-learning models to targets like 12th Gen Intel Core processors.

Mobile Service Robots: Out of the Factory and into the Family

In all, platforms like the MIO-4370 are more than just intelligent robotic controllers. They are building blocks for advanced mobile service robots that are higher performance, lower power, faster to develop, and more cost-effective than ever before.

Put simply, these integrated solutions are a precursor to scaling advanced mobile service robots for mass production. And subsequently, a future where having your own Rosey isn’t just reserved for a select few.

Thanks to highly integrated development environments, that future is closer than you might think.

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

IoMT Automates Vital Signs Technology

No one likes to schedule a medical appointment only to find an endless wait at a crowded doctor’s office or clinic. But with a critical lack of healthcare workers, those waits aren’t getting any shorter. The good news is IoMT (Internet of Medical Things) technology helps take the pressure off overburdened staff. AI-enabled self-service kiosks can deliver a better patient experience—both in and out of the clinical setting.

A shortage of medical professionals may be new in some parts of the world, but it’s a familiar problem in other markets.

“Asian hospitals have faced staffing issues for a long time,” says Jason Miao, Overseas Business Sales Director for Imedtac Co., LTD, a provider of IoMT technology solutions. “Here in Taiwan, it’s not uncommon for a doctor in a public hospital to see 100 patients in a three-hour shift.”

When you practice medicine at that kind of scale, one important truth becomes apparent: Anything that optimizes workflows in hospitals and clinics is a win.

As Imedtac’s Business Development Manager Beren Hsieh puts it: “It might not seem like a dramatic change, but if you can use technology to improve a process by a few minutes per patient, it has a huge impact on wait times and provider availability.”

“If you can use #technology to improve a process by a few minutes per #patient, it has a huge impact on wait times and provider availability.” – Beren Hsieh, Imedtac Co., LTD via @insightdottech

AI in Healthcare Makes It Easier to Measure Vital Signs

Case in point: Imedtac’s Smart Vital Signs Station, an IoMT alternative to the traditional vital signs measurement workflow.

The vital signs monitoring system is a self-service kiosk that measures a patient’s height, weight, temperature, heart rate, and blood pressure. If desired, it can be configured to record additional vital signs, such as blood oxygen levels.

The traditional method of measuring a patient’s vitals requires a trained individual to take readings with various devices and manually record the results. The Imedtac kiosk, on the other hand, is a one-stop, self-service, automated solution that can save valuable time and resources.

Patients begin by identifying themselves to the system, which is connected to the hospital’s health information system. The station then measures the relevant health data, such as height, weight, and temperature, providing guidance as needed through a simple-to-use interface. It automatically uploads the results to the cloud so the data can be securely integrated with the patient’s electronic medical record and personal health record.
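
As a rough illustration of that hand-off, the kiosk’s readings might be posted as a small JSON document keyed to the patient’s record. The endpoint, field names, and credential below are all invented; a real deployment would target the hospital’s HIS/EMR interface, often via HL7 or FHIR.

    import json
    from urllib import request

    reading = {
        "patient_id": "P-001234",        # from the kiosk's ID step
        "height_cm": 172.0,
        "weight_kg": 68.4,
        "temperature_c": 36.7,
        "heart_rate_bpm": 72,
        "blood_pressure_mmhg": {"systolic": 118, "diastolic": 76},
    }

    req = request.Request(
        "https://his.example/api/vitals",            # placeholder endpoint
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"}, # elided credential
        method="POST",
    )
    # request.urlopen(req)  # left commented: the endpoint is illustrative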

The entire process takes just a few minutes. Crucially, healthcare providers don’t need to be involved at all—freeing them up to perform other duties, and preventing errors caused by the manual transcription of vitals data (Video 1).

Video 1. IoMT powers vital signs technology workflow. (Source: Imedtac)

IoMT Devices Support Flexibility and Stability in Rural Thailand

Solutions like Imedtac’s need to operate in a wide variety of settings, from hospitals, clinics, and healthcare organizations to neighborhood pharmacies, gyms, and even grocery stores. Understandably, there isn’t always a lot of support or oversight available. For this reason, they are designed for flexibility, stability, and ease-of-use. The company’s experience in northern Thailand is a good example.

Imedtac partnered with Overbrook Hospital in Chiang Rai, a small city in a rural part of the country that serves as a medical hub for the surrounding communities. It was a challenging deployment. Overbrook is a busy hospital, where doctors and nurses are stretched thin, and IT resources are not as readily available as in larger urban centers. The hospital’s patient base presented an additional issue because it included many elderly patients as well as people unaccustomed to using technology in their daily lives.

Imedtac worked with Overbrook’s administrators to develop an optimized patient intake workflow—one tailored to the hospital’s needs. To cope with limited English proficiency in the region, they added a Thai language user interface. And to simplify the authentication process, they included support for Thai national ID cards. Imedtac’s developers then integrated the kiosks with the hospital’s legacy IT systems.

The results were better than expected. As it turned out, the majority of patients caught on to the vital signs stations very quickly, and were able to use them without difficulty. The hospital’s nurses, who no longer had to measure each patient individually, were free to assist those who required extra help.

The deployment in Overbrook has also proven to be very reliable—important in any hospital setting, of course, but especially in one where IT resources are limited. Here Miao credits Intel® processors: “These vital signs kiosks run 365 days a year, pretty much nonstop, so they have to be built on something dependable. Intel provides an extremely stable and powerful platform for IoMT applications.”

The Future of Patient Care

IoMT technology is already providing some much-needed relief to healthcare professionals. Going forward, it may be able to directly improve health outcomes for patients as well. “Near term, we’re already starting to see solutions like smart wards, which use edge AI and real-time analytics to optimize inpatient workflows and improve medication safety,” says Hsieh.

Further ahead, healthcare administrators and systems integrators will turn to edge analytics and AI to enhance remote patient monitoring, critical care, and surgical medicine.

“In the future, this technology will be used to integrate data streams for ICU and OR staffs, giving them the information they need when they need it,” says Miao. “And doctors will rely on AI to help them make better decisions about patient care.”

Many challenges remain for the healthcare industry, both now and in the coming years. But thanks to advances in IoMT technology, the prognosis is improving.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

This article was originally published on May 19, 2022.