The Future of Access Control with IoT Security

Protecting against internal and external security threats can feel like a never-ending challenge. Deploying safeguards against unauthorized access across dynamic, multi-platform environments can be costly and labor-intensive.

As a result, organizations look for stronger access control that can integrate seamlessly with existing systems and processes, and support greater operational efficiency. Deploying IoT security solutions makes this possible.

One example is Vienna International Airport—the biggest aviation hub for travel between Central and Eastern Europe—serving up to 31 million passengers annually. Like most airports, Vienna International undergoes continual expansion, construction, and maintenance. And with approximately 80,000 employees, subcontractors, and visitors each year, secure and centralized access control is essential.

The airport needed to upgrade its outdated system over a one-year transition period—without disrupting operations and workflows—not an easy undertaking. To address these challenges, the airport operations team turned to FAST Systems, a pioneer in the field of digital safety and security technologies.

“The task was to provide a solution that could manage the access control for 30,000 active identities, including internal airport employees and external personnel doing construction work or services,” says Carsten Tschritter, CTO at FAST Systems. “Our automation methodology involved aligning with processes and workflows, and facilitating the automation of most access security elements without interrupting airport operations.”

To accomplish this, the company deployed its Flow4Secure Process Automation and Workflow Management platform. This provided a customized and fully integrated Identity and Access Management Solution (IDMS) that complies with EU General Data Protection Regulation (GDPR) and Aviation Security Regulations.

AI-Powered Access Control

The AI-powered platform Flow4Secure was developed after FAST Systems saw organizations struggling with a range of security issues. Most prominently: bringing multiple access control operating systems and sensitive personal data together.

Managing identities for an airport requires insight into tens of thousands of personal data records and control of thousands of doors across the facilities. This challenge can overwhelm users and make it extremely difficult to maintain regulatory compliance and internal Standard Operating Procedures (SOPs).

The ability to manage three different Access Control Systems (ACS) through one middleware application enables security operators to use workflow-driven applications to process ID cards and vehicle badges with automated assignment to external companies and internal departments. The solution automatically allocates access areas during the application process and activates access rights when issuing ID cards in all connected ACS.
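As a rough illustration of that middleware pattern—one badge-issuing workflow fanning out to every connected ACS—consider the sketch below. All names and interfaces here are hypothetical, not FAST Systems’ actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BadgeRequest:
    identity_id: str
    company: str             # external contractor or internal department
    access_areas: List[str]  # allocated during the application workflow

class AccessControlSystem:
    """Stand-in for one of the three connected ACS."""
    def __init__(self, name: str):
        self.name = name

    def activate(self, req: BadgeRequest) -> None:
        # A real integration would call the vendor's API here.
        print(f"{self.name}: activated {req.identity_id} for {req.access_areas}")

CONNECTED_ACS = [AccessControlSystem(n) for n in ("ACS-A", "ACS-B", "ACS-C")]

def issue_badge(req: BadgeRequest) -> None:
    """Issuing one ID card propagates access rights to all connected systems."""
    for acs in CONNECTED_ACS:
        acs.activate(req)

issue_badge(BadgeRequest("ID-1042", "BuildCo GmbH", ["Terminal 1", "Apron"]))
```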

“It was clear we wanted to have a vendor-agnostic integration platform that connects all the unconnected systems,” Tschritter explains. “By merging all Access Control Systems into one platform, Vienna airport is able to use and analyze personal information to provide better security by applying efficient and well-defined workflows.”

The core idea behind the solution is to simplify operations for end users, external subcontractors, and visitors by offering intuitive web portals—control elements adapted to the tasks of each user group. With its simplified user interface, Flow4Secure makes it easy to see all the relevant information about identities and ACS data points in one place to quickly detect and address any issues.

The Flow4Secure system collects, combines, and correlates data from various disparate systems and devices such as card readers, printers, scanners, cameras, and more. The open and scalable system architecture allows continuous development of new applications supporting Vienna airport’s desire to aggregate and digitize all operations going forward.

“It’s a tough job to manage companies, organization structures, orders, personal data, and vehicles in a hierarchy, which requires defining the allocation of access rights,” says Tschritter. “Our task was to provide a user-friendly and process-driven interface that included all necessary information to ID office staff for 100% controlled operations.”

In the case of the airport, the Flow4Secure solution provides a high-availability platform with staging for production, testing, training, and integration purposes. The integration of Access Control and all relevant systems can run in a test environment before being deployed to production systems.

The Power of Partnerships Uplevels Security Standards

All of this is made possible through the company’s powerful partnerships with Intel® and Dell. “The relationship with Dell started when Intel was conducting a trial with a railway station in Berlin,” says Bernd Drescher, Vice President of Sales at FAST Systems. “The Dell team was quite enthusiastic to see how we were able to address this customer’s unique requirements. Now we are in the final process of setting up new Dell appliances that integrate Flow4Secure for process-driven solutions like Yard Management, Asset Tracking, and Visitor Management.”

On the technical side, of course, it’s essential to have a reliable hardware platform. “For us, it comes from this constellation of working together with Dell on the graphics side, our 3D GIS map system,” adds Tschritter. “And on the CPU side, it’s Intel, where we have access to R&D and software performance testing resources as needed.”

The Future of Access Security

Having integrated security systems like Flow4Secure—built on Dell, powered by Intel—will become more important as IoT adoption continues, Drescher explains: “The needs for integration platforms are growing at the same pace as the number of connected devices.”

Organizations need to move toward access control system integration and cross-platform interoperability if they want to be able to monitor and improve their operations easily and effectively.

Tschritter predicts that with the rise of 5G, higher bandwidth, and edge computing, this advantage will only grow in the future.

 

This article was edited by Leila Escandar, Editorial Strategist for insight.tech.

This article was originally published on June 2, 2022.

The AI Journey: Why You Should Pack OpenShift and OpenVINO™

AI can be an intimidating field to get into, and there is a lot that goes into deploying an AI application. But if you don’t choose the right tools, it can be even more difficult than it needs to be. Luckily, the work that Intel® and Red Hat are doing is easing the burden for businesses and developers.

We’ll talk about some of the right ways to deploy AI apps with experts Audrey Reznik, Senior Principal Software Engineer for the enterprise open-source software solution provider Red Hat, and Ryan Loney, Product Manager for OpenVINO Developer Tools at Intel®. They’ll discuss machine learning and natural language processing; using the OpenVINO AI toolkit with Red Hat OpenShift; and the life cycle of an intelligent application.

Why are AI and machine learning becoming vital tools for businesses?

Ryan Loney: Everything today has some intelligence embedded into it. So AI is being integrated into every industry—industrial, healthcare, agriculture, retail. They’re all starting to leverage the software and the algorithms for improving efficiency. And we’re only at the beginning of this era of using automation and intelligence in applications.

We’re also seeing a lot of companies—Intel partners—who are starting to leverage these tools to assist humans in doing their jobs. For example, a technician analyzing an X-ray scan or an ultrasound. And, in factories, using cameras to detect if there’s something wrong, then flagging it and having somebody review it.

And we’ve even started to optimize workloads for speech synthesis, for natural language processing, which is a new area for OpenVINO. If you go to an ATM and have it read your bank balance back to you out loud, that’s something that’s starting to leverage AI. It’s really embedded in everything we do.

How are AI and ML starting to be more broadly adopted across industries?

Audrey Reznik: When we look at how AI and ML can be deployed across the industry, we have to look at two scenarios.

Sometimes there’s a lot of data gravity involved in an environment and data cannot be moved off-prem into the cloud, such as with defense systems or government—they prefer to have their data on-prem. So we see a lot of AI/ML deployed that way. Typically, people are looking to a platform that will have MLOps capability, and they’re looking for something that’s going to help them with data engineering, with model development, training/testing the deployment, and then monitoring the model.

If there aren’t particular data security issues, they tend to move a lot of their MLOps creation and delivery/deployment to the cloud. In that case they’re going to look for a cloud service platform that has MLOps available so that they can look at, again, curating their data, creating models, training and testing them, deploying them, and monitoring and retraining those models.

In both instances what people are really looking for is something easy to use—a platform that’s easy for data scientists, data engineers, and application developers to use so that they can collaborate. And the collaboration then drives some of the innovation.

Increasingly, we’re seeing people use both scenarios, so we have what we call a hybrid cloud situation, or a hybrid platform.

What are some of the biggest challenges with deploying AI apps?

Ryan Loney: One of the biggest challenges is access to data. When you’re thinking about creating or training a model for an intelligent application, you need a lot of data. And you have to factor in having a secure enclave where you can get that data and train on that data. You can’t necessarily send it to a public cloud—or if you do, you need to do it in a way that’s secure.

One of the things I’m really impressed with from Red Hat and from OpenShift is their approach to the hybrid cloud. You can have on-prem managed OpenShift or you can run it in a public cloud—and still really give the customer the ability to keep their data where they want to keep it in order to address security and privacy concerns.

Another challenge for many businesses is that when they’re trying to scale, they have to have an infrastructure that can increase exponentially when it needs to. That’s really where I think Red Hat comes in—offering this managed service so that they can focus on getting the developers and data scientists access to the tools that they would use on their own outside of the enterprise environment, and making it just as easy to use inside the enterprise environment.

Let’s talk about the changes that went into the OpenVINO 2022.1 release.

Ryan Loney: This was the most substantial set of feature changes since we started in 2018, and it was really driven by customer needs. One key change is that we added hardware plugins, or device plugins. There’s also support for recently launched discrete graphics, so GPUs can be used for deep-learning inference. And we’ve added features like automatic batching—customers can just let OpenVINO automatically determine the batch size for them.

We’ve also started to expand to natural language processing, as I mentioned before. So if you ask a chatbot a question: “What is my bank balance?” And then you ask it a second question: “How do I open an account?” both of those questions have different sizes—the number of letters and number of words in the sentence. OpenVINO can handle that under the hood and automatically adjust the input.
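To make that concrete, here is a minimal sketch of what dynamic input shapes and automatic device selection look like in the OpenVINO 2022.1 Python API. The IR file name and the input tensor name are hypothetical:

```python
from openvino.runtime import Core, PartialShape

core = Core()
model = core.read_model("chatbot_model.xml")  # hypothetical IR file

# Mark the sequence-length dimension as dynamic (-1) so questions of
# different lengths can be fed without manual padding or re-reshaping.
model.reshape({"input_ids": PartialShape([1, -1])})

# "AUTO" lets OpenVINO pick the device; the THROUGHPUT hint enables
# features such as automatic batching on supported hardware.
compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
```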

What has been Red Hat’s experience using OpenVINO?

Audrey Reznik: Before OpenVINO came along, a lot of that processing would have been done on dedicated hardware, which can be expensive. The advent of OpenVINO changed the paradigm in terms of optimizing a model, and in terms of quantization.

I’ll speak to optimization first. Why use a GPU if you can say, “You know what? I don’t need all the different frames in this video in order to get an idea of what my model may be looking at.” Maybe my model is looking at a pipe in the field and we’re just checking to make sure that nothing is wrong with it. Why not just reduce some of those frames without impacting the ability of your model to perform? With OpenVINO, you can add just a couple of little snippets of code to get this benefit, and not use the extra hardware.
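The frame-reduction idea can be shown in a few lines, assuming an OpenCV capture loop and a placeholder inference call—both illustrative, not actual product code:

```python
import cv2

def run_inference(frame):
    """Placeholder for the actual model call (e.g., an OpenVINO request)."""
    ...

cap = cv2.VideoCapture("pipeline_inspection.mp4")  # hypothetical video source
FRAME_STRIDE = 5  # analyze every 5th frame; tune to what the model tolerates

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Skipping frames cuts compute roughly by the stride factor, often
    # without hurting results on slow-changing scenes like a static pipe.
    if frame_idx % FRAME_STRIDE == 0:
        run_inference(frame)
    frame_idx += 1
cap.release()
```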

The other thing is quantization. With machine learning models there may be a lot of numerics in the calculations. I’m going to take the most famous number that most people know about—pi. It’s not really 3.14; it’s 3.14 and many digits beyond that. Well, what if you don’t need all that precision? What if you can be just as happy with the one value that most people equate with pi—that 3.14?

You can gain a lot of benefit for your model, because you’re still getting the same results, but you don’t have to worry about cranking out all those digit points as you go along.

For customers, this is huge because, again, we’re just adding a couple of lines of code with OpenVINO. And if they don’t have to get a GPU, it’s a nice, easy way to save on that hardware expense but get the same benefits.
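As a toy illustration of the quantization concept—this is not the actual OpenVINO quantization tooling, just the arithmetic behind the idea:

```python
import numpy as np

weights = np.array([0.314159, -1.2718, 0.5772, -0.0042], dtype=np.float32)

# Map float32 values onto 8-bit integers with a single scale factor.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# Dequantizing shows a small precision loss traded for 4x smaller weights
# and cheaper integer arithmetic.
restored = quantized.astype(np.float32) * scale
print(quantized)  # [  31 -127   58    0]
print(restored)   # close to, but not exactly, the original values
```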

What does an AI journey really entail from start to finish?

Audrey Reznik: There are a couple of very important steps. First we want to gather and prepare the data. Then develop the model or models, and integrate the models in application development. Next, model monitoring and management. Finally, retraining the models.

On top of the basic infrastructure, we have our Red Hat managed cloud services, which are going to help take any machine learning model all the way from gathering and preparing data—where you could use our streaming services for time-series data—to developing a model—where we have the OpenShift data service application or platform—and then to deploying that model using source-to-image. And then model monitoring and management with Red Hat OpenShift API management.

We also added in some customer-managed software, and this is where OpenVINO comes in. Again, we can develop our model, but this time we may use Intel’s oneAPI AI analytics toolkit. And if we wanted to integrate the models in app development, we could use something like OpenVINO.

And at Red Hat, we want to be able to use services and applications that other companies have already created—we don’t want to reinvent everything. For each part of the model life cycle we’ve invited various independent software vendors to come in and join this platform—a lot of open source companies have created really fantastic applications and pieces of software that will fit each step of the cycle.

The idea is that we invite all these open-source products into our platform so that people have choice—they can pick whichever solution works better for them in order to solve the particular problem they’re working on.

Ryan, how does OpenVINO work with Red Hat OpenShift?

Ryan Loney: OpenShift provides this great operator framework for us to directly integrate OpenVINO and make it accessible through a graphical interface. Once I have an OpenVINO operator installed, I can create what’s called a model server. It takes the model or models that my data scientists have trained and optimized with OpenVINO, and exposes an API endpoint that you can connect to from your applications in OpenShift.

The way the deployment works is to use what’s called a model repository. Once the data scientists and the developer have the model ready to deploy, they can just drop it into a storage bucket and create this repository. And then every time an instance or a pod is created, it can quickly pull the model down so you can scale up.

Even if you don’t perform the quantization that Audrey mentioned earlier, OpenVINO does some things under the hood—like operation fusion and convolution fusion—that give you a performance boost, reduce latency, and increase throughput, but don’t impact accuracy. These are some of the reasons why our customers are using OpenVINO: to squeeze out a little bit more performance, and also to reduce resource consumption compared to deploying with a standard deep-learning framework alone.
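As a rough sketch of what calling such a model server endpoint might look like from an application—assuming a TensorFlow-Serving-style REST prediction API, with a hypothetical route and model name:

```python
import requests

# Hypothetical OpenShift route and model name exposed by the model server.
ENDPOINT = "http://ovms-demo.apps.example.com/v1/models/defect-detector:predict"

# Input must match the deployed model's expected shape; this is dummy data.
payload = {"instances": [[0.1, 0.2, 0.3, 0.4]]}

resp = requests.post(ENDPOINT, json=payload, timeout=10)
resp.raise_for_status()
print(resp.json()["predictions"])  # field name per the TF-Serving-style API
```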

What’s the best way to get started on a successful AI journey?

Audrey Reznik: One of my colleagues wrote an article that said the best data science environment to work on isn’t your laptop. He was alluding to the fact that when they first start out, usually what data scientists will do is put everything on their laptops. It’s very easy to access; they can load whatever they want to; they know that their environment isn’t going to change.

But they’re not looking towards the future: How do you scale something on a laptop? How do you share that something on the laptop? How do you upgrade?

But when you have a base environment, something everybody is using, it’s very easy to upgrade that environment, to increase the memory, increase the CPU resources being used, add another managed service. You also have something that’s reproducible. And that’s all key, because you want to be able to take whatever you’ve created and then be able to deploy it successfully.

So if you’re going to start your AI journey, please try to find a platform. Something that will allow you to explore your data, to develop, train, deploy, and retrain your model. Something that will allow you to work with your application engineers. You want to be able to do all those steps very easily—without using chewing gum and duct tape in order to get to production.

Related Content

To learn more about AI and the latest OpenVINO release, read AI Developers Innovate with Intel® OpenVINO 2022.1 and listen to Deploy AI Apps with Intel® OpenVINO and Red Hat. Keep up with the latest innovations from Intel and Red Hat by following them on Twitter at @Inteliot and @RedHat, and on LinkedIn at Intel-Internet-of-Things and Red-Hat.

 This article was edited by Erin Noble, copy editor.

The Power of Omnichannel Experiences with meldCX and Intel®

Listen on:

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Customer interactions have gone digital. Whether shopping online, ordering food, or checking into a hotel—people expect the same level of convenience online or in person. This creates pressure for retailers to implement new technologies and transform physical spaces. But done correctly, these changes can have huge benefits beyond the customer experience.

For instance, imagine if retailers could use new digital solutions in stores to track and predict every touchpoint in the customer journey just as they would online? With companies like meldCX and Intel®, it’s becoming more and more possible. In this podcast, we talk about the evolution of customer experiences, what retailers can do to meld physical and virtual stores together, and what a successful omnichannel experience looks like.

Our Guests: meldCX and Intel®

Our guests this episode are Stephen Borg, Co-Founder and CEO of AI technology company meldCX, and Chris O’Malley, Director of Marketing for the Internet of Things Group at Intel®.

At meldCX, Stephen works with businesses to create premier customer experiences powered by AI at the edge. Previously, he was CEO of device manufacturer AOPEN, where he remains a board member.

Chris, who has been with Intel for more than 20 years, focuses on technology and solutions used in retail, banking, hospitality, and entertainment.

Podcast Topics

Stephen and Chris answer our questions about:

  • (2:54) The evolution of customer experiences
  • (5:28) How retailers are adapting to these changes
  • (7:33) Top retail pain points when working with new technologies
  • (12:51) What a successful retail omnichannel looks like
  • (14:47) How to gain more value from your business
  • (19:17) Making sense of available retail data
  • (21:22) The importance of a partner ecosystem
  • (26:54) Future-proofing your technology investments

Related Content

For the latest innovations from meldCX, follow them on LinkedIn.

 

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

 

Transcript

Christina Cardoza: Hello, and welcome to the IoT chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about omnichannel customer experiences with Stephen Borg from meldCX, and Chris O’Malley from Intel. Hey guys, thanks for joining the podcast today.

Chris O’Malley: Thank you. How are you doing today?

Christina Cardoza: Great. Great.

Stephen Borg: Thank you. Thanks for having me.

Christina Cardoza: Yeah, of course. Before we get started, I think our listeners would love to hear a little bit more about you, and who we’re about to speak to. So Stephen, I’ll start with you. What can you tell us about yourself and why you started meldCX?

Stephen Borg: I’m the Co-Founder and CEO of meldCX. I—at the time I was working for a few large groups consulting in this area, and we actually designed meldCX on a napkin seven years ago, but the tech was just not there to build it. So we decided around five years ago, when we saw some of the tech emerging that was relevant to us, to really start the process. We really built it to—and that’s what it stands for—is to take legacy and meld it with current technology and create great experiences. And that’s really what we do. We realize our customers have that—some legacy debt that they have to deal with, and we take that information and bring it out and really create an experience for customers.

Christina Cardoza: And Chris, welcome back to the podcast. For our listeners, why don’t you refresh their memories about what you’re up to at Intel these days.

Chris O’Malley: Sure. So my name is Chris O’Malley. As Christina mentioned, I’m at the Intel Corporation. I am a Marketing Director for the Internet of Things group. I focus on essentially the technology used in retail, hospitality, banking, and entertainment segments. Primarily, you know, we’re thinking of, how does technology drive experiences in stores? And our whole goal is to support technologies like Stephen’s company, meldCX, works on.

Christina Cardoza: Great. And I should mention that insight.tech, the program and the IoT Chat podcast, are owned by Intel. So it’s great to have somebody from the company representing this conversation today.

Stephen, I love how you mentioned in your intro that when you started the company, that technology wasn’t there, but now we’re seeing the technology rapidly advance today. And in addition to that, over the last couple of years these customer experiences in retail, hospitality—all of these different industries have just completely changed, and have had to change. But, you know, it’s great that we have this technology to be able to do that. Not everybody knows how to utilize the technology or how to change—what’s the right change for them. So why don’t you start by telling us a little bit about what you’re seeing in this customer experience, how you were seeing this evolve across different industries.

Stephen Borg: It’s interesting. I think the whole COVID situation, multiple lockdowns, has really accelerated the curve, right? We’ve had customers and we talk to them, and we still got the same issues where they need to increase the level of service without as many resources, either being budgetary, or they can’t get access to the correct staff. So you’ve got this expectation of needs increasing, while you have less ability to facilitate those customers. So what we’re finding is that when customers do venture out—and this is feedback from our customers or our key enterprise customers—when their customers do venture out, they expect a higher degree of service. They expect the demonstration of cleanliness, right? That they’re respecting the situation and following process. And they also expect a higher degree of engagement.

So how do you do that while either reducing costs or with less resources? And we have customers that can’t even hire to the needs they have to facilitate. So what we’re seeing is that they’re starting to turn to technology that takes the transactional elements or these elements that use up resources that are not customer facing, and redirecting those resources to creating great experiences. And we’re seeing that in hotels—from seamless check-in and check-out; in grocery, transactional items that slow customers down in either self-checkout or in actual checkout—you know, things like processing fresh produce. And we’re seeing that really across the board: what are the opportunities to reduce friction, create automation, but increase engagement?

Christina Cardoza: You mentioned the pandemic, how that sort of accelerated things for businesses. And I imagine a lot of these businesses were forced into a digital transformation they weren’t quite prepared for. So they’ve had to make a lot of changes on the fly to stay competitive. And now that we’ve had some time to look back at those changes that were made, Chris, I’m wondering from your perspective how well have these industries, like Stephen just mentioned—retail, grocery stores, hospitality—how well have they been dealing with the changes that need to be happening and taking on this digital transformation?

Chris O’Malley: You know what? It’s a, it’s a mixed bag, to be honest. You know, all the trends or the challenges that retail was facing prior to COVID, they still exist. You know, three years ago we were talking about frictionless—the millennials, digital natives, have been growing. They don’t like to talk to people. So it’s been self-checkout, wayfinding key, that type of thing—engaging with technology as opposed to humans. There’s been inventory and supply challenges. There’s been this—you know, there’s been some increasing theft, there’s been the need for inventory. But the reality is they were, we were, kind of at the slow level of growth. You know, it was nice to have this technology, but it wasn’t absolutely necessary.

What we found is the people that, or the companies that, started to invest in this type of technology prior to COVID—now that COVID’s hit us, the ones who invested previously are doing really well. The ones who didn’t are trying to build this entire technology structure without any previous investment. And they’re struggling greatly with it. You know, the biggest thing Stephen mentioned—it’s amplified or accelerated the trends. That’s what we’re seeing. It’s absolutely accelerated. And I think it’s because of this worldwide labor shortage. There’s a lot of jobs that they literally cannot hire for, or if they can hire for it, they have to hire at a wage rate that quite frankly they can’t sustain profitably. So they’re looking at, how can I automate, how can I use insights of computer vision to give people the experience they want?

After, you know, during COVID, many companies went in with almost no one-to-one digital contact with their customers. You know, for two years we did online ordering, we did mobile ordering, we did curbside pickup. So companies now have this massive relationship with customers digitally. If they don’t know how to deal with that data, if they don’t know how to personalize with that data, they’re really struggling. So we’re really seeing the companies that invested prior to COVID taking off, and the other ones are playing come-from-behind, really struggling to put the IT infrastructure in place, how to use computer vision and things like that, to make it really valuable.

Christina Cardoza: Stephen, I imagine you’re working very closely with a lot of these businesses that are struggling with these things, like infrastructure or to adapt to these changes. What have been the top pain points or challenges that you’ve been seeing, and how can they address these now going forward? Or how can they implement these new technologies and work with meldCX to go down a better path?

Stephen Borg: Yeah, I think there’s a few areas. One, when we started out with meldCX, I mentioned earlier that we took a little bit of a pause and made sure the technology was available. The reason for that is when we went out with meldCX, we wanted to create a solution out of the box where you could simply plug and play. You don’t need data scientists, you don’t need a massive team to stand up what we see as the most common aspects of computer vision. So, analytics tracking, inventory, those types of things you can just plug and play out of the box and get going. So that was one of the first things we wanted to do.

And then, secondly, we wanted to create a method where you can take—and we found a lot of customers in this state where they had to furlough some of their team members during the pandemic and couldn’t get them back—so we found a lot of customers that were, say, three-quarters through a project where they had existing investment in some models. You can take those existing models and pump them down through meld, because we use OpenVINO, and mix them with other models we have to complement them. So we’ve got this thing called a mixer, and it blends models together and gives you an outcome.

And then, thirdly, we actually created a service that if you have a specific use case or a problem you’re trying to solve, we can go ahead and create that model for you. We have a synthetic data lab because we face the same issues of getting content or getting the right video to create these scenarios. So we have a synthetic data lab.

So what we’re finding is that now that the level of engagement is cross business, customers are very invested, and we’re finding that we have—unlike the past, where we might have an IT stakeholder or a marketing stakeholder—we have everyone at the table, because they see the benefit and they really drill down into understanding what they need. So we sort of advise customers to start with computer vision, start with out-of-the-box modules and then go from there, because they really don’t, most of them don’t comprehend the power of it. And what we try and explain to customers is that this is an amplification of your existing capability, right? So you put your machine vision in, or your models, and it either feeds you data, or automates a function, or creates a cause and effect to enable your staff to do more with less. So really we say, start at what’s out of the box, experiment with it. And then we have our team work with them to try and really drill down into that problem-solving phase for future growth.

Christina Cardoza: Chris, is there anything you wanted to add about challenges that you’re seeing, and how tools and technologies today can address those?

Chris O’Malley: You know, yeah. The big thing, and I think Stephen addressed it already, is especially with the name meldCX, melding the old with the new technology. So computer vision is great. It’s a great technology and it really can, you know, figure out how to deploy the people where you need them most. But what’s exciting about some of the technology that meld offers is, say you don’t want to do the full investment into a new camera setup right now. You may have security cameras already there. You know, meldCX could take those feeds right away, load some models on that, and get basic data just from the get-go. So you don’t have to do this massive investment to start to get data.

Now, what we find is once customers actually start to realize that, and they see what the computer vision can provide, then they’re interested in investing further. And they say, “Wow, you know, instead of just looking at operational-type stuff—is there a liquid on the floor that I need to clean up? As you know, 30 people enter a bathroom and now I need to go clean the bathroom. And that slot machine’s been used 150 times, I want to clean it now.” Or something like that. They start to add it to more and more. They see the power of it, so they start to add more and more and more. And what I find about that is it’s great, because this is, this type of technology, is not coming to an end.

We’re at the beginning right now. You know, I wouldn’t be Intel if I didn’t say Moore’s law, you know, we’re doubling the performance of our technology every two years. The corollary to that is that technology becomes cheaper every single year, too. I mean, technology is reduced by more than half every two years. So you start adding compute to everything. And when you start adding compute to everything, there’s an immense amount of data. So you need technology like this to start making insights, those actionable insights that are valuable for your company.

Christina Cardoza: Now, we’ve been discussing sort of how these physical spaces can transform themselves. Grocery stores, restaurants we’ve mentioned, but it’s really a bigger piece to this, is the online aspect of it. I think sometimes we tend to think of e-commerce and retail, physical retail, as two separate things, but today they’re sort of merging and melding, like you explained. So Stephen, when we talk about a retail omnichannel experience, what does this really mean? And what is it touching?

Stephen Borg: Yeah. And I think it’s really touching on every aspect of your customer’s journey, right? I think there’s been a lot of focus on mobile, a lot of focus on web, but connecting mobile to web in a single, seamless experience has not been something that we’ve seen when it comes to connecting those two to an in-store or a physical contact point. And often we find they’re completely different experiences, right?

So what we’re finding is by using data or connecting those dots—one thing about meldCX, we not only have our base that gives you computer vision, but we have these modules that you can load on existing devices, legacy devices, and new, that allows you to connect those dots.

So for example, connecting that computer vision to an event that occurs locally—it could be providing access to a digital locker based on your token, or having that seamless experience of you’re doing it anonymously, but having your last order come up on the screen. It doesn’t know who you are, but it just knows who your last order—what your last order is. Or when you go to a self-service device and it knows you’ve used it multiple times, it doesn’t go through all the instructions again. It just goes to your last left-off point. So all of these little, subtle things that are done anonymously but create convenience and context is what we’re starting to see. And it seems to be best practice, and we’re seeing some good results from it.

Christina Cardoza: So how would you suggest businesses get the best of those—both worlds, and really connect the dots? How do you start on this omnichannel journey and make sure that you’re providing the right value to each platform, and those platforms are all connected together so that you’re getting even more value into your business?

Stephen Borg: I think we start with, and as Chris was saying, we start with simple measurement—understanding your environment the best you can and trying to connect those contact points.

So we had a recent customer have a scenario that no one anticipated from our data. They’re a large electronics retailer, and they sell CDs and DVDs. And you think that’s a dwindling area, right? Netflix and online streaming. You think that would be something that a retailer wouldn’t give much credence to. But what we’re finding is, especially during holiday season, when people travel, in those travel locations, that they might not have the data that is required, or they might not have the setup in an Airbnb that is required for them to use their own Netflix or their own Hulu. And we’re finding that people will go into these destinations and look for CDs and DVDs. And typically won’t buy them, they’ll go buy something like an Xbox or an Apple mini or gaming. So we found that in this client, although all the online data indicated they weren’t interested in these things unless they had that destination or that base still there, they wouldn’t cross sell to these other items, right? And when they reduced it, you thought there wouldn’t be an impact because it’s not a high-selling area, but when they reduced it, their peripheral sales reduced. And without that data, they would never know that.

So they were relying on online data to dictate what behavior is in-store. And that simple measurement task indicated that if they don’t keep this area, they don’t get peripheral sales, because it’s actually a destination for browsing, especially when going to regional areas. Or those customers are destination shopping as they’re about to go away. And we actually increased the sales in one and it increased peripheral sales. So that type of data, you wouldn’t pick it up in sales data. You wouldn’t pick it up in online data, but you’re picking it up by using that anonymous tracking data, hotspot data, and associated sales data.

Chris O’Malley: If I could interject here, what he references is pretty interesting. So, in the last 10, 15 years, advertisement online has been really eating up a lot of the market share. And a lot of it’s been because you’ve been able to track behaviors. If you showed an advertisement to you, Christina, and you clicked on it, they’d know that it had some sort of an influence and they could pay for that.

When you go in-store, there was none of that information. There was no attribution. There was no success. Did my digital advertisements in-store do anything? But with the technology that meld is offering, or computer vision is offering, you now have that ability to figure out, is my campaign working? Was it actually influencing the people? Were they happy with it? Were they unhappy with it? Were they engaged with it? And you can change that. That’s never really existed before until you have the advent of computer vision.

And that’s pretty powerful, especially for a retailer, because you can now start to monetize some of those things as well, but you can also figure out, how do you change your display? How do you change the technology you’re using? How do you change associate activities? All those different things, because you’re going to pick up all this powerful data, which you never had access to before. You know, I’m in marketing, we always say 50% of the money that we spend is useless, 50% is valuable. We just don’t know which one is which. With the technology that Stephen has, we’re starting to be able to figure that in-store. What’s valuable? What’s not. And then you can really start to target these things that make them a lot better.

Stephen Borg: For example, we have another retailer that’s taken their front-end bay, or the bay that you see when you walk in, and they monitor it and monetize it based on you, the customer, touching the product that’s on the shelf. So instead of paying to be in that bay, now they pay for every click or every touch of individual product that’s in that bay. And then monetizing like a website.

Christina Cardoza: We’re talking about massive amounts of data that we can collect now—customer behavior, how they’re moving, and what they’re doing online—connecting those dots together. Now that we know how to collect all of this data and what we want to be collecting and looking at, how can we make sure that we’re making sense of the data? How do we analyze and process this data and make sure that it’s accurate to make more informed decisions? Chris, I’ll start with you.

Chris O’Malley: Sure. Yeah. You know, that’s the—it’s the mixed blessing of data. You know, there’s a huge amount of data out there that goes unused, and it’s very valuable. So I think one of the first things that has to do with any retailer, and a lot of legacy retailers in particular have very, like, siloed data. So, the POS data is here and it never shares it; there’s kiosk data in-store here; there’s mobile data over here; there’s online data over here. The data is never shared between them all. That doesn’t do a lot of valuable—we desire personalization. You only know a little snippet about them in each of the separate activities.

When you move to a modern kind of an edge, or a modern microservices-based architecture, where you have kind of a shared data or a data lake, and every single one of these experiences can access that same data, that’s when you can start to make sense of all of that data. The other thing that’s incumbent is you’ve got to make sure that you standardize the data. You know, how do you store the data so that every app that you’re running on top can kind of access the same set of data, understands exactly the importance of that data, and then figure that out?

And then the other thing that frankly starts to happen—and Stephen’s already referenced this a little bit—with big data analytics, and we’re at the advent of that as well, you could start to look at pieces of data that outwardly to us make no sense. There’s no correlation, there’s no relation. But if you see it over and over on big data, you can actually make the correlation and figure out that actually, yes, this product A does influence product B. And you can start to set that up that way. Those are things that you don’t even know about, but it all, it all comes down to how you set your architecture up—make sure that data is shared by all the different apps.

Christina Cardoza: And I imagine this data we talked about coming from cameras, we’re doing online data, we’re watching customers in-store, tracking their movements, their patterns, seeing what attracts them, depending on where products are placed. So I can imagine that you’re not just using one solution, or you can’t just do it alone with one company. Stephen, are you working with partners like Intel? How do you use the ecosystem that’s available out there to make this possible for businesses?

Stephen Borg: Yeah, and I guess we see there’s multiple types of data. So there’s some data that is immediately actionable. So for example, we work with a large hotel chain, and when their room keys are out on their vending or on their kiosks, or there’s something that needs to be filled, we actually push that data through an immediate alert. So they use something like a Salesforce communication app with their staff. We will notice things. We don’t necessarily store that data. We’ll just have an event that we executed that command. So sometimes we don’t store the data; it’s immediately actionable.

Or, as Chris mentioned earlier, we feel that front desk has hit a threshold and it needs to be cleaned, right? So there’s that type of data. And there’s also that historical data or multisource data that you were trying to get insights out of. In that case, yeah, we do. We work with Intel from an OpenVINO perspective and to make sure that our models are optimized, they can coexist with other applications. And also that we’re not—one thing that we found with OpenVINO in particular, it means we need less heavy infrastructure at the edge, which significantly reduces cost. So that’s a great aspect.

And we work with partners such as Microsoft, Google, Snowflake, to provide customers the data set in the way they wish to consume them. In addition, we have a very comprehensive—and this is one of the things that we initially struggled with—we were providing the data, and customers did not have either the resources or the understanding of how to mine that data effectively.

So we have a comprehensive suite of dashboards that you can use depending on your role in retail. So if you are operational, you can use the operational dashboards; or if you’re marketing or product, you’ll use those bot dashboards. In addition, you can feed your existing data lake or data warehouse. So what we’re finding is customers have a hybrid. They use our reports, which are customizable, and they’ll feed their main data source and start to do integrating into their reporting system.

So one of the aspects that we found, or one of the blockers that we found, is that we didn’t want customers to need to make a big change to their data warehouse or data lake just to experiment with the technology, which we found that being a blocker because they’re really resource poor. So you sign on, pulling your persona, and I’ll give you the data that’s relevant to your role.

Chris O’Malley: So, one thing I’d like to—you know, Stephen referenced the importance of real-time actionable insights. So, one of the things we’re seeing is that if you go to the cloud, you’re going to encounter some element of latency. And for some things that’s perfectly fine, latency doesn’t matter. And those things are perfectly fine to store in the cloud or put into the cloud. But a lot of activities that happen at a retail store, you may want to have absolute real time and you need to do it at the edge.

And the other thing that’s happening—we already referenced that compute is getting so cheap that they’re adding more and more compute. So there’s smart building technology, there’s IoT sensor data all around these tools, there’s computer vision technology. There’s lots of things like that. So your data is actually growing significantly faster than the cost of your connectivity to the cloud is reducing. So you really can’t—in theory you can run all of this video data, computer vision, you can run it in the cloud. The problem is your cost of connecting to the cloud to make those insights is going to explode. And it’s going to exceed the value. You have to do this type of stuff at the edge. And that’s where Intel with OpenVINO is very much optimized for efficiently using the edge capability to do the inference and get those real-time analytics that you need. That’s where we’ve been focused. And that’s where we see really important—

I think some of the data we’re seeing is that data is growing so large that about 95%-plus of it is actually going to be dispensed with and disposed of at the edge. And only a certain amount, let’s say 5% or less, is ultimately going to go to the cloud for permanent storage, for analytics and things like that. But it’s key metadata. The rest is going to be processed at the edge. So the edge is absolutely growing rapidly.

Stephen Borg: And that’s what we’re finding. We don’t send any video to the cloud, so we strip out everything we need at the edge. And we do that for two reasons. One, to reduce the cost. And, most importantly, we do it because we abstract all content at the edge for privacy reasons. So that way there’s no instance of any private data going into the cloud or going through our system. It’s all stripped out by that edge device and OpenVINO.

Chris O’Malley: Yeah. And with GDPR, that is really important. So any type of anonymous analytics, anything like that, has to be deleted at the edge. So you can just—you just gather the data that’s important, but no images, no nothing, is ever sent to the cloud. It’s not allowed to be sent to the cloud.

Christina Cardoza: Since you’ve both brought up cost being an aspect of this, Chris, I want to come back to something you mentioned earlier, which is you have to look at your infrastructure and change things and consider the legacy technology that you do have. But I know some businesses can be worried about introducing new technologies to their infrastructure, whether or not it’s going to be a smart investment in the long run. So how can they ensure that the technologies that they’re using, the infrastructure that they’re changing, is going to meet their needs today, but also be able to scale to meet their needs tomorrow?

Chris O’Malley: Got it. So, yeah, I mean, from an architecture standpoint, I mean the first thing that you—you have to be future proof. So whatever you do, it has to be future proof. And that’s why we think that you need kind of an open—what we call a microservices-based architecture. The siloed architecture, which worked well in the past, it fails as you continue to add new technologies and new technologies and integrate new data; it becomes so difficult. The cost of integration is just going to overwhelm you.

If you build your modern—and you could start it, by the way, from your online, and then you can add your mobile, and then you build it down. Most people go down until, like, either the restaurant or the retail level. The last thing to be integrated into that modern architecture is probably the point of sale. You know, that’s kind of one of these sacrosanct things, but when you build everything else, eventually the POS can be sort of an app right on that as well, too. And then, because you’ve got the data infrastructure set up, everything like that, as soon as I add a new technology, it’s much easier just to drop it in. It almost becomes like a new app placed on top of an existing infrastructure. And it’s very easy to launch those new apps, and you can really get to market a lot quicker than if you had to integrate in the old ways with the silo technology.

Christina Cardoza: And we’ve been talking a lot about the business benefits that these organizations are getting by introducing these technologies and creating these omnichannel customer experiences. But Stephen, I’m wondering, how are the customers dealing with all of these changes? What are the benefits that they’re getting from digital signage or video analytics?

Stephen Borg: What we’re finding is that if it’s done with privacy in mind, that customers respond to it quite well, in that either they’ve had a less—a frictionless experience; they’re getting through the checkout quicker, or the staff member has information that’s relevant to them at the time, or maybe tailored. So they’re getting content that’s tailored to either their persona type or based on their frequency of visits. All this can still be done anonymously, but it can create context or awareness. So we’re finding that if you’re providing a frictionless experience, that staff member is not just focused on the transaction. And we found this—we do some financial institutions as well, where we found those staff members could have more of a conversation rather than focusing on some of the transactional aspects—that increasing customer engagement is welcomed, but they still want this degree of knowing that it’s a clean and safe environment where it’s contact—physically contactless where possible, but there is still a rich need for some engagement. And that’s what we’ve found. One of the aspects we’ve found from the pandemic is that now some shopping has become even more social because some countries have just still have lockdown restrictions, and when they do get out they want to be engaged.

Christina Cardoza: So, unfortunately we’re nearing the end of our time today, but Chris, I wanted to give you a chance to add any final thoughts or key takeaways to our listeners today as they go on this omnichannel customer-experience journey, and continue to refine it in the years to come.

Chris O’Malley: Got it. You know, I think the critical thing that we’ve mentioned already is that customers want this frictionless experience. They want the personalized experience that they can get online—they want that in-store, but they also still like the socialization in-store. You know, that type of stuff is still very important, especially in today’s environment. And it can be done with this technology. It can absolutely be done.

You can have the great parts of, like, shopping that everybody still loves, but you can bring in that goodness of online through all of these tools. But from a retailer or a casino venue or hospitality venue, you also have this ability to replace human resources in some instances. The worldwide labor shortage that we’ve referenced is real. These venues are struggling to hire people. They’re desperate to hire people. Many restaurants can’t fill up all of their seats because they don’t have enough staff. The same thing is happening in hospitality and entertainment venues. If you can take some of those things that are, or perhaps were, done by humans, and automate that, replace that with compute, then you can hold back your valuable human resources to do the stuff that people really like, which is the interaction. It’s the talking, it’s the setting up your experience.

That’s what you really need to do. Focus your human resources or your human talent on that interaction that people really like to really drive experiences, and all of that stuff in the background, all the operational, all the inventory, all the insights and stuff, set that up with computer vision and automation that excels at it. That’s what it’s really good at.

Christina Cardoza: Yeah. That’s a great point. You know, as an end user of some of these things—grocery stores, checking into hotels, ordering food online to go—I’m already seeing such a huge benefit from the implementation of these technologies. And I can’t wait to see it advance in the years to come. So with that, I just want to thank you both for joining the podcast today.

Chris O’Malley: Alrighty. Thank you very much. Have a good day. We’ll see you, Stephen.

Christina Cardoza: And thanks to our listeners for joining us today. If you liked this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And, until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Omnichannel Shopping Twist: Bringing Online In-Store

Sometimes you just want to walk into a pharmacy, grab what you need and leave. That’s not always possible when you have to rummage through shelves or stand in long lines at the counter. But thanks to digital innovation, a trip to the pharmacy has become a more satisfying, novel experience.

That’s just the case at CarePlus Pharmacies in Ireland, where a new digital retail solution combines interactive screens and backroom robotics to enhance the shopping experience while helping stores grow sales, minimize shrinkage, and optimize staff.

The solution, developed by ScreenVend, a global software and technology company, offers an omnichannel twist. It brings the online shopping model to physical stores. Instead of searching for products on the shelves, shoppers find what they need on interactive touchscreens. A retail robotics system picks the items and dispenses them at the POS station.

“What you would normally do at home, you can now do in-store,” says Simon Healy, chairman of ScreenVend. “Customers can use technology in an immersive way and have the benefit of instant fulfillment.” The company calls this hybrid approach of combining digital with physical shopping “ClicksiNBricks.”

Digital Retail Delivers Personalized Service

Unlike many technology solutions, ScreenVend was the brainchild of non-tech people. Healy, a retail veteran, saw a specific need in pharmacies to offer customers the speed and convenience of online shopping while freeing up pharmacists and associates to provide personalized services. He studied the pharmacy ecosystem to figure out how to improve it, and noticed that pharmacies are highly transactional environments, but at times customers need individual assistance.

Retailers were using digital screens in-store, but mostly for signage. Then it occurred to him to replace shelves with interactive displays in conjunction with instant robotic delivery in-store. The displays don’t replace staff, but rather make pharmacists and associates more available to help customers. “What pharmacists all over the world are very good at is patient interaction and positioning themselves in the community as professional health advisors,” Healy says.

As doctors have gotten busier over the years, people rely more on pharmacists. “In our particular business, what we want to do is make greater use of consultation rooms to assure that pharmacist’s knowledge or the technician’s knowledge is used to its best.”

ScreenVend displays don’t replace staff, but rather make #pharmacists and associates more available to help #customers. ScreenVend via @insightdottech

Interactive Digital Displays Inform and Upsell

Now, when shoppers walk into a ScreenVend-equipped pharmacy, they come face to face with the digital displays. For the uninitiated, help is available from an associate. Otherwise, shoppers head to the screens to make their purchases with taps and swipes (Video 1).

https://vimeo.com/712065431/980481db17

Video 1. A ScreenVend-equipped pharmacy in action. (Source: ScreenVend)

They fill their virtual shopping carts and then tap a prompt to complete the transaction. A paper slip with a QR code comes out of a slot by the digital display. The QR code instructs the POS system to complete the order, which is pulled and fulfilled behind the scenes by a robot with capacity for 25,000 boxes.
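
ScreenVend hasn’t published its slip format, but the mechanics are easy to sketch. Below is a hypothetical Python example of encoding an order reference in a QR code using the open-source qrcode library; the payload format and order ID are invented for illustration:

```python
# Hypothetical sketch of generating a ScreenVend-style QR order slip.
# The payload format and order ID are invented; the open-source
# "qrcode" library (pip install qrcode[pil]) does the encoding.
import qrcode

def print_order_slip(order_id: str, item_count: int) -> None:
    # Encode a compact payload the POS scanner can use to retrieve the order
    payload = f"order:{order_id};items:{item_count}"
    img = qrcode.make(payload)
    img.save(f"slip_{order_id}.png")  # a real kiosk would send this to the slip printer

print_order_slip("A1042", 3)
```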

As shoppers pay for their purchases, the items are automatically dispensed through a conveyance at the POS. “We managed to create a completely new retail experience for the customer and for our pharmacists,” says Healy.

In the process of improving the shopping experience, the solution also helps pharmacies boost sales and reduce shrinkage. “Within the software itself, there are numerous upselling and cross-selling opportunities. The customer can see what products are complementary, and that is very important.”

The unique use of robot technology helps address shrinkage, a nagging problem in retail caused by theft and errors. In addition, centralized software management cuts down on errors when setting and updating prices.

ScreenVend can also add an element of theater. In store environments where the robots are visible behind glass, customers can enjoy watching their orders being fulfilled. The solution makes it possible to operate in a smaller footprint than a traditional store, but it can scale. More screens can be added as traffic increases in a store, Healy says.

Digital Retail Delivers New Opportunities

Although ScreenVend was developed for pharmacies, it is suited to other retail environments. For example, at shopping centers, the solution can enable popup stores. “With the flick of a button, the software can turn a gadget store in the morning into a bookstore in the afternoon,” Healy says. “ScreenVend would also work in small spaces within box stores in a store-within-a-store approach.”

The ScreenVend platform uses the Intel® NUC, a rugged PC with a small footprint, to power the digital displays, and Intel® processors for the dispensing robots, touchscreens, POS stations, and tablets.

The relationship with Intel is key. “We’re effectively a technology startup and I think we’ve been very well treated by Intel in a very personalized way,” says Healy.

That level of attention should prove valuable as ScreenVend expands into other areas of retail. As part of its pitch to retailers, ScreenVend will offer services around the technical aspects of implementing and running the retail robotics.

“We will also offer design consultation and customization of digital displays to ensure that each brand has a unique look and feel,” Healy says.

He hopes retailers will grasp the value of the solution, which lets customers keep that brand relationship through physical in-store experiences while using all the digital tools available online. That’s certainly a goal of ScreenVend, and it should be the goal of any retailer operating in both the online and physical space.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Self-Service Kiosks Transform Digital Banking

Are we headed for a cashless society? The question has fueled debate for years, but the majority of consumers and businesses don’t think it’s in the cards. M3 Technology Solutions (M3t), a turn-key kiosk and management system solutions provider, isn’t taking sides. Its self-service multifunction kiosks let consumers decide for themselves.

Think of M3t kiosks as ATMs on steroids. They provide basic ATM withdrawal and deposit functions, and add digital banking services such as bill breaking, loading cash into digital wallets, converting cashless payments to cash, and short-term credit lines.

“A rich set of functionality makes the technology appealing to users,” says M3t Chief Operating Officer Dylan Waddle. “It gives the consumer a significant amount of flexibility, if you will, in what they’re doing with their currency.”

The financial kiosks have been popping up in places like convenience stores, gas stations, and casinos. They are especially popular with retailers that run unattended convenience stores. “People can put cash in, either load a prepaid card, or get a QR code to go buy merchandise, come back and get their change,” says Waddle.

Banks, of course, are also interested. They’ve been placing kiosks in lobbies and other locations, and the machines may eventually replace their ATMs. “Banks are looking at self-service technology in a bigger way,” says Waddle. “Larger financial institutions like JP Morgan are starting to push more and more functionality to the self-service kiosk as quickly as they can.”

Whether you’re a bank, casino, or retailer, a self-service kiosk is a tidy way to offer a digital experience to consumers looking for convenience, flexibility, and speed. The technology also helps businesses optimize staff utilization and improve cash management.

Digital Banking Services Drive New Opportunities

M3t got into the kiosk business a decade ago because it saw an opportunity in the trend toward cashless transactions. “Our core focus from inception was on back-office management systems, specifically for businesses that manage a lot of physical currency across multiple sites,” Waddle says.

Rather than outsourcing manufacturing, the company decided to set up its own factories in the U.S. “We determined that in order to be successful we were going to have to become fully vertically integrated, so we started manufacturing kiosks.”

So now, when businesses buy M3t’s backroom management platform, they can get the kiosks and infrastructure that go with it. The platform allows businesses to automate their cash management. With its software, businesses can track the movement of cash at every step. A bank, for instance, can track cash as it flows in and out of the vault, cash dispensers, and tellers.

The M3t kiosk gives the consumer a significant amount of flexibility in what they’re doing with their #currency. M3 Technology Solutions via @insightdottech

Self-Service Kiosks Everywhere

Waddle believes the self-service banking kiosks will become ubiquitous in the near future. For example, M3t is looking at deploying systems at entertainment venues and universities. “We’re getting ready to put kiosks in Busch Stadium (St. Louis Cardinals), American Family Field (Milwaukee Brewers), and Notre Dame.”

Outdoor spaces are a major focus. Until now, kiosks have predominantly resided indoors, but pretty much anywhere people need to access cash, whether in legal tender or digital form, is a good place for kiosks. And their appeal to consumers is sure to increase as more functionality is added.

Already, consumers can have a range of interactions with the machines. For instance, a kiosk will break a $100 or $50 bill into smaller denominations. Through a service called UltraCash, users can set up a six-day line of credit tapping their bank funds. Another feature, called UltraCard, lets users move cash to prepaid open-loop cards such as Mastercard, which can be used anywhere Mastercard is accepted.
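
Bill breaking itself is a classic change-making problem. Here is a minimal Python sketch of how a kiosk might compute a payout, assuming a simple greedy strategy and US note denominations; a production machine would also have to respect the note inventory in its cassettes:

```python
# Illustrative greedy breakdown of a bill into smaller denominations.
# Denominations and strategy are assumptions; a real kiosk must also
# check its cassette inventory before promising a payout.
DENOMINATIONS = [20, 10, 5, 1]  # largest first

def break_bill(amount: int) -> dict[int, int]:
    payout = {}
    for note in DENOMINATIONS:
        count, amount = divmod(amount, note)
        if count:
            payout[note] = count
    return payout

print(break_bill(100))  # {20: 5}
print(break_bill(87))   # {20: 4, 5: 1, 1: 2}
```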

“That makes us pretty unique because you can have more access to cash from our terminals than at a standard ATM,” Waddle says.

For consumers who still prefer cash, the kiosks let them shop even at cashless stores. “They come in and put cash into our kiosk. They get a QR code or they get the funds loaded to their phone. They go get what they want, come back, and get their change at the kiosk—and their receipt if they want a physical receipt,” Waddle says.

Secure Systems with Digital Technologies and Rugged Designs

With all the functionality they deliver, M3t’s self-service kiosks store a lot more cash than a typical ATM. “Because of that, they’re typically made of 12-gauge steel—heavy-duty, hardened solutions that get bolted to the ground, bolted to the wall. It may not even come out if you tried to drag it out with a car,” says Waddle.

Another security layer involves sirens and alarms that go off at the terminal while sending alerts through the cloud to a management console. Administrators can also access the management software from a mobile app. And the technology is compliant with PCI (Payment Card Industry) standards and follows cloud infrastructure security protocols.

To further enhance security and convenience, Waddle says, M3t is working with Intel® to add biometric recognition. Eventually, accountholders won’t need a card or phone to use a kiosk.

The new functionality will expand M3t’s relationship with Intel, which already provides high-performance processors and other technologies for the kiosks. “The overall relationship with Intel has been extremely strong and we continue to evolve what we’re doing with them,” Waddle says.

Intel will remain a key partner as M3t executes its ambitious vision for self-service kiosks, providing cash and cashless options to consumers wherever they may be. “I feel like you’re not just going to see a kiosk in every location, you’re going to see one in every single room,” says Waddle. “Companies see them as core providers that are supplementing employees in a really big way.”

For more on banking kiosks, listen to Self-Service Tech Trends in Retail, Banking, and Hospitality.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

What It Means to Be an AI Developer in 2022 and Beyond

Picture fast-food restaurants being able to tailor the kinds of food they keep on hand depending on which cars flow through the drive-thru line. Picture advanced cameras being able to detect problematic porosity in finished car parts. Or radiologists who work with a virtual assistant to sift through X-rays and surface troublesome ones for a second look.

The Importance of AI Development Today

These operational improvements all run on artificial intelligence (AI), which is seeing an explosion in use cases in practically every industry. Where there is data, there are opportunities for efficiency—and even more opportunities for AI.

A confluence of technology shifts—the growth of computing power and the development of better communication infrastructures like the 5G network—is fueling this AI revolution.

But while AI transformation might be firmly underway, a shortage of AI developer talent might hinder its execution at scale.

“If AI is applicable to every industry, then that means we are going to need a whole lot more developers to acquire AI skills quickly,” says Bill Pearson, VP of the Network and Edge Group, and General Manager of Developer Enabling for the Internet of Things Group at Intel®.

A recent survey on the state of AI in the enterprise found that AI developers and engineers were among the top talent companies needed most. The pattern was uniform across all enterprises, whether they were seasoned, skilled, or just starting out with AI deployments. But unfortunately, much AI knowledge remains the purview of a small set of developers, according to Pearson.

This gap in AI talent might be because developers still have a few barriers to overcome before they can take up jobs in the field.

Top AI Developer Challenges and How to Overcome Them

1. Limited AI Knowledge

To create AI models, developers must first understand what AI is and what it can do. But all too often, existing documentation is geared toward experienced professionals, leaving out beginners. To infuse new AI developer talent into the pipeline quickly, the playing field needs to be leveled fast.

“Developers will be right at the center of #AI, pushing the envelope with #technology and creating some very interesting solutions that will probably blow all our minds.” – @billpearson, @intel via @insightdottech

As developers get started on an AI learning journey, they will need better documentation, hands-on training, and tools that are easy to use.

“In the past, AI has been the domain of experts. We really need to make materials more accessible, we need to democratize AI,” Pearson says. “We have to make it easy for developers to find the right materials at the right time, to make it easier for them to access the information they are looking for.”

For instance, Intel offers a variety of AI training videos and documentation for those interested in answers to specific questions about AI development. These materials are tailored to various levels of AI expertise so beginners can start exploring, while advanced developers can find answers for more detailed use cases of their applications. Intel also offers a list of prerequisites to help developers get started on their journey.

Developed by developers for developers, the Intel Edge AI certification course teaches core AI concepts and how to apply them to different use cases, at each learner’s own pace. It features free tools and code samples, open-source resources, and a library of pre-trained AI models. Developers can study the code in these models to see how it can apply to their own work.

2. Too Many Choices

But with all the AI tools and resources available, it can be easy for developers to get overwhelmed. It is not always clear which tool is going to be the right one for the job. And then there are concerns about whether the tool will be a solid long-term investment. Developers have to figure out which hardware, software, AI models, and algorithms will serve all their needs over time.

Part of democratizing AI and improving access to necessary tools is making them part of developers’ everyday workflows. “We’ve got to be developer-first in how we deliver these tools and do so with open and flexible platforms,” Pearson says.

Developers should pay particular attention to interoperability and open ecosystems that support the various tools they’ve already come to know and love.

For instance, the Intel® Distribution of OpenVINO Toolkit supports other popular AI frameworks such as TensorFlow, Caffe, PyTorch, and ONNX—so developers don’t feel locked into one choice.

In addition, the AI toolkit is easy enough for beginner developers to get started, but advanced enough to help developers scale their efforts and boost their AI skills. “It’s a toolkit that helps developers deliver faster and more accurate results using AI and computer vision inference,” says Pearson.
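
To give a sense of what that looks like in practice, here is a minimal sketch of running inference with the OpenVINO Python API (2022.1-style runtime). It assumes a model already converted to OpenVINO’s IR format; the model path and input shape are placeholders:

```python
# Minimal OpenVINO inference sketch (2022.1-era Python API).
# "model.xml" and the random input are placeholders, not a real workload.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # IR file from the Model Optimizer
compiled = core.compile_model(model, "CPU")  # target CPU, GPU, or AUTO

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
request = compiled.create_infer_request()
request.infer({0: input_tensor})             # key inputs by index or name
output = request.get_output_tensor(0).data   # raw result to post-process
print(output.shape)
```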

Since OpenVINO is open source, it has a strong developer community around it—enabling developers to get involved, take part in improving the platform, and leverage improvements that the community has made.

“That’s a great way for developers to decide not only what the best tools and frameworks are but participate in creating some of the best tools and frameworks in the industry,” says Pearson.

3. Building AI Models

Once developers have obtained the tools and resources necessary to get started on this journey, they face challenges related to developing and deploying AI models. For instance, do they have the right data to start building the model? Is it in a useful state or format? How are they going to apply it to their use case?

“Getting data to the right point where you can actually do something useful with it is one of the big challenges we have,” Pearson says. The origin of data sets, and how to use them, are additional concerns for data scientists who need to ensure AI models are scrubbed clean of bias.

In keeping with its promise of a developer-first approach, under the hood of OpenVINO is the Model Zoo, a set of pre-trained models that developers can use. The set includes examples developed using industry-standard frameworks like TensorFlow, PyTorch, MXNet, and PaddlePaddle. Basing AI code on frameworks developers might already be using delivers on the developer-first approach, so they don’t need to leave their workflow to benefit, Pearson explains.

Data security is another concern that needs to be addressed for AI models that work in the cloud and at the edge. Developers and data scientists need to verify data sources and ensure ethical AI model development. “It’s not just about the application but the people and processes used to come up with the data, the algorithms. All that is part of building an ethical and equitable AI solution,” Pearson says. The OpenVINO toolkit offers an additional layer of data security with a data protection add-on.

“When you use the security add-on, it just gives you a way to have secure packaging for those models and then to execute them in a secure way. We’re able to have users who have appropriate access to the models, they’re running it within some assigned limits, and they can even run these in KVM-based virtual machines,” Pearson explains.

4. The Edge-Cloud Dilemma

Then there is the question of where developers will store, process, and analyze their data. According to Pearson, AI is a different ballgame from traditional software development, which is naturally leading to changes in how developers work.

Traditionally, most IoT devices have been proprietary, embedded fixed-function devices. But recently with the increased adoption and maturity of the cloud and cloud-native technologies, containers and orchestration have become more pervasive. As a result, developers are moving toward a software-defined, high compute development environment leveraging cloud-native technologies for IoT and AI development.

AI in the cloud is being used for heavy-duty crunching of cloud-based machine learning models, while the edge provides new opportunities to analyze AI models at the data source.

“If you’re an embedded developer building solutions in the past and now, all of a sudden, you’re trying to figure out how to capture and make sense of data using AI at the edge, that’s an entirely new paradigm,” Pearson points out. Cloud-native development is changing the landscape for developers, who have to understand the AI use cases for both cloud and edge and build models accordingly.

It’s all about understanding your business goals and objectives, according to Pearson. “Depending on the KPIs that developers have, and what they’re trying to achieve, we can help determine where the best place is to run their AI workloads,” he says.

Cloud computing offers advantages in terms of cost and scale. If what the business is trying to achieve doesn’t require secure data on-site or low latency, then cloud might be the right way to go. If there are concerns with bandwidth, security, and scale, then developers might want to consider the edge.

“I get to choose as a developer which location makes the most sense to do which task. And again, I get to scale the compute resources that I need from virtually infinite in the cloud, to perhaps much more limited, whether it’s power or performance at the edge, and I can still get the AI that I need to achieve the business goals that I’m trying to reach,” Pearson explains.

With scalable cloud-native development, workloads can easily extend to where intelligence is needed from edge to cloud.

5. The IT/OT Integration

The very nature of AI’s utility—an integration between IT and OT—presents another challenge. Developers need to figure out how operational insights at the edge can be integrated into business operations to deliver efficiencies.

Developers also have to work backward from the KPI to be fine-tuned and then figure out the right combination of hardware and software that will do the job. Depending on the KPI, teams might need different performance and power choices. “Developers have to ask, ‘What’s the right hardware to run my application that’s going to give me the results I need,’” Pearson says.

Assuming AI developers can access the know-how and get software development going, they still need to test-drive software on a variety of different hardware units. This process is not easily achievable, nor is it the most time-efficient way to get the job done. The ongoing global chip shortage compounds the problem, making hardware that uses these chips difficult to source and buy.

Intel’s DevCloud solves one of the biggest challenges for AI developers. It eliminates decision paralysis by enabling developers to test their AI solutions on hundreds of edge hardware devices.

“Developers can very quickly understand how their application is going to perform using each of these pieces of hardware and they can find out what’s going to be right for their particular solution,” Pearson says.

The latest version of the toolkit, OpenVINO™ 2022.1, also helps in this space with its new hardware auto-discovery and automatic optimization, designed to make hardware combination testing a breeze.

Usually, AI development is complex because the software has to be customized for each and every end use case. In addition, the edge hardware to be used increases the number of permutations and combinations that need to be tested. The OpenVINO toolkit eliminates those complexities, Pearson says. “There’s no ‘I have to run this differently because there’s an FPGA (field programmable gate array) involved’ or ‘To take advantage of one particular hardware feature, I may have to use some different code.’”

A developer-first approach shows up in the cross-architecture, write-once-use-anywhere toolkit. “You can easily optimize, tune, and run your inference application with our model optimizer,” Pearson says. Even better, a developer who doesn’t understand the difference between a GPU and a CPU can make this work.
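
In code, that write-once-use-anywhere idea largely comes down to the device string passed at compile time. Here is a hedged sketch using the AUTO device and a performance hint, as named in the 2022.1 release; the model path is a placeholder:

```python
# Sketch of OpenVINO's automatic device selection (a 2022.1 feature).
# AUTO probes the available hardware and picks a target, so the same
# code runs unchanged on a CPU-only laptop or a GPU-equipped edge box.
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'] on a given machine

model = core.read_model("model.xml")  # placeholder IR model
compiled = core.compile_model(
    model,
    "AUTO",                              # let OpenVINO choose the device
    {"PERFORMANCE_HINT": "THROUGHPUT"},  # tune batching/streams automatically
)
```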

6. Scaling AI Efforts

Once developers get started, what comes next? The path forward is not always clear.

Intel offers advanced developers reference implementations from the Intel® Developer Catalog, a set of market-proven vertical implementations of software. A developer who is looking to implement a defect detection AI system or intelligent traffic management, for example, can use examples from the catalog. “You can see all the code, we’ll walk you through the implementation, and you can very quickly understand what’s going on there,” Pearson says.

AI development is not just about the software and the hardware; it is also about the environment in which it is deployed. An additional tool, Intel® Smart Edge Open, helps developers make AI applications part of an infrastructure that can be deployed in real-world environments. “It’s important for developers to test the AI application they are building, in the context of a brownfield or other environment,” Pearson says.

Just a few years ago, the thought of developers being able to access and make sense of data at the edge seemed like a pipe dream. But all that is changing. “The role of the [AI] developer is becoming more important than ever,” Pearson says. “We’ve got to make sure that they’re equipped to deal with this new environment through tools, products, and the information that’s going to help them build their solutions at scale.”

Preparing for the Future of AI Development

This is just the beginning of an era. As compute power and AI adoption increases, the use cases are going to expand to places that we never even imagined would be possible, Pearson explains, adding, “Developers will be right at the center of that, pushing the envelope with technology and creating some very interesting solutions that will probably blow all our minds.”

For developers, understanding the AI problems they’re trying to solve and equipping themselves with the skills to solve them is key to success.

“The future—today and tomorrow—is about open architectures and open ecosystems with much more flexibility, interoperability, and scalability that developers are going to need,” Pearson says. “AI is an opportunity for developers to be able to go and embrace a new world and do some exciting new things.”

AI is going to be the way of the future. And developers of all stripes can be a part of this exciting revolution by upskilling and using tools that streamline their workflows and unleash creativity.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Mobile Service Robots Built on Performance and Efficiency

In the classic TV series The Jetsons, the title characters employ a robotic housekeeper called “Rosey” who also serves as an impromptu home security system and companion. While The Jetsons takes place in a future that’s still four decades from now, forward-thinking technologies like Rosey are already becoming a reality.

A great example is RoomieBot, a mobile service robot that uses AI at the edge, machine vision, and natural language processing (NLP) to autonomously navigate and interact with humans in healthcare, retail, and hospitality settings. And it’s not a stretch to imagine these robotic AIs puttering around as home assistants in the next few years.

The future of ubiquitous mobile service robots requires modular hardware building blocks that blend performance, efficiency, and advanced software support.

The Anatomy of Modern Mobile Service Robots

Determining the best way to deliver these features begins with understanding the current state-of-the-art technology—both its advantages and limitations.

RoomieBot is designed around Intel® RealSense cameras, Intel® Movidius VPUs, and Intel® NUC platforms. This hardware suite provides a great foundation for early-stage mobile service robots with the vision and compute functionality required for:

  • Simultaneous Localization and Mapping (SLAM) to navigate autonomously
  • Visual detection algorithms to recognize people and objects
  • NLP for the voice user interface
  • Functions that control embedded motors and actuators
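
As a rough illustration of the vision piece, here is a minimal sketch of pulling depth data from an Intel RealSense camera with the open-source pyrealsense2 wrapper. The stream settings are typical defaults, not RoomieBot’s actual configuration; a SLAM or obstacle-avoidance stack would consume these frames downstream:

```python
# Minimal Intel RealSense depth-capture sketch using pyrealsense2.
# Stream settings are common defaults, not a specific robot's config.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters to whatever sits at the center of the frame
    print("Center distance:", depth.get_distance(320, 240))
finally:
    pipeline.stop()
```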

But as organizations look to scale the production of these systems for mass market deployment, there are opportunities to upgrade the stack for improved performance per watt and streamlined integration.

Most notably, these can be achieved by adopting 12th generation Intel® Core processors, previously code-named “Alder Lake.”

The future of ubiquitous mobile service #robots requires modular #hardware building blocks that blend performance, efficiency, and advanced software support. @Advantech_USA via @insightdottech

High-Performance Processors Don’t Have to Break the (Power) Bank

These latest Intel Core processors deliver a significant performance improvement over the 8th generation processors found in earlier Intel NUC platforms.

The performance gains are the result of eight additional cores (12 total) on the new processors. But these aren’t just any cores. 12th gen Intel Core processors are the first to introduce a hybrid core architecture consisting of traditional Performance cores and a new class of Efficient cores. The Efficient cores are optimized for less computationally intensive workloads like system management and control tasks.

All that added horsepower comes at a minimal power tradeoff, as the Intel® Core i7-12700TE processor features a base TDP of just 35W compared to the 28W TDP of the 8th generation mobile processor examined previously. For mobile service robots, this facilitates the execution of sophisticated edge AI stacks without instantly draining onboard batteries.

Smarter Integration Out of the Box

The ability to seamlessly integrate 12th gen Intel Core processors into a variety of different mobile service robot architectures is another crucial consideration for mass production and deployment.

For example, the MIO-4370 from Advantech, a leader in embedded and automation solutions, supports 35W 12th gen Intel Core Desktop processors with up to 16 hybrid cores and 24 execution threads. Designed to the 4” EPIC form factor of 165 mm x 115 mm (6.5” x 4.53”), the small single-board computer provides OEMs and system integrators a rugged edge intelligence module with all the I/O needed by modern mobile service robots, such as:

  • A variety of high-bandwidth I/O and serial ports that facilitate the integration of vision inputs, perception sensor suites, control signaling, programming, and debug
  • Support for 3x simultaneous interactive displays at up to 5K resolution
  • Networking and expansion that includes two 2.5 GbE interfaces with Time-Sensitive Networking (TSN) and Intel vPro® support
  • 3 M.2 expansion sockets, including 2 M.2 2280 slots (PCIe 4.0 and PCIe 5.0) that support high-speed NVMe storage along with video transcoding, capture, or xPU acceleration cards
  • Additional components like a smart fan, discrete TPM 2.0 for security, and audio subsystem for voice communication

Because IoT edge use cases like mobile service robots consist of so many disparate applications and functions, the Advantech SBC has been pre-certified to work with Canonical’s distribution of Ubuntu Linux, which enables containerized application development. Each container comes with its own system image, so mobile service robot programs can be coded free of dependencies or worries about other system requirements. This reduces development time and complexity, and potentially speeds compliance efforts, since changes to each container can often be certified individually once the whole system is approved.

Integration is further simplified by tools like the Advantech iManager 3.0, which offers APIs for controlling I/O from the user OS. Advantech’s Edge AI Suite and WISE-DeviceOn go even further, providing a user-friendly SDK based on the Intel® OpenVINO™ Toolkit that lets engineers optimize and deploy deep-learning models to targets like 12th gen Intel Core processors.

Mobile Service Robots: Out of the Factory and into the Family

In all, platforms like the MIO-4370 are more than just intelligent robotic controllers. They are building blocks for advanced mobile service robots that are higher performance, lower power, faster to develop, and more cost-effective than ever before.

Put simply, these integrated solutions are a precursor to scaling advanced mobile service robots for mass production. And subsequently, a future where having your own Rosey isn’t just reserved for a select few.

Thanks to highly integrated development environments, that future is closer than you might think.

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

IoMT Automates Vital Signs Technology

No one likes to schedule a medical appointment only to find an endless wait at a crowded doctor’s office or clinic. But with a critical lack of healthcare workers, those waits aren’t getting any shorter. The good news is IoMT (Internet of Medical Things) technology helps take the pressure off overburdened staff. AI-enabled self-service kiosks can deliver a better patient experience—both in and out of the clinical setting.

A shortage of medical professionals may be new in some parts of the world, but it’s a familiar problem in other markets.

“Asian hospitals have faced staffing issues for a long time,” says Jason Miao, Overseas Business Sales Director for Imedtac Co., LTD, a provider of IoMT technology solutions. “Here in Taiwan, it’s not uncommon for a doctor in a public hospital to see 100 patients in a three-hour shift.”

When you practice medicine at that kind of scale, one important truth becomes apparent: Anything that optimizes workflows in hospitals and clinics is a win.

As Imedtac’s Business Development Manager Beren Hsieh puts it: “It might not seem like a dramatic change, but if you can use technology to improve a process by a few minutes per patient, it has a huge impact on wait times and provider availability.”

“If you can use #technology to improve a process by a few minutes per #patient, it has a huge impact on wait times and provider availability.” – Beren Hsieh, Imedtac Co., LTD via @insightdottech

AI in Healthcare Makes It Easier to Measure Vital Signs

Case in point: Imedtac’s Smart Vital Signs Station, an IoMT alternative to the traditional vital signs measurement workflow.

The vital signs monitoring system is a self-service kiosk that measures a patient’s height, weight, temperature, heart rate, and blood pressure. If desired, it can be configured to record additional vital signs, such as blood oxygen levels.

The traditional method of measuring a patient’s vitals requires a trained individual to take readings with various devices and manually record the results. The Imedtac kiosk, on the other hand, is a one-stop, self-service, automated solution that can save valuable time and resources.

Patients begin by identifying themselves to the system, which is connected to the hospital’s health information system. The station then measures the relevant health data—height, weight, temperature, and so on—providing guidance as needed through a simple-to-use interface. It automatically uploads the results to the cloud so the data can be securely integrated with the patient’s electronic medical record and personal health record.
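
Imedtac hasn’t published its integration interface, but the pattern is a familiar one in healthcare IT. A purely hypothetical Python sketch of what uploading a single reading might look like, using the HL7 FHIR Observation format (the endpoint URL and patient ID are invented):

```python
# Hypothetical sketch of uploading a vital-sign reading as an HL7 FHIR
# Observation. The endpoint URL and patient ID are invented; Imedtac's
# actual integration details are not public.
import requests

def upload_heart_rate(patient_id: str, bpm: int) -> None:
    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {  # LOINC 8867-4 = heart rate
            "coding": [{"system": "http://loinc.org", "code": "8867-4"}]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": bpm, "unit": "beats/minute"},
    }
    resp = requests.post(
        "https://ehr.example.org/fhir/Observation",  # placeholder endpoint
        json=observation,
        timeout=10,
    )
    resp.raise_for_status()

upload_heart_rate("12345", 72)
```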

The entire process takes just a few minutes. Crucially, healthcare providers don’t need to be involved at all—freeing them up to perform other duties, and preventing errors caused by the manual transcription of vitals data (Video 1).

Video 1. IoMT powers vital signs technology workflow. (Source: Imedtac)

IoMT Devices Support Flexibility and Stability in Rural Thailand

Solutions like Imedtac’s need to operate in a wide variety of settings, from hospitals, clinics, and healthcare organizations to neighborhood pharmacies, gyms, and even grocery stores. Understandably, there isn’t always a lot of support or oversight available. For this reason, they are designed for flexibility, stability, and ease of use. The company’s experience in northern Thailand is a good example.

Imedtac partnered with Overbrook Hospital in Chiang Rai, a small city in a rural part of the country that serves as a medical hub for the surrounding communities. It was a challenging deployment. Overbrook is a busy hospital, where doctors and nurses are stretched thin, and IT resources are not as readily available as in larger urban centers. The hospital’s patient base presented an additional issue because it included many elderly patients as well as people unaccustomed to using technology in their daily lives.

Imedtac worked with Overbrook’s administrators to develop an optimized patient intake workflow—one tailored to the hospital’s needs. To cope with limited English proficiency in the region, they added a Thai-language user interface. And to simplify the authentication process, they included support for Thai national ID cards. Imedtac’s developers then integrated the kiosks with the hospital’s legacy IT systems.

The results were better than expected. As it turned out, the majority of patients caught on to the vital signs stations very quickly, and were able to use them without difficulty. The hospital’s nurses, who no longer had to measure each patient individually, were free to assist those who required extra help.

The deployment in Overbrook has also proven to be very reliable—important in any hospital setting, of course, but especially in one where IT resources are limited. Here Miao credits Intel® processors: “These vital signs kiosks run 365 days a year, pretty much nonstop, so they have to be built on something dependable. Intel provides an extremely stable and powerful platform for IoMT applications.”

The Future of Patient Care

IoMT technology is already providing some much-needed relief to healthcare professionals. Going forward, it may be able to directly improve health outcomes for patients as well. “Near term, we’re already starting to see solutions like smart wards, which use edge AI and real-time analytics to optimize inpatient workflows and improve medication safety,” says Hsieh.

Further ahead, healthcare administrators and systems integrators will turn to edge analytics and AI to enhance remote patient monitoring, critical care, and surgical medicine.

“In the future, this technology will be used to integrate data streams for ICU and OR staffs, giving them the information they need when they need it,” says Miao. “And doctors will rely on AI to help them make better decisions about patient care.”

Many challenges remain for the healthcare industry, both now and in the coming years. But thanks to advances in IoMT technology, the prognosis is improving.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

This article was originally published on May 19, 2022.

Deploy AI Apps with Intel® OpenVINO™ and Red Hat

Listen on:

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

What can artificial intelligence do for your business? Well, for starters, it can transform it into a smart, efficient, and constantly improving machine. The real question is: How? There are multiple ways organizations can improve their operations and bottom line by deploying AI apps. But doing so is not always straightforward, and it requires skills and knowledge they often do not have.

Thankfully, companies like Red Hat and Intel® have worked hard to simplify AI development and make it more accessible to enterprises and developers.

In this podcast, we discuss the growing importance of AI, what the journey of an AI application looks like—from development to deployment and beyond—and the technology partners and tools that make it all possible.

Our Guests: Red Hat and Intel®

Our guests this episode are Audrey Reznik, Senior Principal Software Engineer for the enterprise open-source software solution provider Red Hat, and Ryan Loney, Product Manager for OpenVINO™ Developer Tools at Intel.

Audrey is an experienced data scientist who has been in the software industry for almost 30 years. At Red Hat, she works on the OpenShift platform and focuses on helping companies deploy data science solutions in a hybrid cloud world.

Ryan has been at Intel for more than five years, where he works on open-source software and tools for deploying deep-learning inference.

Podcast Topics

Audrey and Ryan answer our questions about:

  • (2:24) The business benefits of AI and ML
  • (5:01) AI and ML use cases and adoption
  • (8:52) Challenges in deploying AI applications
  • (13:05) The recent release of OpenVINO 2022.1
  • (22:35) The AI app journey from development to deployment
  • (36:38) How to get started on your AI journey
  • (40:21) How OpenVINO can boost your AI efforts

Related Content

To learn more about AI and the latest OpenVINO release, read AI Developers Innovate with Intel® OpenVINO™ 2022.1. Keep up with the latest innovations from Intel and Red Hat by following them on Twitter at @Inteliot and @RedHat, and on LinkedIn at Intel-Internet-of-Things and Red-Hat.

 

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about deploying AI apps with experts Audrey Reznik from Red Hat and Ryan Loney from Intel®.

Welcome to the show, guys.

Audrey Reznik: Thank you. It’s great to be here.

Ryan Loney: Thanks.

Christina Cardoza: So, before we get started, why don’t you both tell us a little bit about yourself and your background at your company. So Audrey I’ll start with you.

Audrey Reznik: Oh, for sure. So I am a Senior Principal Software Engineer. I act in that capacity as a data scientist. I’ve been with Red Hat for close to a year and a half. Before that—I’m going to be dating myself here—I spent close to 30 years in the software industry. So I’ve done front-end to back-end development. And just the last six years I’ve concentrated on data science. And one of the things that I work on with my team at Red Hat is the Red Hat OpenShift data science platform.

Christina Cardoza: Great. And Ryan?

Ryan Loney: Yep. Hi, I’m Ryan Loney. So I’m a Product Manager at Intel® for the OpenVINO™ toolkit. And I’ve been in this role since back in 2019 and been working in the space for—not as long as Audrey—less than a decade. But the OpenVINO toolkit: we work on open-source software and tools for deploying deep learning inference. So things like image classification, object detection, natural language processing—we optimize those workloads to run efficiently on Intel® hardware, whether it’s at the edge or in the cloud, or at the edge and controlled by the cloud. And that’s what we do with OpenVINO.

Christina Cardoza: Great. Thanks, Ryan. And I should mention the IoT Chat and insight.tech program as a whole are published by Intel®, so it’s great to have someone with your background and knowledge joining us today. Here at insight.tech, we have seen AI adoption just rapidly increasing and disrupting almost every industry—if not every industry.

So Ryan, I would love to hear from your perspective why AI and machine learning is becoming such a vital tool, and what are the benefits businesses are looking to get out of it?

Ryan Loney: Yeah, so I think automation in general—everything today has some intelligence embedded into it. I mean, the customers that we’re working with, they’re also taking general purpose compute, you know like an Intel® Core™ processor, and then embedding it into an X-ray machine or an ATM machine, or using it for anomaly detection on a factory floor.

And AI is being sort of integrated into every industry, whether it’s industrial, healthcare, agriculture, retail—they’re all starting to leverage the software and the algorithms for improving efficiency, improving diagnosis in healthcare. And that’s something that is just—we’re at the very beginning of this era of using automation and intelligence in applications.

And so we’re seeing a lot of companies and partners of Intel® who are starting to leverage this to really assist humans in doing their jobs, right? So if we have a technician who’s analyzing an X-ray scan or an ultrasound, that’s something where with AI we can help improve the accuracy and early detection for things like pneumothorax.

And with factories, we have partners who are building batteries and PCBs, and they’re able to use cameras to just detect if there’s something wrong, flag it, and have somebody review it. And that’s starting to happen everywhere. And with speech and NLP, this is a new area for OpenVINO, where we’ve started to optimize these workloads for speech synthesis, natural language processing.

So if you think about, you know, going to an ATM machine and having it read your bank balance back to you out loud, that’s something that today is starting to leverage AI. And so it’s really being embedded in everything that we do.

Christina Cardoza: Now, you mentioned a couple of use cases across some of the industries that we’ve been seeing, but Audrey, since you have been in this space for—as you mentioned, a couple of decades now—I would love to hear from your perspective how you’re seeing AI and ML being deployed across these various use cases. And what the recent uptake in adoption has been.

Audrey Reznik: Okay, so that’s really an excellent question. First of all, when we’re looking at how AI and ML can be deployed across the industry, we kind of have to look at two scenarios.

Sometimes there’s a lot of data gravity involved where data cannot be moved off prem into the cloud. So we still see a lot of AI/ML deployed on premises. And, really, on premises there are a number of platforms that folks can use. They can create their own, but typically people are looking to a platform that will have MLOps capability.

So that means they’re looking for something that’s going to help them with the data engineering, the model development, training and testing, the deployment, and then the monitoring of the model and the intelligent application that communicates with the model. Now that’s on prem.

People have also taken advantage of the public cloud infrastructure that we have. Some organizations—such as defense systems or even government—prefer to keep their data on prem. But if they don’t have data-gravity issues or security issues, they tend to move a lot of their MLOps creation and delivery/deployment to the cloud. So, again, they’re going to be looking for a cloud service platform that has MLOps available, so that they can look at their data, curate their data, then create models, train and test them, deploy them, and—once things are deployed—monitor those models, check for drift, and retrain them if there are any issues.

In both instances, what people are really looking for is something easy to use. You don’t want to put together a number of applications and services piecemeal. I mean, it can be done, but at the end of the day we’re looking for ease of use. We really want a platform that’s easy to use for data scientists, data engineers, and application developers, so that they can collaborate. And the collaboration then kind of drives some of the innovation and their ability, again, to deploy an intelligent application quickly.

And then, I should mention for everybody in IT, whether you’re on prem or in the cloud, IT has to be happy with your decision. So they have to be assured that the place that you’re working in is secure, that there’s some sort of AI governance driving your entire process. So those are on prem in the cloud, kind of the way that we’re seeing people go ahead and deploy AI/ML, and increasingly we’re seeing people use both.

So we’re having what we call a hybrid cloud situation, or hybrid platforms.

Christina Cardoza: I love all the capabilities you mentioned that people are looking for in the tools that they pick up, because AI can be such an intimidating field to get into. And, you know, it’s not as simple as just deploying an AI application or solution. There’s a lot of complexity that goes into it. And if you don’t choose the right tool or if you’re piecemealing it, like you mentioned, it can make things a lot more difficult than they need to be. So with that, Ryan, what are some of the challenges, the biggest challenges that businesses face when they’re looking to go on this AI journey and deploy AI applications in their industry and in their business?

Ryan Loney: I think Audrey brought up one of the biggest challenges, and that’s access to data. So, I mean, it’s really important. I think we should talk about it more, because when you’re thinking about training or creating a model for an intelligent application, you need a lot of data. And when you factor in HIPAA compliance and privacy laws and all of these other regulatory limitations, and of course, ethical choices that companies are making—they want to protect their customers’ privacy and they want to protect their customers. So having a secure enclave where you can get the data, train the data, you can’t necessarily send it to a public cloud, or if you do, you need to do it in a way that’s secure. And that’s something that Red Hat is offering. And that’s one of the things I’m really impressed with from Red Hat and from OpenShift, is this approach to hybrid cloud where you can have on prem, managed OpenShift. You can have—run it in a public cloud and really give the customer the ability to keep their data where they’re legally allowed to, or where they want to keep it for security and privacy concerns. And so that’s really important.

And when it comes to building these applications, training these models for deep learning, for AI—everything is really, at its foundation, built on top of open source tools. So we have deep learning frameworks like TensorFlow and PyTorch. We have toolkits that are provided by hardware vendors like Intel®. We have OpenVINO, the OpenVINO toolkit. And there’s this need to use those tools in an environment that is safe for the enterprise, with access rights and management. But at the core they’re open-source tools, and that’s what’s really impressive about what Red Hat is doing. They’re not trying to recreate something that already exists and works really well. They’re taking and adopting these open source tools, the open source Open Data Hub, and building on top of that and offering it to enterprises.

So they’re not reinventing the wheel. And I think that’s one of the challenges for many businesses that are trying to scale: they need to have this infrastructure, and they need a way to have auto-scaling, load-balancing infrastructure that can increase exponentially on demand when it needs to. And building out a Kubernetes environment yourself and setting it all up and maintaining that infrastructure—that’s overhead and requires DevOps engineers and IT teams. And that’s really where I think Red Hat is coming in, in a really important space, to offer this managed service so that you can focus on getting the developers and the data scientists access to the tools that they would use on their own outside of the enterprise environment, and making it just as easy to use in the enterprise environment. And giving them the tools that they want, right? So they want to use the tools that are open source, that are the deep learning frameworks, and not reinvent the wheel. So I think that’s really a place where Red Hat is adding value.

And I think there’s going to be a lot of growth in this space, because our customers that are deploying at scale, including devices at the edge, are using container orchestration, right? These orchestration platforms—you need them to manage your resources. And having a control plane in the cloud and then having nodes at the edge that you’re managing—that’s the direction that a lot of our customers are moving. And I think that’s the future.

Christina Cardoza: Great. And while we’re on the topic of tools, you’ve mentioned OpenVINO a couple of times, which is Intel®’s AI toolkit. And I know you guys recently just had one of the biggest launches since OpenVINO was first started. So can you talk a little bit about some of the changes or thought process that went into the OpenVINO 2022.1 release? And what new capabilities you guys added to really help businesses and developers take advantage of all the AI capabilities and opportunities out there.

Ryan Loney: Yeah. So this was definitely the most substantial change of feature enhancements, improvements that we’ve made in OpenVINO since we started in 2018.

It’s really driven by customer needs. And so some of the key things for OpenVINO are that we have hardware plugins—we call them device plugins—for our CPU or GPU and other accelerators that Intel® provides. And at Intel® we’ve recently launched our discrete graphics. We’ve had integrated graphics for a long time—GPUs that you can use for doing deep learning inference, that you can run AI workloads on. And so some of the features that are really important to our customers who are starting to explore these new graphics cards—we’ve launched some of the client discrete graphics in laptops, and later this year we’re going to be releasing the data center and edge server SKUs for discrete graphics—the customers need to do things like automatic batching. So when you have a GPU card, deciding the correct batch size for the input for a specific model—it’s going to be a different number depending on the model and depending on the compute resources available.

So some of our GPUs have different numbers of execution units and different power ratings. So there’s different factors that would make each GPU card slightly different. And so instead of asking the developer to go out and try batch size 32 and batch size 16 and batch size 8 and try to find out what works best for their model, we’re automating some of that so that they don’t have to, and they can just automatically let OpenVINO determine the batch size for them.

And on a similar note, since we’ve started to expand to natural language processing—if you think about question answering, so if you had asked a chat bot a question like, what is my bank balance? And then you ask it a second question like, how do I open an account? Both of those questions have different sizes, right? The number of letters and number of words in the sentence—it’s a different input size. And so we have a new feature called dynamic shapes, and that’s something we introduced on our CPU plugin. So if you have a model like a BERT natural language processing model, and you have different questions coming into that model of different sizes, of different sequences, OpenVINO can handle that under the hood, automatically adjusting the input. And so that’s something that’s really useful, because without that feature you have to add padding to every question to make it a fixed sequence length, and that adds overhead and it wastes resources. So that’s one feature that we’ve added to OpenVINO.

And just one additional thing I’ll mention is OpenVINO is implemented in C++ at its core. So our runtime, we have it written in C++. We have Python bindings for Python API. We have a model server for serving the models in environments like OpenShift where you want to expose a network endpoint, but that core C++ API, we’ve worked really hard to simplify it in this release so that if you take a look at our Python code, it’s really easy to read Python. And that’s why a lot of developers, data scientists, the AI community really like Python because the human readability is much better than C++ for many cases. So we’ve tried to simplify the C++ API, make it as similar as possible to Python so that developers who are moving from Python to C++—it’s very similar. It’s very easy to get that additional boost by using C++.

So those are some of the things that we changed in the 2022.1. There are several more, like adding new models, support for new operations, really expanding the number of models that we can run on Intel® GPU. And so it’s a really big release for us.

Christina Cardoza: Yeah. It sounds like a lot of work went into making AI more accessible and easier entry for developers and these businesses to start utilizing everything that it offers. And Audrey, I know when deploying intelligent applications with OpenShift, you guys also offer support with OpenVINO. So I would love to hear what your experience has been using OpenVINO and how you’re gaining more benefits from the new release. What were some of the challenges you faced before OpenVINO 2022.1 came out, and what are you guys experiencing now on the platform?

Audrey Reznik: Right. So, again, very good question. And I’m just going to lead off from where Ryan left on expanding on the virtues of OpenVINO.

First of all, you have to realize that before OpenVINO came along, a lot of the processing would have been done on hardware. So clients would have used a GPU, which can be expensive. And a lot of the time when somebody is using a GPU, not all of the resources are used. And that’s just kind of—I don’t want to say a waste, but it is a waste of resources, because you could probably use those resources for something else, or even have different people using that same GPU.

With the advent of OpenVINO that kind of changed the paradigm in terms of, how I can go and optimize my model or how I can do quantization.

So let’s go ahead with optimization first. Why use a GPU if you can go ahead and, say, process some video and look at that video and say, you know what? I don’t need all the different frames within this video to get an idea of what my model may be looking at. Maybe my model is looking at a pipe in the field and, from that edge device, we’re just checking to make sure that nothing is wrong with that pipe. It’s not broken. It’s not cracked. It’s in good shape. You don’t need to use all of those frames that you’re taking within an hour. So why not just reduce some of those frames without impacting the ability of your model to perform? That optimization feature was huge.

Besides that, with OpenVINO, as Ryan alluded to, you can just go ahead and add just a couple little snippets of code to get this benefit. That’s not having to go through the trouble of setting up a GPU. So that’s like a very quick and easy way to optimize something so that you can take the benefit of OpenVINO and not use the hardware.

The other thing is quantization. Within machine learning models, you may use a lot of numerics in your calculations. So I’m going to take the most famous number that most people know about, which is pi. It’s not really 3.14; it’s 3.14 and six or seven digits beyond that. Well, what if you don’t need that precision all the way? What if you can just be happy with just the one value that most people equate with pi, which is 3.14. In that respect, you’re also gaining a lot of benefit for your model in terms of you’re still getting the same results, but you don’t have to worry about cranking out so many digit points as you go along. And, again, for customers this is huge because, again, we’re just adding just a couple lines of code in order to use the optimization and quantization with OpenVINO. That’s so much easier than having to hook up to a GPU. I’m not saying—nothing bad about GPUs, but for some customers it’s easier. And, again, for some customers it’s also cheaper. And some people really do need to save some of that money in order to be more efficient with the funds that they could divert elsewhere in their business. So, if they don’t have to get a GPU, it’s a nice, easy way to kind of save on that hardware expense, but really get the same benefits.

Christina Cardoza: Now we’ve talked a lot about the tools and the capabilities that we can leverage in this AI-deployment journey. But I would love to give our listeners a full picture of what an AI journey really entails from end-to-end, start-to-finish. So Audrey, would you be able to walk us through that journey a little bit from development to deployment, and even beyond deployment?

Audrey Reznik: Yeah, I can definitely do that. And what I will do is I will share some slides. For those that are just listening through their radio, I'll make sure that my description is good enough so that you won't get lost. What I'm going to be sharing with you is the Red Hat OpenShift Data Science platform. This is a cloud service that is available on AWS. And of course this can have hybrid components, but I'm just going to focus on the cloud services aspect. This is a managed service offering that we have for our customers. We're mainly targeting our data scientists, data engineers, machine learning engineers, and of course our IT folks, so that they don't have to manage their infrastructure. So, what we want to look at in the journey, especially for MLOps, is a handful of steps that are very important.

We want to gather and prepare the data. We want to develop the model. We want to integrate the models into application development. We want to do model monitoring and management. And we have to have some way of retraining these models. Those are five very important steps. And at Red Hat, again as Ryan talked about earlier, we don't want to reinvent everything. We want to be able to use some of the services and applications that companies have already created. And a lot of open source companies have created some really fantastic applications and pieces of software that fit each step of this MLOps journey, or model life cycle.

So before I go into taking a look at all the different steps of the model life cycle, I'm just going to build up this infrastructure for you. This managed cloud services platform, first of all, sits on AWS, and within AWS Red Hat OpenShift has two offerings: We have Red Hat OpenShift Dedicated, or, as some may be more familiar with it, Red Hat OpenShift Service on AWS, which we affectionately call ROSA.

Now, even though we have these platforms, we want to take care of any hardware acceleration that we may need. So we want to be able to include GPUs, and we partner with NVIDIA to use NVIDIA GPUs for hardware acceleration. We also partner with Intel®. Intel® not only helps with that hardware aspect; we'll also point out where OpenVINO comes in a little bit later.

On top of this basic infrastructure, we have what we call our Red Hat managed cloud services. These are going to help take any machine learning model that's being built all the way from gathering and preparing data, where you could use something such as our streaming services for time series data, to developing a model with the OpenShift Data Science application or platform, then deploying that model using source-to-image, and then doing model monitoring and management with Red Hat OpenShift API Management.

Again, as I mentioned, we didn't want to create everything from scratch. So what we did, for each part of the model life cycle, was invite various independent software vendors to come in and join this platform. If you wanted to gather and prepare data, you could use Starburst Galaxy; or, if you didn't want to use that, you could go back to the Red Hat offering. If you wanted to develop the model, you could use Red Hat OpenShift Data Science, or you could use Anaconda, which comes with prebaked models and an environment where you can develop and train your model, and so forth.

But what we also did was add in a number of customer-managed software offerings. And this is where OpenVINO comes in. With this independent software, again, we can develop our model, but this time we may use the Intel® oneAPI AI Analytics Toolkit. And if we wanted to integrate the models into app development, we may use something like OpenVINO, and we could also use something like IBM Watson.

The idea, though, is that at the end of the day we invite all these open source products into our platform so that people have choice. And what's really important about that choice is they can pick whichever solution works best for the particular problem they're working on.

And, again, with that choice, they may see something they haven't used before that helps them innovate, or that makes their product a lot better.

So this type of platform lets you do everything you need: ingest your data; develop, train, and deploy your model; bring your application engineers in to create the front end and the REST API services for an intelligent application; and then retrain the model when you need to. That makes the whole MLOps process a lot easier. You have everything within one cohesive platform, so you're not trying to fit piecemeal solutions together, as I mentioned before. And at the end of the day you have a product that everyone on your team can use to collaborate and push something out into production a lot more easily than they may have been able to in the past.

Christina Cardoza: That’s great. Looking at this entire AI journey and the life cycle of an AI intelligent application, Ryan, I’m wondering if you can talk a little bit more about how OpenVINO works with OpenShift, and where in this journey does it come in?

Ryan Loney: Yeah. If I can go ahead, I'll share my screen now and just show you what it looks like. For those who can't see the screen, I'll try my best to describe it. I'm logged into the OpenShift console, and this is an OpenShift cluster that's hosted on AWS. You can see that I've got the OpenVINO toolkit operator installed. OpenShift provides this great operator framework for us to directly integrate OpenVINO and make it accessible through this graphical interface.

I'll start from the deployment part at the end here, and work backwards. Audrey mentioned deploying the models and integrating with applications. Once I have this OpenVINO operator installed, I can just create what's called a model server. This is going to take the model or models that my data scientists have trained and optimized with OpenVINO and give me an API endpoint that you can connect to from your applications in OpenShift.
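
For a sense of what "an API endpoint" means in practice, here is a hedged sketch of a client calling a served model over REST, in the TensorFlow Serving-compatible style that OpenVINO Model Server exposes; the host, port, model name, and input shape are all hypothetical.

```python
import json
import urllib.request

# Dummy payload; the real input shape and values depend on the deployed model.
payload = {"instances": [[0.0] * 10]}

req = urllib.request.Request(
    "http://model-server.example:8080/v1/models/product-classifier:predict",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    predictions = json.loads(resp.read())["predictions"]
print(predictions)
```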

So, again, the great thing about this is the ability to just have a graphical interface. When I create a new instance of this model server, I can just type in the box and give it a name that describes what it's doing. Maybe this is product classification for retail, so I'd call it "product classifier" and give it that name. Then it's going to pull the image that we publish to Red Hat's registry, which has all of our dependencies for OpenVINO to run, with the Intel® software libraries baked into the image. And if I want to do some customization, like changing where I'm pulling my model from, or doing a multimodel deployment versus a single one, I can do that through this drop-down menu.

And the way this deployment works is we use what's called a model repository. Once the data scientist and the developer have the model ready to deploy, they can just drop it into a storage bucket: a persistent volume in OpenShift, pretty much any S3-compatible storage, or a Google Cloud Storage bucket. You just create this repository, and then every time an instance or a pod is created, it can quickly pull the model down, so you can scale this up. Basically, once I click "create," that will immediately create an instance that's serving my model, which I can scale up with something like a service mesh, using the service mesh operator, and put into production.
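
As a sketch of what such a repository typically looks like, here is the versioned directory layout that OpenVINO Model Server conventionally expects; the model name is hypothetical, and your storage bucket or persistent volume would hold the same structure.

```
models/
  product-classifier/      # hypothetical model name
    1/                     # numeric version directory
      model.xml            # OpenVINO IR topology
      model.bin            # weights
    2/                     # dropping in a new version lets pods pick it up
      model.xml
      model.bin
```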

I'll go backwards now. We talked a little bit about optimizations. We also have a Jupyter notebook integration, so if you want some ready-to-run tutorials that show how to quantize a model or how to optimize it with OpenVINO, you can do that directly in the Red Hat OpenShift Data Science environment, which is another operator that's available through Red Hat. It's a managed service, and I've actually already set this up, so this is what it looks like. I'll just show you the Jupyter interface. If I wanted to learn how to quantize a model, which Audrey described, reducing the precision from FP32 to integer 8, there are tutorials I can run. And I'll just show the output of this Jupyter notebook. It does some quantization-aware training and takes a few minutes to run. You can see that the throughput goes from about 1,000 frames per second to 2,200 frames per second without any significant accuracy loss. So: a very minimal change in accuracy, and that's one way to compress the model and boost the performance. There are several tutorials that show how to use OpenVINO and generate these models. And once you have them, you can deploy them directly, like I was showing, through the OpenShift console and create an instance to serve those models in production. That's what's really great about this: if you want to just open up a notebook, we give you the tutorials to teach you how to use the tools at a high level. With OpenVINO, when we talk about optimization, we're talking about reducing binary size, reducing memory footprint, and reducing resource consumption. OpenVINO was originally focused on the IoT space, at the edge, but we've noticed that people care about resource consumption in the cloud just as much, if not more, when you think about how much they're spending on their cloud bill. If I can apply some code to optimize my model and go from processing 1,000 frames per second to 2,200, when, as Audrey said, 30 FPS or 15 FPS is standard video, I'm essentially getting that capacity for free. You don't have to spend more money on expensive hardware. You can process more frames per second and more video streams at the same time, and you can unlock this by using our tools.
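
As a hedged sketch of what such a quantization notebook boils down to, here is the flow using NNCF, the post-training quantization library that newer OpenVINO releases pair with (the 2022.1-era tutorials used the Post-training Optimization Tool, but the idea is the same); the model path and the tiny calibration set are hypothetical placeholders.

```python
import numpy as np
import nncf
from openvino.runtime import Core, serialize

core = Core()
model = core.read_model("model_fp32.xml")  # hypothetical FP32 IR

# Hypothetical calibration data; in practice use a few hundred real samples.
samples = [{"image": np.zeros((1, 3, 224, 224), dtype=np.float32)}]

def transform(item):
    # Map one dataset item to the model's input tensor.
    return item["image"]

calibration = nncf.Dataset(samples, transform)
quantized = nncf.quantize(model, calibration)   # FP32 -> INT8
serialize(quantized, "model_int8.xml")          # write the compressed IR
```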

OpenVINO also helps even if you don't want to quantize the model because you want to make sure you maintain the accuracy. You can use our tools to convert from FP32 to FP16, that is, floating point 32 to floating point 16, which reduces the model size and the memory consumption but doesn't impact the accuracy. And whether or not you perform quantization, we do some things under the hood, like operation fusion and convolution fusion. These all give you a performance boost, reduce the latency, and increase the throughput, but they don't impact accuracy. So those are some of the reasons our customers are using OpenVINO: to squeeze out a little more performance and to reduce resource consumption, compared to deploying with a deep learning framework alone.
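
A minimal sketch of that FP16 step, assuming a recent OpenVINO release where weight compression is exposed through ov.save_model (in the 2022.1 era the equivalent was the Model Optimizer's --data_type FP16 flag); the file paths are hypothetical.

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model_fp32.xml")  # hypothetical FP32 IR

# Writes an IR whose weights are stored as FP16; accuracy is typically unchanged.
ov.save_model(model, "model_fp16.xml", compress_to_fp16=True)
```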

Christina Cardoza: Great. Listening to you both and seeing the tools in action across the entire life cycle, it's very clear that there is a lot that goes into deploying an AI application. And luckily, the work that Intel® and Red Hat have been doing has eased the burden for businesses and developers. But I can imagine if you're just getting started, you're probably trying to wrap your head around all of this and understand how to approach AI in your company and how to start an AI effort. So Audrey, I'm wondering: Where is the best place to get started? How can you be successful on this AI journey?

Audrey Reznik: It's funny that you should mention that. One of my colleagues wrote an article arguing that the best data science environment to work in isn't your laptop. He was alluding to the fact that when data scientists first start out creating some sort of model that will fit into an intelligent app, they usually put everything on their laptop. Why do they do that? Well, first of all, it's very easy to access. They can load whatever they want. They know their environment isn't going to change, because they set it up, and they may have all their data connections added. That's wonderful for development, maybe, but it isn't looking toward the future: How do I scale something that's on my laptop? How do I share it? How do I upgrade it? This is where you want to move to some sort of platform, whether on prem or in the cloud, that allows you to duplicate your laptop. Ryan was able to show that he had an OpenVINO image with the OpenVINO libraries that were needed. It's within Python, so it had the appropriate Python libraries and packages. He was able to create an ephemeral IDE that he could use. What he didn't point out was that within that one environment he could use a GitHub repo very easily, so he could check in his code and share it.

When you have a base environment that everybody's using, it's very easy to take that environment and upgrade it, increase the memory, increase the CPU resources being used, or add another managed service in. You have something that's reproducible, and that's key, because what you want to be able to do is take whatever you've created and then deploy it successfully.

So if you're going to start your AI journey, please try to find a platform. I don't care what kind it is, and I know Red Hat and Intel® will kill me for saying that, but find a platform that will allow you to do some MLOps. Something that lets you explore your data; develop, train, and deploy your model; and work with your application engineers so they can write a UI or REST endpoints that connect to your model. Something that helps you deploy your model so you can monitor it, manage it for drift, and see whether it's working exactly how it's supposed to work. And then the ability to retrain. You want to be able to do all those steps very easily. I'm not going to get into GitOps pipelines and OpenShift Pipelines at this point, but there has to be a way that, from the beginning to the deployment, it's all done effortlessly, and you're not using chewing gum and duct tape to put things together in order to get to production.

Christina Cardoza: That's a great point. And Ryan, I'm curious: once you get started on your AI journey and have some proven projects behind you, how can you use OpenVINO, and how does Intel® by extension help you boost your efforts and continue down a path toward a future with AI in your business and operations?

Ryan Loney: Yeah. I'd say a good first step would be to go to openvino.ai. We have a lot of information about OpenVINO and how it's being used by our ISVs, our partners, and customers. Then there's docs.openvino.ai and the "get started" section. I know Audrey said not to do it on your laptop, but if you want to learn the OpenVINO piece, you can at least get started on your laptop and run some of our Jupyter notebooks, the same Jupyter notebooks I was showing in the OpenShift environment. You can run those on your Red Hat Linux laptop or your Windows laptop, and start learning about the tools and the OpenVINO piece.

But if you want to connect everything together, in the future I believe we'll have a sandbox environment that Red Hat will be providing, where you can log in and replicate what I was just showing on the screen.

But really, to get started and learn, I would check out openvino.ai and docs.openvino.ai. If you have an Intel® CPU and Linux, Windows, or Mac, you can start learning about our tools.

Christina Cardoza: Great. Well, this has been a great conversation, and I'm sure we could go on for another hour talking about this, but unfortunately we're out of time. So I just want to take the time to thank you both for joining us today and coming on the podcast.

Ryan Loney: Thank you.

Audrey Reznik: Yeah, thank you for having us.

Christina Cardoza: And thanks to our listeners for joining us today. If you enjoyed the podcast, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Hyperconverged IT Infrastructure: A Game-Winning Play

Going to see a baseball game live is an experience. There is nothing like the smell of the hot dogs or having a food vendor toss you a bag of salted peanuts. Then there is the energy and the buzz of the crowd with every positive or negative play. Whether your team wins or loses, nothing beats the thrill of the game.

But that camaraderie and excitement is contingent on the stadium experience. Fans don't want to waste time in long lines. They don't want to have to track down those food vendors in the middle of the game. It is not just the team that keeps fans coming back. It's the entire outing, from the ticket booths to the concession stands to the digital scoreboards and even the merchandise vendors.

But this all depends on the ability to operate as smoothly and effectively as possible, which means innovating from stadium edge to data center.

That is easier said than done. Most baseball organizations still run outdated technology, maintain legacy equipment, and deal with multiple vendors. And they just don't have the staff or resources to support a major transformation. The technology stack can become very complex very quickly.

For instance, one major professional baseball franchise was looking into how it could improve its stadium experience and operations on the fly. To do so, it had to find innovative ways to update its IT platforms. It found that real-time data would be crucial to identifying bottlenecks and new opportunities, but its infrastructure limited its ability to be agile and obtain that valuable information.

“At the end of the day, they want to make sure when people come into the stadium, they provide a top-notch user experience. And they can only do that by using real-time data and analytics to understand what the situation of the crowd is in the stadium,” says Rupesh Chakkingal, Product Management, Cloud and Compute, Cisco Systems, a networking, cloud, and cybersecurity solution provider.

IT Performance That Doesn’t Strike Out

The baseball franchise wanted to measure ingress and egress counts, wait times, and location-based engagement as it happened. For instance, it wanted to detect how many fans entered the stadium at a given time, how long it took to obtain their tickets, areas of congestion within the stadium, and at what points in the game fans started to leave.

Having access to this information would allow the organization to detect if gates became too crowded and immediately minimize wait times by deploying additional security or ticketing personnel. If concession stand lines became too long, the stadium could send out portable merchandise carts to reduce wait times.

A hyperconverged infrastructure combined with the #hybrid cloud provides the low latency, #network integration, and high-performance computing at the scale organizations are searching for. @Cisco via @insightdottech

Beyond the stadium, the organization wanted to provide play-by-plays as they happened to fans watching at home or on the go.

To achieve the cloud-like agility, performance, and security it needed, the baseball franchise worked with Cisco to deploy the hyperconverged infrastructure solution HyperFlex.

According to Chakkingal, Cisco HyperFlex enabled the IT team to consolidate storage, compute, and data management silos onto a cluster of x86 servers. Traditionally, the team would have to test new capabilities and then wait to measure the impact. With Cisco HyperFlex, it can analyze those efforts in real time to make better-informed decisions.

All the stadium's digital systems now run on Cisco HyperFlex with Intel® Xeon® Scalable processors and Intel® Optane technology. This allows the organization to achieve better performance with fewer nodes and less storage. The IT team went from managing 30 legacy nodes individually to eight nodes collectively. It also reduced the number of physical data racks from two to half a rack, and went from 30 hypervisor hosts to only six, all managed through a simple, easy-to-use interface.

“What used to take 12 to 18 hours to query now takes minutes,” says Chakkingal. “In fact, the results of the stadium’s queries started to come back so fast, the IT team thought something was awry with their system or queries weren’t working.”

A Home Run for Hybrid Cloud Infrastructure

The baseball franchise success story is just one example of the real business benefits Cisco HyperFlex offers. The technology is helping customers in a wide variety of markets, such as financial services, manufacturing, retail, healthcare, and the public sector.

As big data, artificial intelligence, and cloud computing become core to digital business success, Chakkingal sees more and more organizations starting to redesign their current infrastructure around hybrid cloud. It provides the low latency, network integration, and high-performance computing at the scale organizations are searching for.

Chakkingal says a successful hybrid cloud strategy has three essential components:

  • Hyperconverged infrastructure to provide a cloud-like operating experience and integration with ecosystem partner services
  • Workload engine to deliver virtual machines and containers as a service with full stack observability
  • Workload management and optimization both on-prem and off-prem through a single pane of glass

“Our secret ingredient to help customers successfully adopt a hybrid cloud strategy goes well beyond Cisco HyperFlex,” he says. “We provide the entire lifecycle management capabilities for customers.”

Cisco understands customers typically work with multiple other technologies. To ensure interoperability, the company offers Validated Design Guides that provide a reference architecture on how to get started and ensure multiple third-party components work together.

Going forward, Cisco plans to offer customers even more choice and flexibility when it comes to adopting Cisco HyperFlex. The company already offers easy-to-deploy and easy-to-manage edge node configurations for quicker data response times. And it is working on a software-only version of HyperFlex on third-party vendor platforms that will enable additional edge use cases.

“There’s a growth of IoT and smart devices, which will demand a lot of processing closer to the endpoint,” says Chakkingal. “We’re providing our customers with a better user experience and faster access to the data they need now and into the future.”


This article was edited by Georganne Benesch, Associate Content Director for insight.tech.