Safeguarding Industry 4.0 with Next-Gen OT Security Solutions

Cybersecurity is a high priority for every business nowadays. But despite improvements in IT security, the operational technology (OT) used to monitor and control industrial processes is often dangerously unprotected. Over the past couple of years, the United States Cybersecurity and Infrastructure Security Agency (CISA) has issued multiple public warnings about cyberattacks that put OT systems at risk, with rising ransomware threats to operational technology assets top of mind most recently.

As manufacturing digital transformation efforts accelerate, the problem will only worsen—making Industry 4.0 technology a tempting target for cybercriminals, hacktivists, and even the militaries and intelligence agencies of nation states. But next-generation industrial security appliances may offer a solution to the unique challenges of OT security.

IT/OT Convergence: Synergy or Cyber Risk?

Digital transformation initiatives have filled the modern factory with AI and IoT technology: a multitude of smart sensors that collect data from the manufacturing process in real time. The result is that historically “unintelligent” OT environments now generate a wealth of useful data—data that can be shared with IT networks for reporting, analysis, and process optimization.

This merging of IT and OT networks is known as the IT/OT convergence, and the business case behind it is clear, according to Calvin Ma, Product Manager at NEXCOM International, a manufacturer of network and Industry 4.0 solutions. “Companies gain greater control over their manufacturing process. And customers can see inside the factory, giving them better insight into progress and quality,” he explains.

But in addition to the benefits, IT/OT convergence brings significant risks. After all, a smart factory is a factory that is connected to the internet—and this exposes OT networks to attacks. That’s a serious problem, since OT is notoriously hard to protect because of factors like legacy equipment that simply can’t run security software, as well as the questionable security practices of OT vendors.

In addition, joining a secure IT network to an OT network introduces problems of its own. “When everything is connected,” says Ma, “cybersecurity events that would have been easily contained on the IT network can now spread to the OT network—and OT networks are relatively fragile.”

But an expanding OT attack surface is an unacceptable risk for manufacturers.

The Challenges of OT Security

One of the surprising things about OT security, given the well-known difficulties, is how similar it is to IT security.

The cyber threats to OT networks, for example, mirror those faced by IT networks: ransomware and viruses, hacking and backdoor software, worms, and botnets. And the basic solution to OT security is like the approach used on the IT side: Monitor network traffic for suspicious data packets, segment networks so that malicious packets can be contained when they are detected, and place critical assets behind extra layers of protection.
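
To make the parallel concrete, here is a minimal sketch of that monitor-and-contain approach in Python, using the Scapy packet library. The interface name, port allowlist, and subnet are illustrative assumptions rather than details of any particular OT deployment:

```python
# Minimal OT traffic monitor: flag packets that fall outside an
# allowlist of expected industrial protocols on a network segment.
# Interface, ports, and subnet below are illustrative assumptions.
from scapy.all import IP, TCP, UDP, sniff

ALLOWED_PORTS = {502, 44818, 4840}  # Modbus/TCP, EtherNet/IP, OPC UA
OT_SUBNET = "10.10."                # hypothetical OT address range

def inspect(pkt):
    """Alert on traffic to the OT subnet that uses an unexpected port."""
    if not pkt.haslayer(IP):
        return
    layer = TCP if pkt.haslayer(TCP) else UDP if pkt.haslayer(UDP) else None
    if layer is None:
        return
    dport = pkt[layer].dport
    if pkt[IP].dst.startswith(OT_SUBNET) and dport not in ALLOWED_PORTS:
        # A real appliance would trigger containment here; we just log.
        print(f"ALERT: {pkt[IP].src} -> {pkt[IP].dst}:{dport} not allowlisted")

# Watch the OT-facing interface without buffering packets in memory.
sniff(iface="eth1", prn=inspect, store=False)
```

A production appliance would act on the alert, updating segmentation rules or dropping the traffic, rather than just logging it.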

Why, then, is OT security so challenging?

A big part of the problem has to do with the technical limitations of OT endpoints. “Many of these systems were not designed with security in mind,” says Ma, adding that legacy OT assets in factories often run on nonstandard or archaic operating systems, making it “impossible to install security software on them.”

Another challenge comes from the business culture of industrial facilities themselves. The KPI that matters most to plant managers is productivity. And downtime, however reasonable the justification, is expensive. Convincing leadership to take a network offline to upgrade security measures—or asking them to implement a solution that will require regular network outages for maintenance in the future—is a tough sell.

But this leaves manufacturers with a difficult choice. Should they accept costly downtime to improve OT security, or roll the dice and risk a total shutdown later on?

Obviously, neither option is a good one. But a new breed of industrial security appliances—rugged, plug-and-protect firewall devices designed to meet the needs of factories—may offer a way out of this conundrum.

OT Security with Less Downtime Eases Risk Management

NEXCOM’s Hwa Ya Plant implementation is a case in point.

Hwa Ya is NEXCOM’s smart manufacturing demo site—and also a working production facility. As such, it has all the usual physical challenges of factories:

  • a large footprint with many different types of equipment in constant operation
  • a harsh environment with high temperatures
  • cramped, hard-to-access spaces that complicate device maintenance

To secure the OT network at Hwa Ya, NEXCOM used its own ISA 140 industrial security appliance. Multiple units were deployed at key points throughout the facility to establish a micro-segmented OT network. The eSAF cybersecurity software package, developed by OT security specialist TMRTEK, was installed on the devices, allowing them to monitor and inspect OT network traffic in the same way that traditional endpoint security software does on an IT network (Video 1).

Video 1. Industry 4.0 uses solutions like ISA 140 for micro-segmentation and packet inspection to overcome OT security challenges. (Source: NEXCOM)

The result was a well-secured OT network with good visibility. But perhaps just as important, the Hwa Ya deployment demonstrated the business benefits of a modern security appliance like ISA 140 in a factory setting.

ISA 140 is compact and easy to install, so implementation doesn’t entail costly shutdowns or extensive infrastructure upgrades. And once in place, an out-of-band (OOB) remote management feature and bypass functionality allow OT security personnel to maintain the devices without disrupting the network.

Ma credits many of these benefits to NEXCOM’s technology partnership with Intel®: “The Intel Atom® processor that we used has built-in OOB functionality, which let us develop features that would minimize downtime without having to enlarge our circuit design.”

In addition, says Ma, the Intel chip was a good fit for the physical challenges of a factory setting: “The CPU is high performance, very reliable, and rated for extreme temperatures: perfect for industrial control system (ICS) security.”

The Future of Industrial Cybersecurity

As manufacturers shift to an Industry 4.0 model, threats to OT networks are likely to increase. Bad actors are as eager as any enterprise to take advantage of a market opportunity. But modern industrial security appliances will provide an effective and affordable way for businesses to defend themselves.

And in the years ahead, as OT networks grow more complex and diverse, manufacturers will also have access to security equipment purpose-built for distinct, real-world use cases. “We’re going to see a trend toward specialization in OT security,” says Ma, whose company is currently expanding its ISA 100 Series product line with appliances designed specifically for wireless (ISA 141) and switched (ISA 142) OT network security.

“Sooner or later, everything in the factory is going to be on a single network. But with advances in industrial security technology, businesses will have tailored solutions fitted to the various OT scenarios they need to make that network truly zero trust—ensuring a secure future for Industry 4.0,” says Ma.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on October 3, 2022.

Reinventing Smart Stores as a Medium: With VSBLTY

Fact or myth? Brick-and-mortar stores are dead and buried. Given the recent trend toward online shopping, it must be true, right? Wrong. Believe it or not, it is a myth.

Brick-and-mortar stores are alive and well. And they are only improving as they undergo a digital evolution and search for new ways to create “intimate” in-store customer engagements in a world that increasingly embraces the convenience of online shopping.

In this podcast, we take a close-up look at how offline and online are merging—and how this developing omnichannel relationship will impact consumer engagement going forward. Plus, we examine how retailers are successfully sifting through massive amounts of customer data to hyper-target messaging and drive in-store sales.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: VSBLTY

Our guest this episode is Jay Hutton, Co-Founder and CEO of VSBLTY, a retail technology solution provider. At VSBLTY, Jay works with retail customers to realize the store as a medium and help brands drive impressions at the point of sale.

Podcast Topics

Jay answers our questions about:

  • (0:00) Intros
  • (2:56) Evolutions in the physical retail space
  • (4:18) The Store as a Medium movement
  • (7:20) Creating a complete omnichannel experience
  • (9:14) Benefits of Store as a Medium from a customer perspective
  • (11:13) Successfully undergoing a retail transformation
  • (13:23) Ongoing Store as a Medium collaborative efforts
  • (15:57) The IT and technology investments for Store as a Medium
  • (18:12) Store as a Medium customer examples
  • (27:05) Final thoughts

Related Content

To learn more about ongoing retail transformations, read Retail Digital Signage Gets an Upgrade with Computer Vision. For the latest innovations from VSBLTY, follow them on Twitter at @vsbltyco and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech, and today we’re talking about retail stores as a medium. Here to tell us more about what this means is Jay Hutton from VSBLTY. Hi Jay, welcome to the show.

Jay Hutton: Thank you, Christina, it’s my pleasure.

Christina Cardoza: Before we dive into the conversation, I want to learn a little bit more about you. So, what can you tell us about your role, and the company VSBLTY?

Jay Hutton: Well, me personally, I’m a 25- to 30-year, long-suffering tech entrepreneur, serial startup kind of guy. I’ve worked for large companies, but I’ve also, I mean, I feel most comfortable when I’m digging the ditches of small companies. That’s my place and my role in this world. I’m the founder and CEO of VSBLTY going back to 2015, and we scanned the horizon of the tech space at that time, and we tried to figure out a place where we could provide meaningful growth and progress in a suite of software. And we identified at that time that the digital-signage domain was about to explode. Thankfully, we got that one right. It was, and the traditional players had really atrophied, Christina. They hadn’t evolved their product. And so we felt there was an opportunity to come in with best practices and really capable software that was able to perform in a way that satisfies and addresses a pain.

That’s often what you’re looking for as an entrepreneur. You’re looking for a problem to be solved, a piece of pain that can be addressed. And we then got gifted with the idea of computer vision, bringing to digital signage the ability to measure consumer experiences. So that is the creation of the company. We’ve been in business now for about seven years and still consider ourselves to be in the startup domain, although, honestly, we’re probably more in the emerging—if you could discern the difference, I suppose, startup is the initial phase—emerging as the phase that comes after that, however long that takes. Yeah.

Christina Cardoza: I know you were joking maybe just a little bit when you said you’re long suffering in this space, but I can see how it could be a hard job to do or to take on, especially in the retail industry. We’ve seen so many transformations happening little by little with self-checkouts, but now, over the last couple of years, stores are having to transform even more and compete. Customers have this new expectation: they want convenience, and with online shopping they can get that without even leaving their home. So that has physical stores having to rethink: “How do we get people in the stores and make things more convenient for them?” So, to start off the conversation, I’d love to hear your thoughts on the recent trends in the retail space, and how physical stores have been able to compete with online shopping.

Jay Hutton: Physical stores are not dead, nor are they dying. There’s a modification of our consumer behavior for sure, which is resulting in some amount of commerce to be fulfilled through an online presence. But that doesn’t mean the store is dying; it’s evolving. And really, frankly, the pandemic has caused retail to really look at their consumer experiences, the customer journey, and modify it in a way that—I don’t want to say it’s more like online—but it’s more like online; it is delivering to the customer immediate response, immediate engagement with brand in a way that the brands value and the consumers value. So there’s this merging of online with offline in a way that is requiring the store to be reinvented more digitally embracive, more consumer engagement, more consumer-centric, which I think is a challenge to a lot of traditional retailers. But I’m delighted to report that they’re really stepping up to the challenge in all ways, I think.

Christina Cardoza: And as part of that evolution—we teased in our intro—stores are becoming more of a medium, and I know this is a mission of the company, is to transform the retail stores—stores as a medium. So, what do we mean by that, and what goes into this new transformation?

Jay Hutton: The store has always been a medium for messaging. In the past, that has taken the form of poster boards or stickers on the floor in front of the Tide detergent. We’ve all seen it. And it has been meaningful in the way it has redirected brand spend. Brands spend money to drive impressions at the point of sale, at the moment of truth where you are most likely to be influenced by a message. What’s different in the last two or three years is how all that’s becoming digital. And so we’re talking about stores embracing digital surfaces. Could be a digital cooler, a transparent LED in a cooler, it could be an endcap that’s got a digital screen embedded. It could be shelf strips that are interactive, that drive attention, gaze, and engagement at the point of sale.

These are all ways in which stores are embracing and investing in turning the store into an advertising medium. And what does that mean? This is what we say when we say it’s an advertising medium. We know the internet is an advertising medium. We know that broadcast TV is an advertising medium. We know print is less meaningful, but still an advertising medium. We know that billboards on the side of highways are an advertising medium. We’re now at a point where the store itself is an advertising channel.

So when the big brands like Unilever, Coca-Cola, PepsiCo, etc. are making decisions on which channel they invest in, now the store is a legitimate channel to invest in because it is where the consumer makes decisions, it is where they can be impacted, is where the brand can deliver a brand narrative. These are all really valuable. There’s not a brand on the planet that doesn’t want a more intimate engagement with their consumer, which is exactly what the “Store as a Medium” is.

And so we’re relieving from the stores the responsibility of investing in the infrastructure. Instead, that gets invested in by parties that are interested in building up the channel, and then it becomes viable as a media channel. So that brand manager who’s responsible for a significant budget makes a decision about whether or not this specific campaign gets delivered through print, through out-of-home (outdoor), through store, through the internet, or maybe all of them. And that’s the big sea change.

Christina Cardoza: I love hearing about all of these physical digital transformations, like you mentioned: the digital shelves or digital signage, digital coolers. But I’m curious, as a lot of retailers have started on their digital transformation journeys, omnichannel has been a big focus for them, and that’s sort of blending their online storefronts with their physical storefronts. So how does “Store as a Medium” fit into that retail omnichannel experience?

Jay Hutton: Well, we all knew that the game would change when we were able to measure audiences. So, as you frequently do in the evolution of technology, you’re kind of waiting for the technology, right? You’re waiting for the technology to keep up or catch up to the demands of the marketplace. And Intel®, among others, has proven leadership in delivering high-capacity, powerful processors at the edge. So now we’ve got the ability to draw inferences, computer vision, looking at audiences and deriving meaningful data. How many men, how many women, how many 25-year-olds, how many 35? Not privacy data, not data that would make any of us feel creepy, but data that is relevant to a brand.

So we all knew that once we cracked the code of that, that it would open up the store as a valuable medium and now realistically become among the channels that are represented by this phrase, “omnichannel.” It wasn’t before and now it is. And now we’ve got this opportunity to drive really meaningful insights—what brands would call the data dividend. Not only are they interested in delivering advertising at the point of sale, but they’re interested in lift. They want to sell more stuff and they’re interested in this unbelievably complex and robust data set that they’ve never had before. And they’ve really evolved. All brands on the planet have evolved to a point where data—knowing more about their customer—allows them to segment, laser focus, and understand their customer engagement much more acutely than ever before.

Christina Cardoza: That’s a great point. Being able to get those instant customer behaviors in real time can allow brands to sort of change messaging on the fly. But I can imagine it can also provide personal services for the consumer looking at that signage or at that digital shelf cooler. So, what—can you talk about some of the benefits that the customers get out of this too?

Jay Hutton: Sure. So we talked about brands to get lift to get more data and consumer engagement. The brands begin to have a direct and meaningful dialogue with the customer. In a world where there is no consent, no consent is secured from the customer. We’ve got a bunch of very focused marketing that can be delivered to the customer as if that customer is a member of a group, a gender group, an age group, whatever.

But in a world where we’re getting consent, maybe we’re aligning a loyalty app with what we’re doing on the digital display. Now we’ve got a customer that’s consenting to get personalized advertising, and that’s meaningful to a customer. That’s what’s in it for the customer. Now it’s not just general broadcast, shotgun advertising; now it’s laser focused. Jay likes Coca-Cola more than he likes Pepsi, so I’m going to drive coupons, digital coupons, or I’m going to drive campaign promotion to him specifically because of his brand affiliation and because of his brand interests.

This is really the first time we’ve begun to drive consent-based advertising in a way that consumers value. I don’t care about stuff I don’t ever buy; I’m not influenceable necessarily at the point of sale. But if I get choices on brands that I’ve already made, have a predilection or a preference for, then that’s more meaningful to me as a consumer.

Christina Cardoza: So how does the company VSBLTY and retail stores and brands—how do you actually make this happen? What are the components? You mentioned digital cooler, shelves, signage. What are the technology components that these stores and brands really need to have to make this all possible?

Jay Hutton: That is perhaps the most significantly complex part of the business model, that took a couple or three years to figure out, and this is why we work with WPP and Intel and others that are stakeholders in this overall problem and have figured out some of the components.

So let’s talk about what’s meaningful to a retailer. Retailers function their business on a 3% to 4% gross margin. Like it’s a very, very thin margin. So, what is the probability that a retailer is interested in a multimillion-dollar capital infrastructure for digital overlay? But what’s the probability? Almost zero unless you’re Target, Walmart—some of the big players who really understand media and take a very sophisticated approach to media. Unless you’re in that 1% of 1% of retailers, you’re not interested in doing all the heavy lifting associated with the digital transformation.

So then the hypothesis was if a group of us called the “Store as a Medium” consortium could get together and solve those problems on behalf of retailers, therefore creating a media infrastructure, capitalizing it, deploying it, managing it, even doing brand-demand creation for the media network, it seems to me that that would satisfy all the requirements and therefore it simplifies their value proposition to a retailer by saying, “You don’t have to do anything. We’ll open up the doors. Let’s have an agreement to do this together over three, four, five years. Let’s do it at scale—5,000 stores, 10,000 stores—and together we’ll create this channel.”

That is the evolution of the value proposition that we’ve created over the last several years. And, of course, it’s based upon a foundation of mistakes and learnings and evolution of thought. And we’re really at a point now where we’ve got a really unique offering amongst the group of companies, and an opportunity to really lead this category—not only with a practical application, but with the thought leadership that this requires at the moment.

Christina Cardoza: So does that “Store as a Medium” collaborative effort exist today beyond VSBLTY?

Jay Hutton: Sure. I don’t know beyond VSBLTY—we’re certainly among the players that are part of driving it. And so, do others understand this? Of course. Boston Consulting Group said this—says this is a $100 billion market by 2025, and it’s under $5 billion today. Even if that statement is hyperbolic, we know it’s exploding. There’s every indication that it’s exploding. So this is no longer a whiteboard exercise. It was; we’re doing this now.

Our largest deployment with Intel is in Latin America, where together with Anheuser-Busch, who are, interestingly, Christina, both a CPG and a bricks-and-mortar retailer in Latin America. So they own physical bricks and mortar. We really couldn’t find a better partner than them, because they actually speak both glossaries. They have the vernacular of a retailer, and they have the vernacular of a CP—of a consumer packaged—goods brand. So, together with them, we’re building a network of stores which will reach 50,000 stores by the end of year four. And were we to reach that objective, and I firmly believe we will, it’ll be the largest deployment of a retail-media network on the planet. And I think we’ll represent a leadership position with respect to growing this. And remember, we’re doing this in Latin America, where it’s not modern trade; this is traditional trade. This is a 10-square-meter convenience store on the side of a dirt road in Guadalajara. If we can do it there, it gives us a leg up on doing it in places where it’s got a less challenging environment.

So we are leading, and here in the US we’ve signed together—the consortium—we’re working together to deploy at a 2,800-location fuel and convenience chain. We’re also working on traditional c-store, which by the way is probably going to be one of the early adopters of this category, because they don’t have the complexity of 110,000 SKUs that a large grocery might have. They might have 6,000 SKUs, where it’s just manageable; it’s more bite size. And so there’s an opportunity to do it there, and we’re delighted by the leadership we’re getting from Intel and others as we drive this idea and mandate.

Christina Cardoza: I’m very familiar with Anheuser-Busch brands—not too familiar—they’re beer brands. So I’m very interested because I didn’t know they were a physical store either. I want to come back to that and hear more about how you worked with them. But, first, going back to the retail stores where you have brands making these technology investments and bringing those into the store, are they leveraging any of the retail existing technical infrastructure? Or when they bring in these components is it brand new?

Jay Hutton: Everyone has the fantasy that existing infrastructure can be leveraged, but generally speaking we discover that is not the case. The Wi-Fi supplied in a Target or a Kohl’s or Walmart usually sucks. And we would have to deploy on top of that in order to get the dedicated access to bandwidth we would need. Now we’re edge, so we don’t have a disaster-recovery problem if the network goes away, but if you’re driving new content—to your point made earlier, adjusting content and creative on the fly—well then you need internet access to do that. And if we’re functioning over an in-store Wi-Fi that’s got consumers on it and the point of sale on it, it’s not workable.

So we have that fantasy that we’d be able to do that and therefore lower the cost of the total capital expenditure. But we no longer have that fantasy. Camera and network obviously exists for the purposes of loss prevention in retail, but, generally speaking, Christina, they’re up in a 30-meter ceiling or a 25-foot ceiling, and they’re looking down on heads, not direct on faces. And so when we deploy our technology, we generally deploy it with camera. And, again, we’re not picking up privacy data. We’re only picking up demographic data, which of course is useful to the brand to understand the overall, macro buying behavior, which of course is—that’s the yield, that’s the data dividend we spoke about earlier.

So for the most part this is new build, but new build, I should hasten to add, where we’re removing the capital-expense responsibility from the retailers. So if they deliver us a number of stores that is large enough, we’ll go and assemble the capital necessary to make it happen. And I think that’s probably the most important part of building this consortium—to have a legitimate group that’s got—and everybody playing their part—have got the ability to deliver these kinds of networks on scale.

Christina Cardoza: So let’s go back to the Anheuser-Busch example, or any other customer use cases that you have. What were the challenges that they were facing? Why did they come to VSBLTY? And what was the result of your partnership with them?

Jay Hutton: Well, well if you scan the globe from the American perspective—this is difficult to understand—but to start with what problem we are solving: in America, 65% to 75% of overall retail fulfillment, the entire commerce landscape, is fulfilled by big box—Walmart, Target, Kroger. In the rest of the world, with the exception of Western Europe, it is fulfilled by traditional trade—what you might call mom and pops. So it’s completely reversed in the rest of the world. We know modern trade has really good capacity and very good sophistication as it relates to the deployment of technology. But mom and pops—we knew that if we could solve that problem with the assistance of folks like AB InBev—Anheuser-Busch—that we would have a global runway, we’d have green fields that would extend to a global landscape. And really, that was the challenge.

So what’s the problem? The problem is there’s virtually no technology adoption in mom and pops in traditional trade. There’s not even point of sale in traditional trade, Christina. So there’s very little visibility, if you’ll allow me the pun. There’s very little visibility to what’s happening in traditional trade. So the deployment of camera technology initially satisfies the requirement of doing screen-based advertising—Corona, Heineken, you can imagine. But now I’ve also got a virtual window into the retail, which means that I can layer on other capabilities—planogram compliance, fraud compliance, POS.

So we’re just at the beginning of this technology adoption, which started with a revenue-generating platform called “Store as a Medium.” There’s other things we can do, all part of this remote-execution mandate, which is really critical for traditional trade. But we’re excited by the fact that we start with a revenue-generating model. And to have AB InBev—Anheuser-Busch—as a side-by-side partner for us allows us to tell that story with just considerably more legitimacy. We’ve got the ability to do this and deliver it. Right now we’re just over 2,000 stores in at about eight months. So, pretty good deployment so far. We’re going to accelerate it, but we’re happy with where we are at the moment.

Christina Cardoza: Great. And we’ve been talking about beverage stores and grocery stores, but I can also see a use case for this in other retail stores. I’m thinking like a cosmetic store—helping workers. I know when I go into a store, I want to know more about a product. Or you learn more about what’s going to be the best for my particular features. The workers are sometimes caught up, or I don’t normally want to go up to a human person and speak to them. So I can see digital signage, or some of these solutions giving you a lot more information and freeing up employees from doing other important tasks.

Jay Hutton: If there’s one category, if there’s one brand category, that can afford the investment in the digital infrastructure, it’s health and beauty. The margins are out of this world. There’s a technical sell, right? Because it’s not just pasta. And there’s a labor problem right now to getting skilled labor to be able to perform that role at the point of sale. So there’s an adoption happening in health and beauty that’s happening, that’s outpacing everything else, because it does have that ROI. And if there’s a place where the brands themselves will underwrite the cost of the digital infrastructure, it’s in health and beauty. Because there’s an opportunity there in a marketplace that has really kind of ridiculous gross margins; where they can invest and the ROI is almost immediate; and there’s an education issue right at the point of sale. You want to educate at the point of sale.

So, look for health and beauty to be probably a brand leader in the category. This doesn’t necessarily run in contrast to a grocery deployment or a big box deployment, because health and beauty can be co-resident—they can do it together. But one of the brand categories leading the way is going to be health and beauty.

Christina Cardoza: I can definitely see that. You’ve mentioned that we have computer vision models running, making all of this possible, gaining that customer analytics and behaviors, and they’re analyzing that data at the edge to make it fast and make it real time. And I know these are big areas for Intel—and I should mention the IoT chat and insight.tech as a whole, we are sponsored by Intel. So I’m curious how your work with Intel has made “Stores as a Medium” possible, and all these initiatives that you’re bringing to customers possible.

Jay Hutton: Intel has enormous global reach. If we’re having a particularly difficult time reaching the C-suite of a retailer, Intel can get there because they have a team dedicated to ensuring thought leadership. They’re not necessarily a company that’s—of course, at the end of the day, Intel wants to move silicon. But you would be surprised, or one may be surprised, as to the level of expertise—subject matter; narrow, specific vertical expertise—that Intel develops, and we lean on them all the time.

And of course the legitimacy they give to us—not only in domestic engagements but internationally as well—helps us enormously. When we can say we’re a side-by-side partner with Intel, and proud to be the 2022 Intel Channel Partner of the Year, VSBLTY is—it gives us a degree of legitimacy that gets us into the conversation. So—and also Intel has a track record of putting their money where their mouth is—when it comes time to really manifestly drive that thought leadership in a trade event or a speaking event or a published document, Intel will always be there with us, assisting us wherever we need that assistance, and we’re enormously gratified to be in that position.

Last thing I should mention, just occurred to me is, as I was making my last remarks, on the technical side, the silicon is evolving, right? And today we’re in a certain family of silicon that drives our solutions, and we can already see the next layer of silicon coming and we get early access to that. So by the time it’s available in production, we have a product that can run on it, which to us is an enormous advantage from the competitive point of view. We all have competitors, and for us to be able to run on the chip set that was released, like, last month, we’re already able to run on it, and a production variant is a huge advantage for us.

Christina Cardoza: I’m glad you brought up those technology advancements, because it sounds like we may be in the early phases or at the beginning of bringing these technologies and these transformations in the physical store, and technology is only going to get better. So, what do you think we can expect from this space, or what will happen to make “Store as a Medium” truly a reality?

Jay Hutton: So, it’s no longer conjecture, it’s no longer guesswork. We were in the guesswork/conjecture category in 2015, 2016, but there’s been enough evidence that this model works that we’re now looking at large-scale deployments. If you just look at Amazon and Walmart, between the two of them—and I may get my numbers slightly wrong—but $2 billion each in advertising revenue that was not there in the previous year. So, if you’re ever doubting the veracity of this category, just look to that. And really others are going to follow that, because if you’re not afraid of what Amazon and Walmart are doing and you’re in retail, then you’re just not paying attention, right? So they’re leading the charge with respect to that.

And I think that this, as I said earlier, there’s no longer any guesswork about whether or not this category will take off. The challenge now is the speed. And you’ve heard this a hundred times in technology in your career probably, but it’s a land grab at the moment. It’s getting contracted retailers to sort of do the dance with you and commit to you long term. And that’s going to be the difference between the leaders and the also-rans in this category. It’s the speed with which adoption can be secured, deployment can be secured, and revenue can start to happen.

Christina Cardoza: Well this has been a great conversation, Jay. I’m excited to see some of the technology come to my own local stores. But, before we go, are there any key takeaways or final thoughts you want to leave our listeners with today?

Jay Hutton: Well, just to strap in, because your retail experience is about to change. It’s going to become more experiential; there’s going to be more for you for the customer journey. And if you decide to opt in to some kind of loyalty program, it will become profoundly more personalized to you. And that experience will extend to your home, where you’ll be able to engage with brand from the comfort of your home, if you wish to. And that whole customer journey, that whole engagement modality begins at bricks and mortar, and it cannot begin in an online experience. So, what we’re now able to offer you in an offline world is the thing that we only fantasized about offering just three or four years ago. So, the entire experience will change, and retail is not going anywhere. Bricks and mortar are not going anywhere.

Christina Cardoza: Great final point. And with that, Jay, I just want to thank you for joining us today on the podcast.

Jay Hutton: Thank you, Christina. My pleasure.

Christina Cardoza: And thanks to our listeners for tuning in. If you liked this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

The Key to Successful IoT Projects: Edge Computing

IoT can be groundbreaking technology. By harnessing data from machines and acting on related insights, companies can fine-tune business operations. And with machines no longer a black box, they can issue alerts and warnings when processes go awry.

But the devil’s in the details. IoT might promise radical operational efficiencies, but too often companies don’t realize that they need a robust infrastructure framework for the technology to really do its job. When built on a shaky foundation, IoT projects collapse—or stall.

IoT Challenges: Why Projects Flounder

Failed IoT projects come in all shapes and sizes. Sometimes companies neglect to factor in the cost of data transfer and computing in the cloud over a product’s entire lifespan, according to Rodney Hess, Technical Architect and Development Lead at Beechwoods Software, an embedded services and solutions provider. “By the time they realize that they need to change their service model and find a way to pay for it, they get backlash from their customer base,” he explains.

Being hamstrung by decisions that were not fully thought out is not the only challenge enterprises face. Data from machines is valuable currency, since it is an indicator of machine health. But the dizzying number of formats in which data travels also complicates matters, as data that can’t be read and mined for value is just plain useless.

In addition, Hess points out, “we’re in a world where every week we have a shiny new security patch that needs to be applied to systems.” When the lifespan of a project can run up to 20 years or more, the costs of such security firefighting add up quickly. Companies are justifiably terrified of leaving legacy systems and protocols vulnerable to security challenges.

Last, machine learning programs are sometimes a one-trick pony and “start conflicting with evolving requirements and needs,” Hess says. “If solutions can’t be easily updated or changed, suddenly you have hardware that becomes obsolete really fast,” he adds.

As a result of these many challenges, companies opt for the “safe” option and drop or stall IoT projects altogether.

But it needn’t be this way, says Mike Daulerio, Vice President of Marketing and Business Development at Beechwoods. Edge computing is fast emerging as a potent solution to these various IoT-related data challenges.

The Benefits of Edge Computing to IoT Projects

As enterprises grapple with the high costs and latency of transferring IoT-generated data to the cloud, they are giving edge computing a closer look. “There’s a dissonance where companies have a lot of data and want to get it to the cloud, but it’s too expensive. It’s just not feasible,” Hess says. Relying too heavily on the cloud also endangers business continuity, he says. “What do you do when your Internet access goes down? Suddenly business logic stops working, you’re sitting on data and not getting anything out of it. That’s a big problem,” he explains.

Edge computing solves this IoT challenge by bringing the computing closer to the source of data—the edge. Doing so “helps reduce the messaging costs of getting data to the cloud,” Hess says, and in doing so, makes IoT computing scalable. Instead of spending time and money ferrying data back and forth, computing and insights happen closer to the source of action.
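
As a rough illustration of how that reduces messaging costs, the Python sketch below aggregates raw readings at the edge and publishes only a compact summary to the cloud over MQTT, using the common paho-mqtt client (1.x constructor style). The broker address, topic, window length, and threshold are assumptions made for the example, not part of any Beechwoods product:

```python
# Edge-side aggregation: summarize raw sensor readings locally and
# publish only compact summaries to the cloud instead of every sample.
# Broker host, topic, window, and threshold are illustrative assumptions.
import json
import random
import statistics
import time

import paho.mqtt.client as mqtt

BROKER = "cloud.example.com"        # hypothetical cloud MQTT broker
TOPIC = "factory/line1/summary"
WINDOW_SECONDS = 60
TEMP_ALERT_C = 80.0

def read_sensor():
    """Stand-in for a real sensor driver; returns degrees Celsius."""
    return 60.0 + random.random() * 30.0

client = mqtt.Client()              # paho-mqtt 1.x constructor
client.connect(BROKER)
client.loop_start()                 # handle network traffic in background

while True:
    window, start = [], time.time()
    while time.time() - start < WINDOW_SECONDS:
        window.append(read_sensor())
        time.sleep(1)
    summary = {
        "mean_c": round(statistics.mean(window), 2),
        "max_c": round(max(window), 2),
        "alerts": sum(1 for t in window if t > TEMP_ALERT_C),
    }
    client.publish(TOPIC, json.dumps(summary))  # one message per window
```

Here the cloud receives one small JSON message per minute instead of sixty raw readings, while the anomaly count still surfaces problems promptly.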

While edge computing is not a new concept, advances in microprocessors have improved its utility and accelerated its adoption, Hess says. “Embedded processors have crossed the threshold where they’re now capable of running machine learning algorithms, so you don’t need a room full of servers to crunch these algorithms,” he says.

Another advance is that the machine learning algorithms “have been refining themselves to the point where they’re more effective at getting the answers to the problems we’re looking to solve,” Hess explains. 

Overcoming IoT Project Challenges

Beechwoods offers the edge computing platform EOS, which is based on EdgeX Foundry, an open-source framework that enables interoperability between IoT devices and applications. EOS aims to address several IoT-related challenges that customers face, according to Hess. For one thing, it provides a protocol gateway so legacy and modern machines that speak different protocols can exchange data.
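
To picture what a protocol gateway like this does, consider the generic Python sketch below; it is not EOS code. It polls a legacy Modbus/TCP device with the pymodbus library and re-emits the readings as JSON that modern applications can consume. The device address, register map, and field names are invented for illustration:

```python
# Generic protocol-gateway sketch: poll a legacy Modbus/TCP device and
# re-emit its registers as JSON that modern applications can consume.
# Device IP, register map, and scaling are illustrative assumptions.
import json

from pymodbus.client import ModbusTcpClient  # pymodbus 3.x import path

client = ModbusTcpClient("192.168.1.50", port=502)
client.connect()

# Hypothetical register map: register 0 = temperature x10, 1 = spindle RPM.
result = client.read_holding_registers(address=0, count=2, slave=1)
if not result.isError():
    payload = {
        "temperature_c": result.registers[0] / 10.0,
        "spindle_rpm": result.registers[1],
    }
    # Downstream, this JSON could be posted to a REST API or published
    # on a message bus for IT-side consumers.
    print(json.dumps(payload))

client.close()
```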

The platform also verifies identity through secure APIs so only authorized devices and people can access the data. Enterprises can run different sets of machine learning analytics programs to meet evolving needs.

In addition to software, Beechwoods provides system integration services so IoT projects can reboot after lurching stops and starts.

For example, Beechwoods delivered its EOS edge IoT solution to a startup that was developing smart locker appliances installed in exterior walls of homes and offices. The company needed connectivity components, camera sensors, and other control systems to truly make the product smart.

“We helped them take their idea for a smart locker and turned it into a proof of concept. With EOS as the tech platform, it was a straightforward path from product concept to demonstrable prototype,” Daulerio says.

Beechwoods leverages the Intel® Distribution of OpenVINO Toolkit for its EOS platform, learning from new developments on the open standards front. “Intel provides us with some of the best-performing code for video analytics and helps us build the best models for machine learning,” Hess says. “We can achieve the best results we can on our embedded processors because of the work Intel has done in this area.”

In addition, Hess is grateful that Intel is an active proponent of open standards within EdgeX Foundry, which Beechwoods has folded into its EOS offering. 

A Truly Smart Future

With IoT and edge computing rapidly gaining ground, expect the future to be truly smart, Hess says. “Because of the ability to have a lot of embedded devices all around us running algorithms, we will have an environment that is truly smart and responsive and intuitive, that addresses our needs and concerns right away.”

These environments could be a smart home, or a factory floor tuned to occupational safety standards and continually routing guidance to workers about unsafe areas. Dynamic operational changes need dynamic smarts. With the help of IoT and edge computing, it’s where the future is headed, Hess explains.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Moving Machine Vision from Inspection to Prevention

Fifty percent of a modern car’s material volume is plastic. And the vast majority of that—from oil pans to bumpers to dashboards—is fabricated through a process called plastic injection molding.

Just as it sounds, plastic injection molding machinery inserts molten plastic into a rigid mold, where it is allowed to set. The setting process can take anywhere from hours to days. Quality checks usually happen at the end of the production line, where inspectors manually deconstruct samples from each batch to look for defects.

“They’re taking two or more parts per shift off the line, destructively testing them, and making a call on whether the parts that were produced that shift were good or bad,” explains Scott Everett, Co-Founder and CEO of machine vision solutions company Eigen Innovations. “It takes basically an entire day just to get through a couple of tests because they’re so labor-intensive and then you only end up with measurements for two out of thousands of products.”

At first glance, this seems like an application where machine vision cameras could make quick work of an outdated practice. But while the concepts behind plastic injection molding are relatively simple, it’s a complex process. For example, injection molds are susceptible to physical variations in raw materials, temperature and humidity changes in the production environment, and slight operational inconsistencies in the manufacturing equipment itself.

The goal isn’t just to identify that a part is defective, but to provide useful quality analytics about the root cause of defects before hours of bad parts are produced. By monitoring every fabricated part, you can start to predict when the process is at risk of producing defective batches. But the number of variables in play makes this difficult for machine vision cameras unless the information they produce can be contextualized and then analyzed in real time using visual AI.

Beyond the Lens of Machine Vision Quality Inspection

Like all applications of visual AI, developing an ML video analysis algorithm starts with capturing data, labeling it, and training a model. On the plus side, there’s no shortage of vision and process data available during the production of complex parts. On the downside, the mountain of data that’s generated can contribute to the problem of identifying what exactly is causing a manufacturing defect in the first place.

Therefore, an ML video analysis solution used for predictive analytics in complex manufacturing environments must normalize variables as much as possible. This means visual AI algorithms need information about the desired product outcome as well as the operating characteristics of manufacturing equipment, which would provide a reference from which to analyze parts for defects and anomalies.
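
One simple way to picture that normalization step is the hedged Python sketch below, which is not Eigen’s actual model: it combines per-part image statistics with machine process tags into a single feature vector, standardizes it against a reference set of known-good parts, and flags parts that deviate sharply. The feature names, reference values, and threshold are all invented:

```python
# Sketch: normalize vision features against machine process context and
# flag parts that deviate from a reference of known-good production.
# Feature names, reference values, and the threshold are invented.
import numpy as np

# Each row: [mean thermal intensity, hot-spot area, melt temp C, cycle s]
good_parts = np.array([
    [182.0, 14.0, 231.0, 42.0],
    [185.0, 15.5, 229.5, 41.5],
    [180.5, 13.8, 230.2, 42.3],
])

mu = good_parts.mean(axis=0)
sigma = good_parts.std(axis=0)

def anomaly_score(part_features):
    """Mean absolute z-score across image and process features."""
    z = (np.asarray(part_features) - mu) / sigma
    return float(np.abs(z).mean())

new_part = [196.0, 22.0, 236.0, 39.0]  # hotter part, bigger hot spot
score = anomaly_score(new_part)
if score > 3.0:  # illustrative threshold
    print(f"Part flagged for review (score={score:.1f})")
```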

Eigen Innovations’ industrial software platform captures both raw image data from thermal and optical cameras and process data from PLCs connected to fabrication machines. This data is combined to create traceable, virtual profiles of the part being fabricated.

Then, during the manufacturing process, AI models are generated based on these profiles and used to inspect parts for defects caused by certain conditions. But because the platform is connected to the manufacturing equipment’s control system, visual inferences can be correlated with operating conditions like the speed or temperature of machinery that may be causing the defects in the first place.

“We can correlate the variations we see in the image around quality to the processing conditions that are happening on the machine,” Everett says. “That gives us the predictive capacity to say, ‘Hey, we’re starting to see a trend that is going to lead to warp, so you need to adjust your coolant temperature or the temperature of your material.’”

Inside the Eye of Predictive Machine Vision

While Eigen’s industrial software platform is an edge-to-cloud solution, it relies heavily on endpoint data, so most of the initial inferencing and analysis occurs in an industrial gateway computing device on the factory floor.

The industrial gateway aggregates image and process data before pushing it to an interactive edge human-machine interface, which issues real-time alerts and lets quality engineers label data and events so algorithms can be optimized over time. The gateway also routes data to the cloud environment for further monitoring, analysis, and model training (Figure 1).

Figure 1. The Eigen Innovations machine vision platform is an edge-to-cloud predictive analytics solution for complex manufacturing environments. (Source: Eigen Innovations)

Eigen’s machine vision software platform integrates these components and ties in industry-standard cameras and PLCs using open APIs. But the key to allowing AI algorithms and their data to flow across all this infrastructure is the Intel® Distribution of OpenVINO toolkit, a software suite that optimizes models created in various development frameworks for execution on a variety of hardware types in edge, fog, or cloud environments.

“From day one we’ve deployed edge devices using Intel chipsets and that’s where we leverage OpenVINO for performance boosts and flexibility. That’s the workhorse of capturing the data, running the models, and pushing everything up to our cloud platform,” Everett says. “We don’t have to worry about performance anymore because OpenVINO handles the portability of models across chipsets.”

“That gives us the capacity to do really long-range analysis on hundreds of thousands of parts and create models off of those types of trends,” he adds.
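
For readers unfamiliar with the toolkit, a minimal OpenVINO inference loop in Python looks roughly like the sketch below. The model file, input shape, and classification framing are placeholders, and the snippet is a generic illustration rather than Eigen’s production pipeline:

```python
# Minimal OpenVINO inference sketch: load an optimized model and run it
# on whatever device is targeted (CPU here; GPU or VPU also possible).
# Model file and input shape are placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_classifier.xml")    # hypothetical IR file
compiled = core.compile_model(model, device_name="CPU")

# Stand-in for a single 224x224 RGB frame from the camera.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

results = compiled([frame])                 # run one inference
output = results[compiled.output(0)]        # first output tensor
print("Predicted class:", int(np.argmax(output)))
```

Retargeting from CPU to a GPU or other accelerator is a one-string change to device_name, which is the portability Everett describes.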

The Good, the Bad, and the Scrapped

Eigen Innovations’ machine vision software platform is already paying dividends at major automotive manufacturers and suppliers worldwide, where it’s saving time, cutting costs, and reducing waste.

Rather than producing batches of injection-molded car parts only to discover later that they don’t meet quality standards, Eigen customers are alerted of anomalies during the fabrication process and can take action to prevent defective parts from being created. And it eliminates the time and material scrapped during destructive quality testing.

“Our typical payback per machine can be hundreds of thousands, if not millions, of dollars on really large machines where downtime and the cost of quality stacks up very quickly,” Everett says. “And it’s as much about providing certainty of every good part as it is detecting the bad parts.”

“We’re approaching a world where shipping a part with insufficient data to prove that it’s good is really just as bad as shipping a bad part because of the risk factor,” he adds.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Smart and Sustainable Buildings: With Johnson Controls

Did you know that commercial and industrial buildings contribute to almost half of the world’s carbon footprint? When you think about it like that, you can see why more and more businesses are committing to aggressive sustainability goals. And the best way to achieve net zero or carbon neutrality is by creating smart, connected, and sustainable buildings—especially in today’s hybrid work environment.

For example, if only half your workforce comes to the office at any given time, it doesn’t make economic sense to keep all the lights on all the time. And it’s more than just lights. It’s HVAC systems, air quality, security systems, and power supply that all play a role—from office spaces to bathrooms, cafeterias, and even parking lots.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

In this podcast we look at what a smart building means, technologies that make a building “smart,” and the role buildings play in larger sustainability efforts.

Our Guests: Johnson Controls and Intel®

Our guests this episode are Graeme Jarvis, Vice President of Digital Solutions at Johnson Controls, a smart building solutions provider; and Sunita Shenoy, Senior Director of Technology Products within the Intel® Network and Edge Computing Group.

Graeme has been with Johnson Controls for more than eight years, focused on helping customers transform building spaces with sustainable, safe, and secure experiences.

Sunita has been with Intel for more than 16 years and has deep experience in delivering technology products that help create innovation for an ecosystem of partners in multiple verticals such as mobile, education, enterprise, automotive, and industrial.

Podcast Topics

Graeme and Sunita answer our questions about:

  • (3:19) The challenge of today’s hybrid work environment
  • (7:25) Energy use and sustainability challenges in buildings
  • (9:31) Efforts that lead to buildings becoming more energy efficient
  • (13:42) How technology drives smart and sustainable buildings
  • (16:59) How to leverage existing IT and connect disparate systems
  • (22:24) Smart and sustainable building use cases and examples
  • (24:45) Intel-led sustainability efforts and goals
  • (26:55) The power of partnerships

Related Content

To learn more about smart buildings, read The Future of Smart Buildings? Net Zero. For the latest innovations from Johnson Controls, follow them on Twitter at @johnsoncontrols and LinkedIn.

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about smart and sustainable buildings with Graeme Jarvis from Johnson Controls and Sunita Shenoy from Intel®. But before we jump into the conversation, let’s get to know our guests a little bit. Graeme, I’ll start with you. Welcome to the show.

Graeme Jarvis: Thank you, Christina. And pleased to be with you again, Sunita. My name is Graeme Jarvis and I’m Vice President within Johnson Controls’ global digital solutions business, where we are all about helping our customers realize their smart, sustainable business objectives within their built environment. My role is commercial-leadership focused. And so I engage with our large-enterprise customers globally and key-partner enablement programs, with Intel being a great example.

Christina Cardoza: Great to have you, Graeme. And I should mention before we get started that the IoT Chat and insight.tech as a whole are sponsored by Intel, so it’s always great when we can have somebody from the company represent and contribute to the conversation at hand. So, Sunita, welcome to the show. What can you tell us about yourself and your role at Intel?

Sunita Shenoy: Yeah. Thank you, Christina. And it’s my honor to be on this chat with Graeme and yourself. I’m Sunita Shenoy, I’m the Senior Director of Technology Products in Intel’s network and edge computing group. My organization in particular is responsible for technology products, requirements, and roadmaps as it relates to the industrial sector. Industrial sectors include manufacturing, utilities—buildings is an infrastructure within that sector from an energy-consumption perspective, as well as housing these critical infrastructures. So, my organization’s responsible for bringing the silicon products as well as the software technologies required to enable this transformation into smart infrastructures.

Christina Cardoza: Great to have you, Sunita, and I love how you both mentioned this idea of meeting business objectives within the environment, talking about buildings, which is the topic we’re talking about today, smart buildings. So, obviously remote work has taken a huge force in the last couple of years and it’s here to stay, but people have started to return back to work and started to return back to the office in this hybrid work environment, where not everybody is in the office all the time and not as many people are coming back to the office. So that gives a little bit of a challenge for businesses to figure out how to make the best utilization of this space. And you mentioned it, it has great energy consequences with it too. Do we need to be having all the lights on and all of these things operating in a building that’s empty at times, half full, and not to capacity? So, Graeme, I’ll start with you. What are the challenges businesses have to think about now in regard to their physical office space as people start returning to work in this hybrid environment?

Graeme Jarvis: Sure. It’s a great question, Christina. And it’s so relevant right now. COVID actually served as a catalyst for what we’re now going through, which is the new normal. So, what does hybrid work environment mean? I think there are two key components to that. One is around the people, be they employees, guests, building owners. And the other is around the built environment itself and how the built environment needs to adapt to the new normal, which is really, as we see it, around sustainability, workplace experience, and then safety and security within the built environment.

So, before COVID-19 I think we’re all familiar with the fact that most of us worked at an office almost every day, and the pandemic proved that we can actually be productive from our home office or on the road, whether our home office is nearby, within the same country, or even abroad. That has been proven. So now the challenge is on the employee side for a hybrid work environment—what would that mean to me? I would like it to be appealing. I’d like it to be easy to go in and out of work. And so how might one do that? And it gets into key enabling technologies, touchless technologies, and having a sense of control over that.

We happen to have a solution called OpenBlue Companion, which is an app that allows employees and guests to do hot desking, book conference rooms, pretreat those conference rooms based upon the number of people that might be in there for a particular meeting. There’s cafeteria integration, parking and transportation integration, so that when one goes to the office it’s actually a pleasant experience. On the building side, the hybrid work environment is really financial. How do I optimize the footprint I have, and what am I going to need moving forward to support my employee team? And that’s where we are right now, is companies are trying to rationalize what they have and what they will need.

So, some of our solutions enabled by compute and AI from Intel, for example—we are able to understand what is in motion today, and give an assessment for a client around what they have and the efficiency of those solutions today based upon the outcomes they’re trying to realize. Then they have an objective. They would like to be more productive. They would like to reduce expenses. They would like to have a safe, sustainable workplace. So now you’ve got interdependencies around the heating, ventilation, air conditioning system, the number of people that happen to be in a building through access-control information—the time that cafeteria should actually be preparing food based upon the workload of people that are in the building. And all of this is interconnected now. And so there’s an optimization routine that starts to present for management around: What should my environment look like? How should it look in the future? And what we’re seeing today is a template for the building of the future. People are rationalizing and optimizing on what they have, and they’re taking lessons learned and starting to apply it for their “building of the future” —be it stadiums, be it ports and airports, be it traditional office space.

Christina Cardoza: Yeah. You bring up a lot of things to consider when going back to work. And I want to come back to OpenBlue and how you actually make these buildings more energy efficient. But, Sunita, I’m wondering, from an Intel perspective, what are the implications you’re seeing of a hybrid work environment as it relates to the business and energy usage?

Sunita Shenoy: Yeah. So, as Graeme was stating, working from home became the new norm in the last three years. But as all companies, all businesses are easing their workers back to work—be it hybrid work or remote work or on site—they have to make it comfortable for the workers coming in by having frictionless access, right? You don’t walk in and open the doors because now you need it to be safe from bugs, right? So you make it frictionless. You use advanced technologies like wireless connectivity, you use advanced technology like AI to improve the quality of your workspace, whether it’s your hybrid desk or how you find your rooms, or whether the building, if the building is retail, for example, how do you find your way around without being in a crowded environment, right?

So, making it easy to use data and AI and technology such as wireless and mobile for workers to ease into the workplace because they sort of got comfortable being in their own spaces, right? In fact, a lot of the stories I’ve heard is, okay, my office at home is more comfortable than my office at work. So how do I make my environment at work as comfortable and safe for them as it is in their home? So that’s really the implication, and technology can play a big role in implementing these solutions. But deployment is one of the key areas that we need to focus on, is how do we make it easily deployable using solutions like Johnson Controls solutions with our technologies?

Christina Cardoza: Absolutely. And it comes to my attention that there may be even larger implications. Say if you have a building where there’s multiple different businesses—it’s not your business that owns the building. And I think that brings up the question of who’s in charge of making a building smart or reducing the energy consumption. And is it the building owners, or is it multiple businesses within the building? So, Graeme, can you talk a little bit about how buildings can become more energy efficient, and who’s really in control: businesses or building owners?

Graeme Jarvis: I would start off by saying most businesses have an ESG—or environmental, social, and governance—plan or a set of objectives. Johnson Controls does. I know Intel does. And these are used as a means to communicate value-based management practices and social engagement to key stakeholders. So, employees, investors, customers, and potential employees also. We at Johnson Controls have adopted a science-based target and net zero carbon pledge, to support a healthy, more sustainable planet over the next two decades. Our efforts align with the UN Sustainable Development Goals, and since 2017, our index year, we’ve reduced our energy intensity by about 5.5%, and our greenhouse gas emissions by just over 26%. And we have a plan to get to carbon neutral as part of our 2025 objectives, realizing that that carbon neutral state will take longer, but that is part of our ESG plan.

So the reason I mention that is once you get into the built environment, somebody owns that building and they’re going to have something to do with a sustainability footprint objective because, one, it’s the right thing to do. But, two, the economics are motivating businesses to act because you can be more efficient, thereby saving money. So how would one do that? We help in that regard because buildings account for about 40% of the planet’s carbon footprint. So if we want to go and start talking about how to solve sustainability challenges, the building, the built environment is top of mind. It’s close to the top in every study.

So, once you’re in, you’ve got certain equipment that’s running: heating, ventilation, and air conditioning systems. You have multiple tenants within that building. They all typically pay a fee for the energy consumption for the space they use, but it’s relatively binary, or it’s a step function based upon historical patterns. But what if you could give them insight to what their real usage could be based upon seasonality factors, how many people are in the building, when they’re in the building, when should I treat the air because I’ve got a meeting room that’s booked, and you give them control.

And some of our solutions through OpenBlue help enable clients to understand what is actually going on in their environment and where are areas that they can improve. As soon as that data becomes available and there’s a financial consequence or a financial reward, then behavior starts to change. And that’s where it comes back to, how do you enable against that behavior that you want? And then you get into the hardware, the software, the compute and AI that Johnson Controls can help with and Intel can help with. But it really starts with that ESG charge. And the fact that buildings are a large opportunity from a sustainability-improvement standpoint.

Christina Cardoza: When you think about how much energy and carbon emissions buildings give off, like you just mentioned, Graeme, about 40% of the carbon emissions, I can see why businesses are setting such aggressive sustainability goals to reduce that. And, Sunita, you mentioned that to be able to tackle this problem and make a dent, you really need to deploy the right technology to get the data points from all of those different systems that Graeme was talking about. So, can you talk a little bit more about the technology that helps these businesses make a dent in those efforts?

Sunita Shenoy: Yeah. Yeah. I think Graeme touched upon some of these, right? So it’s not just now because of hybrid and pandemic that we are realizing this, but this is a known fact, right? The carbon footprint—emissions are 40% to 50%, I don’t know what the actual numbers are—but commercial and industrial buildings contribute to a vast majority of the carbon emissions, right? So it is our corporate social responsibility whether you’re a building owner or a business owner. It is our responsibility to reduce that carbon footprint, right? So, the technologies that you can use and that we have used: one is AI, which is becoming more advanced through the advancement of sensors, right? How do you collect data, how do you bring this data into a compute environment where you apply AI to learn from and analyze this data and infer information?

So we can automate the whole process. For example, in the past the building managers in multiple buildings—I mean, I’ve interviewed several across the industry, the facilities manager or building manager—what they would do is use manual processes where 8:00 to 5:00 you keep the HVAC running or keep the lights running, regardless of how the building is utilized, right? And that generated X amount of energy consumption in the buildings, right?

But once IoT became a reality over the last seven, eight years or so, we started to put sensors in there; to use daylight savings; we automated the process of using AI to see the utilization of the building. And based on the utilization, you would turn the lights on or off or HVAC on or off or water consumption—whatever it is, right? And that reduced the amount of energy used in the buildings, right?

So, small steps first, right? First, connect the unconnected, assess the data in the buildings—which is a treasure trove of data—analyze where you can drive the energy-consumption optimization. The first place to start is lighting or HVAC. Then you go on to the other consumers, such as the computers that are plugged in—or it could be your water utilities—collect all the data, start analyzing it, and start optimizing where you want to reduce energy consumption. So it’s not just about today and pandemic and hybrid. This has always been the process ever since IoT became a reality, and AI and advanced technologies became a reality. It is very feasible. And we at Intel Corporate Services have already accomplished a huge task in reducing our carbon emissions.

Christina Cardoza: I can definitely see, with all the different systems and data coming in, the importance of AI to be able to manage and analyze all that data quickly to make business decisions. There’s also a lot of different systems outside of the buildings. You know, there’s the parking lot, parking lot lights, there’s everything inside. There may be a cafeteria. So there’s all these different systems that we want to collect data from. How do we connect all of those different systems that may not have touched each other before? Graeme, do you want to answer that one?

Graeme Jarvis: Sure, I’ll start, Christina. And I’m sure Sunita has some great insight also. You hit upon a great word, “system.” I like to use a swimming pool analogy, where historically the security manager was in a lane. The facility manager was in a lane. The building manager was in another lane. And products were sold into those owners, if you will, who each had a certain part of the building under his or her responsibility. The way to look at this problem is really as an integrated system. So that’s why, when we talk about smart, connected, sustainable buildings, you’ve got to get the building connected, which is now happening.

And now you’ve got all of these data from these edge devices that are doing their core function—security, heating, ventilation, air conditioning, the building-management system, smart parking, smart elevators, etc. When you pull all of this together, now the benefit is you can start to figure out patterns and optimize around the heartbeat of what that building should be, given what it’s capable of doing with the equipment that’s in place and the systems that are in place. So this is a journey. This is not something that can be done overnight, but the beginning is to assess what you have. And then that’s one end of the spectrum.

And then look at where you would like to be three, four years from now from an ESG perspective. And then you have to build a plan to get from where you are to where you would like to be. That’s most of our customers’ journey today. When we do that, the assess phase is really eye opening because the data is objective, it’s no longer a subjective “Well, I think this might.” It’s pretty crystal clear. And then you can use AI and modeling with building twins. We have OpenBlue Twin, for example, to do “what if” scenarios: If I change this parameter, what might that do to the overall efficiency of the building? And so now you can start to harness the information that was latent, but now it’s at your fingertips. So that’s some of how we help our clients in that journey realization.

Sunita Shenoy: Yeah, if I can build on that, Christina, from a technology standpoint, right? In any given building there’s a disparate number of systems, right? Could be an elevator system, a water system, an HVAC system, a lighting system, how your computers are connected together, all of it, right? And they all come from different solutions, different companies. Our advocacy—and we try this with multiple industries and transformations—is to focus on using open standards, right? If everybody’s building on open-standard protocols, whether it’s connectivity or networking, then you are working off the same standards. So when you plug and play these different systems, you are able to collaborate with the different systems, however disparate they are, right? Share the data, bring it to a common place, information sharing on common protocols. Networking is super critical in bringing all these disparate systems together.

Graeme Jarvis: Absolutely right. For example, OpenBlue. Part of the name “OpenBlue” is “open.” We are open because no one company can do this alone. Hence, we have such a great partnership with Intel. So, open standards; we can push information to third-party systems. We can ingest information from third-party systems, all to advantage the customer for the applications that give them the outcomes they’re looking to realize. So this is actually a critical point in industry. If people that are listening to this podcast are talking to folks who have a closed architecture or a closed approach, I would just caution some pause, and think more on the open and scalable and partnership-oriented approach, because that’s where things are going. And it’s extensible with firms that are yet to present, but we would love to partner with, because they’ll have some novel capability that will advantage our customers.

Christina Cardoza: I love the point that you both made about being able to leverage some of the existing technology or systems you have within a building. I think sometimes we get a little bit too quick to jump on the latest and greatest new technologies, or to replace the systems that we do have. So it’s important to know that there are systems out there that can connect some of these disconnected systems we have, and you don’t have to rip and replace everything that’s still working. There are ways, like you mentioned, Graeme, that you can work together. I want to paint the picture a little bit more for our listeners. There’s been a lot of great information, but I’m wondering how this actually looks in practice, especially with OpenBlue. So, Graeme, do you have any customer examples or use cases that you can provide of how OpenBlue helped a building become smarter, connected, and sustainable?

Graeme Jarvis: Sure, I have a few. I’ll share a couple. So, one is a company called Humber River Hospital. They’re out of Toronto, Canada. And what we are helping Humber River Hospital do is we’ve entered into a 30-year agreement with them to help improve their energy consumption by approximately 20 million kilowatt hours per year. And how we’re doing that is understanding their environment, layering on top our OpenBlue Solution Suite, and leveraging the built environment cadence to optimize, to refine, and then optimize around that for a multiyear engagement. So this is about a 20-year engagement.

The benefit to the client is they have a predictable financial roadmap, and they’ve got leading technology that’s going to help them realize that predictable financial outcome. And we also then help certify that they are indeed attaining those targets from a LEED standpoint and a corporate-stewardship standpoint. So that is one example.

There’s another example with Colorado State University, out of Pueblo. This is around renewable energy supplies for 100% of their energy demand on campus. And it’s a 22-acre solar array that is being completed. And then we’re overlaying our capabilities, hardware and software, and our professional services, including OpenBlue, to help them realize that 100%-green objective.

Christina Cardoza: Thanks for those examples. And I want to go back to a point you made earlier about how not one company can do this alone. Partnerships are essential to meeting our sustainability goals. So, Sunita, Graeme mentioned a couple times the importance of their partnership with Intel. So, I’m wondering what are the sustainability efforts at Intel, and how have you been working with partners like Johnson Controls to meet those goals?

Sunita Shenoy: Yeah, so there is an initiative that Intel calls RISE, which stands for responsible, inclusive, sustainable, and enabling, right? Responsible, meaning that we employ responsible business practices across our global manufacturing operations, as well as how we partner with our value chain and beyond, right? Inclusion is about advocacy for diversity and inclusion in the workforce, as well as advocacy for social-equity programs, making sure that, for example, food is equitable in the community. Sustainability, which is the focus of smart buildings, is not just a corporate social responsibility matter. Our buildings, our operations, and our corporate services have made a commitment to achieve, by 2040, 100% renewable energy across global operations, as well as net zero greenhouse gas emissions. And from a product standpoint, the goal for the products that Intel brings to the marketplace—our microprocessors, edge silicon, and software—is to increase product energy efficiency 10x from what it is today, as well as enable our value chain to employ these energy-efficient processes so that electronic waste doesn’t contribute to greenhouse emissions. So those are some of the things that we are doing as a corporation to address sustainability goals.

Christina Cardoza: Great. Thanks for that, Sunita. And you mentioned a couple of Intel technologies in there. Graeme, I’m wondering, you talked about the value of the partnership already with Intel, but what about the Intel technology? What are you leveraging in OpenBlue, and how has that been important to the solution and to businesses?

Graeme Jarvis: First of all, I’d be remiss if I didn’t mention, before I get into the technology, what the value Intel brings to our relationship is. It’s all about the people. Intel has a great employee base and a great culture. They’re a pleasure to work with, from their executive leaders to their field teams. So it starts with the people. So, I want to make mention of that because that’s critical. Next would be the depth of expertise that they bring to a client’s environment, especially on the IT side. This complements our world in Johnson Controls because we’re more on the OT side, but the IT and OT worlds are converging because of this connected, sustainable model we’ve been talking about in business reality.

And so between the two of us we can solve a lot of customer challenges and realize outcomes that neither of us could deliver independently. Intel silicon hardware, their compute, their edge and AI capabilities—they really help us bring relevant solutions, either from a product standpoint, because it’s embedded with Intel compute and capability, or they actually enable some of the edge capability that we bring to our clients’ environment through OpenBlue. I also want to mention, on the cyber side, going back to the IT and the OT side, Intel has great capability on cyber IT. We’ve got great capability on the OT cyber side. When you talk to a client, they’re looking for an end-to-end solution. And so that’s another area where we’re better together and we’re better for our clients together.

Christina Cardoza: I always love that, that saying of “better together.” It is a big theme over here at insight.tech, especially working with partners like Intel. I think you gave our listeners and business owners and building owners a lot to think about as they try to meet the sustainability goals that they have. Unfortunately, we are running out of time, but before we go, I want to throw it back to each of you quickly to give any final takeaways or thoughts you want to leave our listeners with today. So, Sunita, I’ll start with you.

Sunita Shenoy: Yeah, so what I want to say is the barrier to adopting or deploying a smart building is generally not the technology, because the technologies exist, right? The solutions exist. The barrier is the people and the decision to employ the smart building solutions, right? So, we’ve learned a lot of things over the last several years since the conception of IoT and now edge computing, right? So it is very feasible to deploy. I think the mindset of people needs to shift and, as Graeme was saying, the IT and the OT worlds need to collaborate by bringing the best practices of both together to solve these deployment challenges. Look at those challenges as opportunities.

Christina Cardoza: Absolutely. And, Graeme, any key thoughts or final takeaways from you?

Graeme Jarvis: Yes. Just one, a macro one. There’s a tremendous opportunity before us as we look to address the sustainability challenges that we discussed on this program. It’s global in nature, and that’s going to require global leadership at all levels to be successful. It is hard to find work that is meaningful because it provides a good economic benefit while doing good for our planet. This call to action around buildings, I think, is one of those. So, if there are any people that are looking for work, I would encourage them to take a look at the smart, sustainable building sector. It is part of the new frontier. It requires a lot of different skill sets that are complementary. And if anyone listening to this podcast is a customer, I would encourage them to take a look at Intel’s websites for the solutions that they offer, and take a look at Johnson Controls’ websites, and we would love to come and help you. So, thank you.

Christina Cardoza: Absolutely. Great final takeaway to leave us with today. And with that, I just want to thank both of you for joining the podcast. It’s been a pleasure talking to you, and thanks to our listeners for tuning in. If you like this episode, please like, subscribe, rate, review—all of the above on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Interactive Made Easy: Touchless Self-Service Kiosks

Self-service kiosks are on the rise in every industry, driven by demand from consumers and businesses alike. It’s easy to understand why. Customers like the convenience and interactivity of self-service kiosks. And businesses appreciate the way they streamline operations and relieve overstretched workforces.

But despite this, there remains one significant barrier to implementation. Most self-service kiosks rely on touchscreen technology—and this can be a deal-breaker for several reasons. First, there’s the issue of cost. A fleet of touchscreen kiosks or attractive large-format models represent a major investment for most businesses.

Maintenance is another concern. Post-pandemic, people are wary about touching something that so many other people have touched. For customers to feel safe, touchscreens must be cleaned frequently, which takes time and effort. In addition, a damaged all-in-one touchscreen kiosk can be costly to repair. A single dead pixel may require the replacement of an entire screen.

These are serious challenges. But advancements in edge AI and 3D computer vision are now enabling touchless self-service kiosks. This technology solves many of the issues of traditional touchscreens, which will drive adoption across multiple industries and may even usher in a new era of human-computer interaction.

Edge AI + 3D Vision = Touchless Self-Service Kiosks

At first, a “touchless” touchscreen might sound like a contradiction in terms. But the basic concept is straightforward. Touchless self-service kiosks use a deep learning-based technology known as skeleton tracking that treats the user’s hand as a mouse pointer.

Benson Lee, Chief Marketing Officer at LIPS Corporation, a manufacturer of touchless technology solutions, explains how this works:

“We create a virtual pane between the user and the display screen. An AI-enabled 3D camera tracks the user’s hand to emulate scrolling, and when their fingertip crosses the pane, it’s interpreted as a mouse click” (Video 1).

Video 1. A touchless self-service kiosk in action. (Source: LIPS)
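For developers curious about the mechanics, the virtual-pane logic can be sketched in a few lines of Python. This is a minimal illustration that assumes a skeleton-tracking SDK already supplies the fingertip’s normalized position and depth each frame; the function names and pane threshold are hypothetical, not LIPS’ actual API.

    # A minimal sketch of the virtual-pane concept, assuming a skeleton-tracking
    # SDK supplies the fingertip's normalized (x, y) position and depth in mm.
    # All names and thresholds here are illustrative, not LIPS' actual API.

    PANE_DEPTH_MM = 600  # virtual pane roughly 60 cm from the camera (assumed)

    def fingertip_to_pointer(x_norm: float, y_norm: float, depth_mm: float,
                             screen_w: int, screen_h: int):
        """Map a tracked fingertip to a pointer position and click state."""
        px = int(x_norm * screen_w)          # hovering emulates mouse movement
        py = int(y_norm * screen_h)
        clicked = depth_mm < PANE_DEPTH_MM   # crossing the pane acts as a click
        return px, py, clicked

    # One example frame: fingertip near screen center, just beyond the pane.
    print(fingertip_to_pointer(0.52, 0.48, 580, 1920, 1080))  # (998, 518, True)

In practice, the tracked coordinates would typically be smoothed over several frames to avoid jitter and accidental clicks.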

Of course, any system based on high-resolution 3D imagery will need to process a large amount of data—far too much to send to the cloud if you want real-time interaction. That’s why touchless displays use AI to perform their visual processing on the edge. Lee says LIPS’ technology partnership with Intel® helps make this possible:

“Intel CPUs are powerful enough to handle heavy computational workloads—and are particularly good for computer vision and edge AI applications. In addition, the Intel® oneAPI toolkit simplified the development process, allowing our engineers to write a solution driver that works on many different platforms.”

Touchless #technology makes implementing self-service #kiosks—including ones with large displays—easier and more cost effective. @LIPS_Corp via @insightdottech

Lowering the Barrier to Adoption

Significantly, touchless self-service kiosks powered by the LIPS solution are more modular than their touchscreen-based counterparts. The camera and touchless interface driver are separate from the display screen.

This means that existing touchscreen-based kiosks can easily be retrofitted by plugging a 3D AI camera into a kiosk’s USB port and installing a driver. Perhaps even more importantly, any display screen—even a non-touch display—can be made interactive with the addition of these components.

For businesses and systems integrators alike, these are attractive benefits. Touchless technology makes implementing self-service kiosks, including ones with large displays, easier and more cost effective.

In addition, maintenance is greatly simplified. “You don’t have the same burden of cleaning that you do with touch displays,” says Lee, “and if you’ve retrofitted a non-touch display screen and that screen breaks, you can replace it cost effectively—without having to replace everything else.”

From Healthcare to Hospitality

Case in point: LIPS’ experience during the COVID-19 pandemic. The company was approached by two organizations that relied on touch-based kiosks for daily operations: a quick-service restaurant chain and a local hospital.

Despite the obvious differences, the restaurant chain and the hospital had a number of things in common. They were concerned about the health and safety of the people they served. They couldn’t just stop using self-service kiosks, as these were deeply integrated into their workflows. And their staff members were already stretched thin, making it impossible to spend extra time and effort to sanitize each touchscreen after use.

To address these challenges, LIPS retrofitted the restaurant chain’s in-store self-ordering kiosks with its 3D camera systems, resulting in a touchless equivalent that performed the same operational role. At the hospital, LIPS used the same technology to make the patient reception area’s touchscreen queuing system completely touchless.

At both locations, leadership was pleased with the speed of the retrofit, the ease of maintenance, and naturally, the lowered risk of transmission.

The Future of Interactivity

The LIPS restaurant and hospital deployments demonstrate why touchless technology has such a bright future. In the years ahead, expect touchless self-service kiosks to gain traction as more and more businesses wake up to their potential.

That potential is about much more than ease of implementation. Unlike many touchscreens, touchless displays are not based on capacitive sensing, so they don’t require a bare hand to operate. This means they can be used by people who wear prosthetics—a huge step forward for accessibility. It also means that people working with gloves or protective gear can use them, making the technology useful in surgical settings or industrial environments like microfabrication cleanrooms.

The range of possible use cases across many industries holds the potential to bring about a real change in the way human beings benefit from self-service systems. “Touchless technology could be as big as touchscreens were for smartphones and tablets,” says Lee. “It will make our world a better, safer, and more interactive place.”

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech

Intel® Innovation Features 12th Gen Intel® Core™ Processors

As the IoT edge matures, technical requirements mature with it. Pervasive connectivity warrants ubiquitous security. Growing data volumes and shrinking latency tolerances are making edge AI inferencing essential. And the rise of hyperconverged infrastructure means engineers must design solutions that accommodate both the edge and cloud.

For IoT technologists, each achievement demands another. And to ensure their winning streak continues, the annual Intel® Innovation 2022 event returns in-person on September 27 and 28.

Created by developers, for developers, Intel® Innovation combines strategic session tracks, hands-on technical labs, and a partner technology showcase that demonstrates how hardware, software, and systems engineers can tackle the next phase of IoT design requirements. Content spans from AI/machine learning to open accelerated computing to security and network/edge compute, promising valuable insights for all conference participants regardless of their end use case or where they are in the development lifecycle.

It all kicks off with a live keynote from CEO Pat Gelsinger, who will make announcements like the official reveal of the latest 12th Gen Intel® Core processor (formerly known as Alder Lake PS).

Desktop Performance, Mobile Efficiency for IoT Apps

At Intel® Innovation, attendees will learn how the latest 12th Gen Intel® Core processors build on the new hybrid core microarchitecture by blending the performance and power efficiency of Intel® Core mobile processors with the LGA socket flexibility of the family’s desktop SKUs. The result is a small-footprint, multi-chip SoC that packs in as many as six Performance cores, eight Efficient cores, and 96 execution units of Intel® Iris® Xe graphics to plow through IoT edge workloads.

The result can be seen in 6.6x edge AI inference performance and 4x graphics performance over prior-generation processors, with configurations drawing as little as 12W. 12th Gen Intel® Core processors also contain all the I/O needed to support hyperconverged, workload-consolidated embedded edge systems, and are backed by long-life availability.

Early-access partners are already taking advantage of the new benefits that come with these processors. Expect to see 12th Gen Intel® Core processor-based solutions in the SMS-R680E Mini-ITX board from ASUS (Figure 1), the MBB-1000 ATX motherboard from IBASE Technology, and the X13SAV-LVDS server board from SuperMicro, as well as products from Axiomtek, IEI Integration, OnLogic, and more.

Figure 1. The ASUS SMS-R680E Mini-ITX board provides optimal power, performance, and flexibility for IoT use cases. (Source: ASUS)

Edge AI Inferencing in Action

Boards alone aren’t enough, so many exhibitors have put the new 12th Gen Intel® Core processors into power-efficient AI inferencing systems that serve a range of IoT markets and applications:

  • Smart Retail: All-in-one POS systems based on the ASUS SMS-R680E Mini-ITX board can run multimedia and AI inferencing tasks simultaneously. And you can accelerate them even further using software like Microsoft EFLOW and the Intel® Distribution of OpenVINO Toolkit.
  • Smart Healthcare: Ultrasound imaging devices can use the improved graphics, DDR5 memory support, and PCIe 4.0 lanes on the IBASE MBB-1000, and double down with features like Intel® DL Boost to conduct AI diagnostics and run smart assistants.
  • Computer Vision: More cores, higher thread counts, and better graphics join four display pipes on these processors to form the makings of advanced CV systems. For example, the SuperMicro X13SAV-LVDS server boards can decode dozens of 4K30 video streams and efficiently conduct object detection or classification using DL Boost.

Technologies and use cases like these are highlighted not only in the product showcase but also in the event’s multiple technical session tracks.

For instance, the AI/ML track features a session from Intel and Red Hat that demos a deterministic AI appliance that leverages solutions from cnvrg.io, Habana, and the Red Hat OpenShift-based Intel® AI Analytics Toolkit. Meanwhile, in the Network and Edge Compute track, representatives from Intel and Google will discuss “Capturing, Managing, and Analyzing Real Time Data Where It Matters.”

Elsewhere, a committee from Intel, Codeplay, Julia Computing, KTH Royal Institute of Technology, and the University of Illinois at Urbana-Champaign explores “Accelerating Developer Innovation Through an Open Ecosystem” in the Open Accelerated Computing session group.

Go to the session agenda to find tracks that are best for you.

If that weren’t enough, after the sessions you can start putting these IoT skills into practice by heading to the Intel® Innovation Dev Tool Shed, earning Edge AI Certifications, exploring the AI Innovation Zone, and more. With all that at your disposal, you should come away from the show with everything you need to overcome the next set of challenges IoT throws at you.

It is all right there at the San Jose Convention Center on September 27 and 28. Will you keep your momentum at full steam?

Register for the 2022 Intel® Innovation Summit today.

AI in Healthcare Advances Cancer Diagnosis

While studying advanced 3D imaging and AI in healthcare applications at Taiwan’s National Tsing Hua University, researchers hit on an exciting potential application: helping pathologists diagnose cancer tumors with greater speed and precision. They obtained licenses from the university and formed a digital imaging startup, JelloX Biotech Inc., but soon discovered hospitals were far from ready to adopt the technology.

In the age of precision medicine for cancer treatment, the number of well-trained pathologists is growing far more slowly than the demand for diagnosis. Most pathologists still examine tissue samples by eye and take manual notes, a painstaking hours-long process. Few have made the switch to digital 2D or 3D image analysis, in part because it traditionally has required installation of costly and complicated graphics equipment.

Computer Vision in Healthcare

Despite their highly trained eyes, doctors don’t always get important details right. Tumor samples are complex—each one contains 10 to 30 parameters that must be analyzed to determine whether the cells are cancerous, how fast they are dividing, and how healthy or unhealthy they look as compared with normal tissue, among other factors.

“Studies asking multiple pathologists to analyze the same tissue sample have found 20% to 30% disagreement among the diagnoses,” says Yen-Yin Lin, Chief Executive Officer at JelloX. “This means that there is a chance that patients might receive incomplete information about their disease status, thus delaying proper treatment.”

Misdiagnosis can be very painful for patients. They might miss the chance to use the best drug to fight their cancer early, or undergo chemotherapy they don’t need.

To improve diagnostic capabilities without breaking the bank, Lin and his colleagues set out to create an edge solution that could quickly uncover and digest far more information than pathologists can see—without the need for installing expensive graphics equipment.

“#AI insights could help doctors improve diagnostic accuracy and develop better #treatments.” – Yen-Yin Lin, JelloX Biotech Inc. via @insightdottech

Using AI 3D Imagery in Pathology

The company found recent advancements in computer vision and AI could be used to help doctors better detect anomalies from medical images with higher accuracy. “It can assist healthcare professionals in diagnosing diseases like cancer, identifying disease progression, and predicting patient outcomes,” says Lin.

As a result, JelloX set out on a three-year development journey to create MetaLite Digital Pathology Edge Solution, which can analyze each tissue sample parameter in one to two minutes, compared with an hour using a standard computer.

To do this, JelloX needed to leverage powerful deep-learning models and annotation tools, which required equally powerful hardware capable of deploying these models at the edge and reducing inferencing time for quick, efficient, and accurate results.

Lin explains they turned to an edge computing device powered by Intel® processors and custom AI algorithms deployed through the Intel® Distribution of OpenVINO Toolkit. This made the solution highly suitable for deployment on notebooks.

Intel CPUs were able to accelerate training and inferencing significantly, provide an end-to-end deep-learning pipeline that helped JelloX apply its solutions to real use cases, and enable the team to deploy its models across different hardware.

That’s because OpenVINO was designed first to help optimize deep-learning models, then to deploy those models across multiple hardware devices and accelerate their inference performance, explains Zhuo Wu, a Software Architect at Intel who works closely on OpenVINO.
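In outline, that workflow takes only a few lines of code. The sketch below uses the OpenVINO 2022 runtime API; the model file and input shape are hypothetical stand-ins, not JelloX’s actual pipeline.

    import numpy as np
    from openvino.runtime import Core  # OpenVINO 2022.x runtime API

    core = Core()
    model = core.read_model("tissue_classifier.xml")         # optimized IR model (hypothetical)
    compiled = core.compile_model(model, device_name="CPU")  # or "GPU", "AUTO"

    # Run inference on one image tile; the shape is assumed for illustration.
    tile = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled([tile])[compiled.output(0)]
    print("predicted class:", int(result.argmax()))

Because the same compiled pipeline can target a CPU, GPU, or other accelerator simply by changing device_name, one optimized model can be deployed across different hardware, which is the portability Wu describes.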

As a result, JelloX can now help configure most hospital scanners to work with the software, which also allows doctors to add notes as they work (Video 1).

Video 1. JelloX MetaLite Digital Pathology Edge Solution uses AI algorithms and edge processing to rapidly analyze 3D tissue samples in near-real time and allows physicians to annotate results. (Source: JelloX Biotech)

In addition to Intel CPUs, JelloX also leverages Intel® NUC based on the 11th Gen Intel® Core processors, which enable engineers to easily scale their solutions.

Pathologists can choose to review some parameters in real time and save others for later. Data from the scanner and edge device is sent to hospital servers, where hundreds of parameters can be analyzed with AI in detail overnight, with results ready to view the next morning.

AI models are trained on massive data sets accumulated from many sources. The amount of information they work with is too vast for humans to assimilate, but algorithms can quickly process it and use it to classify tissue samples and make inferences and predictions about the course of the disease.

“The interpretation of immunohistochemistry staining is a time-consuming and expensive process in pathological examinations, requiring significant time from physicians. If auxiliary tools can be utilized to improve efficiency, it can bring about the greatest economic benefits. Some parameters are difficult for doctors to categorize conclusively. When AI does calculations, it gives doctors a scale or digital ruler to use as they judge the images,” says Lin.

AI insights could also help doctors improve diagnostic accuracy and develop better treatments, Lin believes, saying, “If we have good AI-assisted tools, maybe patient survival rates and survival duration will be enhanced.”

AI analysis is also valuable to medical researchers, allowing them to discover new features of cancer cells and better understand how they operate. “Algorithms can dig out more information from images and do the tough analysis, providing more information about morphology and protein biomarker features,” Lin says.

Being able to gain more efficient and accurate results with AI not only helps doctors improve patient care and service but also reduces the time and effort they need to spend on each case—which in turn allows them to take better care of more people, according to Wu.

Currently, researchers at Taipei Veterans General Hospital and MacKay Memorial Hospital in Taiwan are using MetaLite to identify new biomarkers of cancerous tissue and calculate the area of tumors with greater precision. Once the platform receives approval from Taiwanese health authorities, the hospitals may use it as a diagnostic tool.

Pharmaceutical companies may also benefit from AI tissue analysis, using it to identify which patients stand the best chance of benefitting from medications set to undergo clinical trials, especially for those requiring biomarker-guided patient screening.

Expanding AI in Healthcare with Federated Learning

As hospitals expand the use of AI in pathology, data they obtain will be used to train future AI models, increasing accuracy. And through a process known as federated learning, hospitals can now securely share image data with others while confining sensitive patient information to their own servers—a capability once considered an impossible dream. JelloX is developing a new version of its software that enables federation.

“With federated learning, data will accumulate much faster, improving the AI and increasing speed and data uniformity,” Lin says. “Using AI in pathology will drive precision medicine, helping doctors improve diagnosis and treatment, and allowing pharmaceutical companies to develop new drugs much faster.”

In fact, in its immunohistochemistry imaging solution, the company is already leveraging Intel’s open-source framework Open Federated Learning (OpenFL) to enable seamless cross-institutional analysis of images with many of its customers.
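Conceptually, federated learning is simple: each hospital trains on its own images and shares only model weights, which an aggregator averages into a new global model. The sketch below illustrates that idea in plain Python with made-up numbers; it is a conceptual illustration of federated averaging, not OpenFL’s actual API.

    import numpy as np

    def local_update(weights, gradient, lr=0.01):
        """One training step at a single hospital (gradient is illustrative)."""
        return weights - lr * gradient

    def federated_average(site_weights):
        """The aggregator combines per-site weights without seeing raw data."""
        return np.mean(site_weights, axis=0)

    global_w = np.zeros(4)                     # shared model parameters
    for round_num in range(3):                 # three federation rounds
        site_updates = [local_update(global_w, np.random.randn(4))
                        for _ in range(2)]     # two participating hospitals
        global_w = federated_average(site_updates)
    print(global_w)

Only the weight arrays ever leave each site; the raw patient images stay behind the hospital’s own firewall, which is what makes the approach viable for sensitive medical data.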

“AI is becoming more prevalent in the healthcare space due to its immense potential to revolutionize healthcare delivery, improve patient outcomes, and enhance operational efficiency,” Lin adds.

Beyond federated learning, AI is also coming to healthcare in the form of chatbots and virtual assistants—enhancing patient engagement and support. Using natural language processing, conversational AI chatbots can help collect accurate patient information so doctors and nurses can focus better on patient care, according to Wu.

To learn more about developing healthcare AI solutions, check out these notebooks: Quantize a Segmentation Model and Show Live Inference and Part Segmentation of 3D Point Clouds with OpenVINO.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on September 22, 2022

Bringing Intelligent AI into the Physical World

The ongoing supply chain crisis has brought the workings of the nation’s cargo transportation system out of the shadows and into the mainstream news. It’s pretty apparent that a system that had functioned well for decades was not exactly up to the challenge the past couple of years threw at it. Modernization, digital transformation, and AI innovation were—and still are—desperately needed, or the whole country feels the consequences.

That’s where Scott Chmiel, Business Development Manager for Cloud Solutions at Arrow, a technology solutions provider; and Steen Graham, Co-Founder and CEO of Scalers.ai, come in. Between their two companies, they help customers navigate the intelligent IoT partner ecosystem—and not only in the environment of smart ports. Deploying AI in the physical world is applicable to all kinds of industries. And the benefits go beyond business to have a ripple effect on society at large.

What challenges do businesses currently face in their digital transformation efforts?

Scott Chmiel: The challenges have changed because the complexity of solutions has increased so much. In the past, everything was contained in a single piece of hardware or software, but now we’re adding cloud, we’re adding complexity, we’re adding technologies that not only require more from a tech standpoint but different skill sets from a development standpoint. Solutions now have to be integrated and deployed into existing customer environments that differ from one to another. Connected devices now require additional operational security. And, obviously, we can do things that weren’t possible before, such as machine learning and AI. It’s possible to solve business problems that we couldn’t even address in the past.

Steen, what is your perspective on these efforts?

Steen Graham: The challenge is deploying artificial intelligence and IoT in the physical world. Take the situation of a port. Obviously ports, and the infrastructure for ports, have been around for decades, and there are various existing applications that are working just fine there, but then you want to implement new technology. So how do you actually deploy these cloud-native methodologies—including artificial intelligence—on the existing infrastructure to do things like analyze efficiency and monitor CO2 emissions? Combining existing infrastructure with new infrastructure, from both a hardware and a software perspective, is critical to driving industry transformation and addressing the challenges in our supply chain.

The current federal government administration has been fantastic in supporting port modernization. But, interestingly, ports are actually managed by their local municipalities, so what those local leaders do has impacts on a national scale. Unions are also critically important to the situation. For example, one of the port jobs that has been sustained in the United States is crane operations. What we’ve automated is the front-end part—removing the containers from the ship—but we still have heavy investment in these human-performed, union-based roles in loading and unloading the trucks. So those three parties: the federal government, the local municipalities, and the unions are all incredibly important in this current crisis.

How do businesses go about making impactful technology changes?

Scott Chmiel: The first step is understanding what business outcome they’re seeking. What are they trying to accomplish, and who are the stakeholders? In the example of the Port of Los Angeles, there’s not just one company; there’s the municipality, the people handling the containers, the truck drivers, dozens if not hundreds of subcontractors—who all have to dance around each other to run the port. Our solution focuses on their challenges around safety, as well as just tracking in and out.

Combining existing #infrastructure with new infrastructure, from both a hardware and a software perspective, is critical to driving #IndustryTransformation and addressing the challenges in our #SupplyChain. @Arrow_dot_com and Scalers.ai via @insightdottech

Steen Graham: To answer the second part of the question, what Scott and I looked at was a no-compromise solution. From a simplistic, operating-system perspective, there are two pervasive operating systems in the world: Windows and Linux. Cloud-native workloads in modern AI applications are written in Linux, whereas a lot of existing workloads and applications have been written in Windows. By adding cross-platform capabilities to some of these technologies we’ve been able to retrofit the AI applications on existing infrastructure to make sure they work better together. Layering on modern cloud-native attributes and AI capabilities was really the approach we used in this particular solution.

What’s driving this cross-platform interoperability?

Scott Chmiel: Often it’s the existing hardware. And the technology, the infrastructure, can be applied to many different solutions—whether it’s a retail application, within a smart port, or in a warehouse—all the same types of challenges are there, and the same technology can be used and customized or repackaged. It brings additional value to the existing hardware they have, enabling things they couldn’t do before. In the example with the smart port, it was adding safety, and that’s applicable to retail, too: Before a crane moves through a warehouse, you want to make sure wherever it’s going is clear of people who might be in its way.

Steen Graham: From a technical point of view, we were given a gift—notably by Microsoft and Intel®—with the underlying technology. We use the acronym EFLOW, for Edge for Linux on Windows—or, more accurately, Azure IoT Edge for Linux on Windows. That is what gives us that no-compromise capability across Windows and Linux. And the hidden gem there is that Intel invested in hardware-acceleration capabilities via its integrated-graphics capability that allow us to do these workloads on deployed Intel-based CPUs without having to upgrade to expensive GPUs. Now we can run multiple AI models, multiple camera feeds on affordable, off-the-shelf technologies like Intel’s NUC platform, and Windows and Linux as well. It’s an incredible array of technology that allows us to deploy these modern workloads and make sure they’re interoperable with existing infrastructure.

How is EFLOW used in the port example?

Steen Graham: The EFLOW technology was only released late last year, so we are still in the engagement phase. From a business-outcome perspective, the problem that we were trying to address was the bottleneck associated with turn times: the operational-technology metric of how fast containers can be loaded and unloaded. So how do we optimize the turn times of those cranes? How fast can they be loaded and unloaded? How do we make sure the truck is in the right place at the right time? All while providing an enhanced safety experience for the workers on-site. And we are also tracking CO2 emissions, so another metric we’re looking at is how efficient the hybrid cranes are that many ports are using alongside their diesel cranes.
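To make the turn-time metric concrete, here is a minimal sketch of how it might be computed from crane event logs. The container IDs and timestamps are fabricated for illustration; a production system would ingest these events from the port’s operational systems.

    from datetime import datetime
    from statistics import mean

    # Hypothetical event log: container ID -> (picked up, released)
    events = {
        "MSCU1234567": (datetime(2022, 6, 1, 8, 14), datetime(2022, 6, 1, 8, 41)),
        "TGHU7654321": (datetime(2022, 6, 1, 8, 20), datetime(2022, 6, 1, 9, 2)),
    }

    turn_minutes = [(end - start).total_seconds() / 60
                    for start, end in events.values()]
    print(f"average turn time: {mean(turn_minutes):.1f} min")  # 34.5 min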

What other use cases or challenges might EFLOW solve?

Scott Chmiel: There are lots of opportunities: Transportation, industrial, and retail are a few different verticals. I know there’s a strong focus on retail from both Microsoft and Intel: The opportunities are there to do workload consolidation—consolidation of surveillance and point of sale, where one machine could do both. Or there could be new services that couldn’t be done before; once you have a visual element with the transaction, what kind of value can you generate out of that?

The code, the underlying technology, can be repurposed for any of those verticals. A lot of the work has already been done for them with the accelerators and the tools that Microsoft and Intel with OpenVINO have provided.

Steen Graham: Healthcare is another possible industry. If you look at medical-imaging equipment, such as ultrasound, a lot of ultrasound vendors have Windows-based applications, but they’re looking to add new AI-based features. An example is that anesthesiologists occasionally have challenges finding veins in their patients. You could use ultrasound equipment to determine with accuracy the location of the vein. You take existing Windows-based ultrasound equipment, and then overlay modern deep learning on top.

We’ve also seen an incredible demand in using computer vision to do defect detection in the manufacturing process, and I think that’s an incredible use case. If you do in-line AI defect detection, you can find the products that are having quality issues earlier in the manufacturing flow. And if you address those problems earlier in the flow, you actually end up using less fossil fuel to run through the rest of the process.

Can you talk about the partnerships that go into this process?

Steen Graham: Arrow is always looking to figure out how it can make one plus one equal three across its partnerships. So Scott came to us with an incredible idea about showcasing the value of this underlying EFLOW technology, and we were able to take technologies from Intel and Microsoft—and a number of open-source projects as well—to build that solution code. Where Scalers comes in is in really understanding how to fit all those things together into a high-fidelity enterprise AI solution, and then providing that solution, as well as building the custom AI models for deployment.

Scott Chmiel: Arrow calls itself an orchestrator and aggregator—whether it’s bringing the different technologies, services, or components together, or helping out with design. It’s hard for one company that has a vision or a challenge to have all the resources or the skill sets in-house to do everything for an end-to-end solution. So what Arrow looks to do is work with that end user and bring in appropriate partners. We help them pick the right solutions, not only for their end use but looking at the longevity, the overall life cycle, of that solution as well. Smart ports—that’s not something that’s going to be deployed and done within the course of a couple of years. And it should also be something that’s repeatable. The company that’s developing that solution, or that is bringing these pieces together, can reuse it and create more scale, create more value across the ecosystem.

Is there anything else we should know about EFLOW or this topic?

Steen Graham: I think as we talk about the cost of development and software engineering, it’s incredibly important that we write the code to integrate these partnerships. There are so many incredible companies with great technologies, but what many times is missing is the single line of code that connects the APIs to really drive transformation. As an industry, we really have to come together on the deployment challenge, because building capabilities in the cloud is fantastic, and it’s really affordable and easy to do these days. Where the challenge occurs is deploying it in the physical world, and the continuous learning, the transfer learning, the continuous annotation requirements to do that.

And, finally, although we’re getting really good at synthetic data and creating AI models with small data sets, if we really want to move society forward, we have to be able to build models with high fidelity on good data sets. And we have to do it with explainable AI, so that we know why it’s making its determinations in order to make sure it’s as inclusive as possible, as well as accurate.

Scott Chmiel: I’m always amazed when I talk to companies in specific verticals—whether it’s somebody running a warehouse, somebody in a port, somebody in surveillance or the medical industry—at the amount of knowledge they have about what they do. Their particular solutions are amazing. And as these solutions get more complex, I want to make sure people understand that there’s no need to go it alone. We’re no longer in the days of building a device that does one thing—it’s not just an MRI that does visioning; it’s how it integrates with the whole hospital. But companies don’t need to figure that out alone. And they really can’t do it alone with these more complex solutions. The barrier to what can be done keeps moving down; it’s amazing how many business problems that couldn’t be solved in the past now can be.

Related Content

To learn more about EFLOW, listen to the podcast Fast Track Innovative Apps: With Arrow and Scalers.ai. For the latest innovations from Arrow, follow them on Twitter at @Arrow_dot_com and LinkedIn at Arrow-Electronics.

 

This article was edited by Erin Noble, copy editor.

Machine Learning Simplified: With MindsDB

Machine learning is no longer just for the AI experts of the world. With ongoing initiatives to democratize the space, it’s for business users and domain users now, too. Users no longer need any programming knowledge to build and deploy machine learning models.

But democratizing machine learning does not mean data scientists and machine learning engineers are now obsolete. When machine learning becomes simplified, they spend less time acquiring, transforming, cleaning, and preparing data to train and retrain models. Instead, they can focus on the core aspects of machine learning, like unlocking valuable data and enabling business results.

In this podcast, we talk to machine learning solution provider MindsDB about why machine learning is crucial to a business’ data strategy, how democratizing machine learning helps AI experts, and the meaning of in-database machine learning.

Listen Here


Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: MindsDB

Our guest this episode is Erik Bovee, Vice President of Business Development for MindsDB. Erik started out as an investor in MindsDB before taking a larger role in the company. He now helps enable sophisticated machine learning at the data layer. Prior to MindsDB, he was a Venture Partner and Founder of Speedinvest as well as Vice President and Advisory Board Member at the computer networking company Stateless.

Podcast Topics

Erik answers our questions about:

  • (2:34) The current state of machine learning
  • (7:07) Giving businesses the confidence to create machine learning models
  • (8:48) Machine learning challenges beyond skill set
  • (11:24) Benefits of democratizing machine learning for data scientists
  • (13:39) The importance of in-database machine learning
  • (17:22) How data scientists can leverage MindsDB’s platform
  • (19:37) Use cases for in-database machine learning
  • (23:35) The best places to get started on a machine learning journey

Related Content

For the latest innovations from MindsDB, follow them on Twitter at @MindsDB and on LinkedIn.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about machine learning as part of your data strategy with Erik Bovee from MindsDB. But before we jump into the conversation, let’s get to know our guest. Erik, welcome to the show.

Erik Bovee: Thank you, yeah, it’s great to be here.

Christina Cardoza: What can you tell us about MindsDB and your role there?

Erik Bovee: So MindsDB is a machine learning service, and I’ll get into the details. But the goal of MindsDB is to democratize machine learning: make it easier, simpler, and more efficient for anybody to deploy sophisticated machine learning models and apply them to their business. I’m the Vice President of Business Development, which is a generic title with a really broad role. I’m responsible for our sales, but that’s kind of a combo of project management and some product management; I kind of do everything with our customers. And then a really important aspect that I handle is our partnerships. One of the unique things about MindsDB is that we enable machine learning directly on data in the database. So we connect to a database and allow people to run machine learning, especially on their business data, to do things like forecasting and anomaly detection. So I work with a lot of database providers: MySQL, Cassandra, MariaDB, Mongo, everybody. And that’s one of the key ways that we take our product to market: working with data stores, streaming brokers, data lakes, and databases to offer machine learning functionality to their customers. So I’m in charge of that. I also work with Intel®. Intel has provided a lot of support. They’re very close with MariaDB, which is one of our big partners, and Intel also provides OpenVINO™, a framework that helps accelerate the performance of our machine learning models. So I’m in charge of that as well.

Christina Cardoza: Great. I love how you mentioned you’re working to democratize machine learning for all. I don’t think it’s any surprise to businesses out there that machine learning has become a crucial component of a data management strategy; especially with the influx of data coming from all of these IoT devices, it’s difficult to sift through all of that by yourself. But a challenge is that there aren’t a lot of machine learning skills to go around. So to start off the conversation, can you talk about what the state of machine learning adoption looks like today?

Erik Bovee: Yeah, I mean, you summed up a couple of the problems really well. The amount and the complexity of data is growing really quickly, and it’s outpacing human analytics, and even algorithmic-type analytics, traditional methods. And also, machine learning is hard. Finding the right people for the job is difficult; these resources are scarce. But in terms of the state of the market, there are a couple of interesting angles. First, the state of the technology itself, the core machine learning models, is amazing. Just the progress made over the last five to ten years is really astonishing. And cutting-edge machine learning models can solve crazy hard real-world problems. Look at things like what OpenAI has done with its GPT-3 large language models, which can produce human-like text. Or even consumer applications: you’ve probably heard of Midjourney, which you can access via Discord and which, based on a few keywords, can produce really sophisticated, remarkable art. There was a competition—I think it was in Canada recently—that a Midjourney-produced piece won, much to the annoyance of actual artists. So the technology itself can do astonishing things.

From an implementation standpoint though, I think the market has yet to benefit broadly from this. You know, even autonomous driving is still more or less in the pilot phase. And the capabilities of machine learning are amazing in dealing with big problem spaces—dynamic, real-world problems, but adapting these to consumer tech is a process. And they’re just—there are all kinds of issues that we’re tackling along the way. One is trust. You know, not just, can this thing drive me safely? But then also, how do I trust that this model’s accurate? Can I put my—the fate of my business on this forecasting model? How does it make decisions? So those are, I think those are important aspects to getting people to implement it more broadly.

And then I think one of the things that’s really apparent in the market, as I’m dealing with customers, are some of the hurdles to implementation. Cutting-edge machine learning resources are rare, which we said, but a lot of the simpler stuff, like machine learning operations, turns out to be more of a challenge than people anticipated. So the data collection, the data transformation, building all the infrastructure to do the ETL (extracting, transforming, and loading data from your database into a machine learning application), and then maintaining all this piping and all these contingencies. Model serving is another one. Your web server is not going to cut it when you’re talking about large machine learning models, for all kinds of technical reasons. And these are all being solved piecemeal as we speak, but the market for that is in an early stage. Those are dependencies that are really important for broad adoption of machine learning.

But there are a few sectors, I would say, where commercial rollout is moving pretty fast, and I think they’re good bellwethers for where the market is headed. Financial services is a good example and has been for a few years. Big banks, investment houses, hedge funds: they’ve got the budgets and the traditional approach to hiring around a good quant strategy. They’re moving ahead pretty quickly, often with well-funded internal programs. Those give them a really big edge; they’ve got the money to deploy this, and narrow business advantages in things like forecasting and algorithmic trading are tremendously important to their margins. So I’ve seen a lot of progress there. But a lot of it is also throwing money at the problem and solving these MLOps questions internally, which is not necessarily applicable to the broader market.

The next are, I would say, industrial use cases. You had mentioned IoT. That’s where I see a lot of progress as well, especially in things like manufacturing. For example, taking tons of high-velocity sensor data and doing things like predictive maintenance. You know, what’s going to happen down the line? When will this server overheat, or something? That’s where we’ve seen a lot of implementation as well. I think those sectors, those market actors are clearly maturing quickly.

Christina Cardoza: So, great. Yeah. I want to go back to something you said about trust, because I think trust goes a little bit both ways. You mentioned how businesses have to trust the solution or the AI to do this correctly and accurately. But I think there’s also a trust factor that the person deploying or training the machine learning models knows what they’re doing. So, when you democratize AI, how can business stakeholders be confident and comfortable that a business or an enterprise user is training and creating these models and getting the results that they’re looking for?

Erik Bovee: Yeah. I think a lot of that starts with the data: really understanding your data and making sure there aren’t biases. Explainable AI has become an interesting subject over the last few years as well, looking at visualizations and different techniques, like Shapley values or counterfactuals, to see how the model is making decisions. We did a big study on this a few years back. One of the most powerful ways of getting business decision-makers on board and understanding exactly how the model operates (which is usually pretty complex even for machine learning engineers; once the model is trained, the magic that’s going on internally is not always clear) is providing counterfactual explanations. That means changing the data in subtle ways so that you get a different decision. Maybe the machine learning forecast changes dramatically when one feature in the database, a few data points, or just a very slight change is introduced, and you learn where that threshold is. It shows what’s really triggering the decision-making or the forecasting in the model, and which columns or features are really important. If you can visualize those, it gives people a much better sense of what’s going on and how the decisions are weighted. That’s very important.
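
To make counterfactual probing concrete, here is a minimal sketch in MindsDB’s SQL dialect, assuming a hypothetical trained classifier (mindsdb.churn_model) and hypothetical feature names; exact syntax varies by MindsDB version. You query the model directly for one prediction, then nudge a single input to see where the decision flips:

    -- Ask the model for a single prediction (all names are hypothetical)
    SELECT churn FROM mindsdb.churn_model
    WHERE monthly_logins = 12 AND support_tickets = 1 AND plan = 'free';

    -- The same query with one feature changed: if the prediction flips,
    -- monthly_logins is what drives the decision near this threshold
    SELECT churn FROM mindsdb.churn_model
    WHERE monthly_logins = 2 AND support_tickets = 1 AND plan = 'free';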

Christina Cardoza: Absolutely. I’m also curious: we mentioned some of the challenges, a big one being not enough skill sets or data scientists available within an organization, but even if you do have the skills available, it’s still complex to train machine learning models or to deploy them to applications. So can you talk about some of the challenges businesses face beyond skill set?

Erik Bovee: Interestingly—so, skill set is one, but I think that will diminish over time. There are more and more frameworks that allow people, data analysts or data scientists, to get access to more sophisticated machine learning features; AutoML has become a thing over the past few years, and you can go a long way with AutoML frameworks like DataRobot or H2O. What is often challenging are some of the simple operational things in the short term, on the implementation side. A lot of the rocket science is already done by these fairly sophisticated core machine learning models, but a huge amount of a data scientist’s or ML engineer’s time is spent on data acquisition, data transformation, cleaning and encoding the data, and building all the pipeline for preparing this data to train and retrain a model. Then maintaining that over time.

You know, the data scientist’s tool set is often based on Python, which is where a lot of these pipelines are written, and Python is arguably not very well adapted to data transformations. And then what happens? You’ve often got this bespoke Python code written by a data scientist, maybe things that are being done in a Jupyter Notebook somewhere, and it becomes a pain to update and maintain. What happens when your database tables change? Then what do you do? You’ve got to go back into this Python, and it’s all reliant on this one engineer to update everything over time. The MLOps side, I think, is one of the biggest challenges: How do you do something that is efficient and repeatable, and also predictable in terms of cost and overhead over time? And that’s something that we’re trying to solve.

And one of the theories behind our approach is to bring machine learning closer to the data and to use existing tools like SQL to do a lot of this work. SQL is pretty well adapted to data transformation and manipulating data; that’s what it’s designed for. So why not find a way where you can apply machine learning directly, via a connection to your database, use your existing tools, and not have to build any new infrastructure? I think that’s a big pain point, and one of the bigger bottlenecks that we’re trying to solve actively.
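
To picture the workflow Erik describes, here is a minimal sketch in MindsDB’s SQL dialect, assuming a hypothetical MySQL source and table names; newer MindsDB releases use CREATE MODEL in place of CREATE PREDICTOR, so treat the exact statements as version-dependent:

    -- Register an existing database as a MindsDB data source
    CREATE DATABASE sales_db          -- hypothetical MySQL database
    WITH ENGINE = 'mysql',
    PARAMETERS = {
      "host": "db.example.com",
      "port": 3306,
      "database": "sales",
      "user": "readonly",
      "password": "secret"
    };

    -- Train a predictor straight from a SQL selection, with no ETL pipeline
    CREATE PREDICTOR mindsdb.revenue_forecast
    FROM sales_db (SELECT region, month, revenue FROM monthly_sales)
    PREDICT revenue;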

Christina Cardoza: So, you touched on this a little bit, but I’m wondering if you can expand on the benefits that the data scientists will actually see if we democratize machine learning. How can they start working with some of those business users together on initiatives for machine learning?

Erik Bovee: Yeah. So one of our goals is to give data scientists a broader tool set, and to save them a lot of time on the cleanup and the operational tasks that they have to perform on the data, allowing them to really focus on core machine learning. The philosophy of our approach is data-centric: you’ve got data sitting in the database, so why not bring the machine learning models to the database, and let you do your data prep and train a model, say in an SQL-based database, using simple SQL with some modifications to the syntax from the MindsDB standpoint? And we’re not consuming database resources; you just connect MindsDB to your database. We read from the database, and then we can pipe machine learning predictions (business forecasts, for example) back to the database as tables that can then be read just like your other tables.

The benefit there, for data analysts and any developer building a front-end application that needs to make decisions (algorithmic trading, anomaly detection, sending up an alert when something’s going wrong, or just visualizing data in a BI tool like Tableau), is that you can use the existing code that you’ve got. You simply query the database just like you would from any other application. There’s no need to build a special Python application or connect to another service. It’s simply there, and you access it just like you would access your data normally. So one of the business benefits is that it cuts down considerably on bespoke development, is very easy to maintain in the long term, and lets you use the tools you already have.
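
As a sketch of that last point, a trained predictor can be read like any other table. Continuing the hypothetical revenue_forecast example, a batch of forecasts is just a JOIN:

    -- Join live rows against the predictor to get a forecast per row
    SELECT t.region, t.month, p.revenue AS predicted_revenue
    FROM sales_db.monthly_sales AS t
    JOIN mindsdb.revenue_forecast AS p;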

Christina Cardoza: So you mentioned you’re working to bring machine learning closer to the data, or bringing machine learning into the database. I’m wondering, is this how, traditionally, machine models have—machine learning models have been deployed, or is there another way of doing it? So, can you talk about how that compares to traditional methods—bringing it into the database versus the other ways that organizations have been doing this?

Erik Bovee: So, traditionally, machine learning has been approached the way people would approach a PhD project or something. You would write a model using an existing framework like TensorFlow or PyTorch, usually writing the model in Python. You would host it somewhere, probably not with a web server; there are Ray and other frameworks that are well adapted to model serving. And then you have data you want to apply. It might be sitting all over the place: maybe some in a data lake, some in Snowflake, some in MongoDB, wherever. You write pipelines to extract that data and transform it. You often have to do some cleaning, and then data transformations and encoding; sometimes you need to turn this data into a numerical representation, into a tensor, and then feed it into a model and train the model. The model will spit out some predictions, and then you have to pipe those back into another database, perhaps, or feed them to an application that’s making some decisions. So that would be the traditional way. You can see there’s a bespoke model that’s been built, and a lot of bespoke infrastructure, pipelines, and ETL that’s been done. That’s the way it’s been done in the past.

With MindsDB, what we did is this: MindsDB has two components. One is a core suite of machine learning models. There’s an AutoML framework that does a lot of the data prep and encoding for you. We’ve built some models of our own, and some are built by the community. I forgot to mention that MindsDB is one of the largest open-source machine learning projects; we have close to 10,000 GitHub stars. And there’s a suite of machine learning models adapted to different problem sets: regression models, gradient boosters, neural networks, all kinds of things. MindsDB can look at your data, decide which model best applies, and choose that.

The other piece of this ML core of MindsDB is that you can bring your own model to it. So if there’s something you particularly like, say a Hugging Face NLP model, a language-processing model, you can actually add that to the MindsDB ML core using a declarative framework. Back in the day, if you wanted to make updates to a model or add a new model, you’d have to root around in someone else’s Python code. But we allow you to select the model you want, or bring your own model. You can tune some of the hyperparameters, things like learning rate, or change weights and biases using JSON, a human-readable format. So it makes it much easier for everybody to use.
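
Here is a hedged sketch of what that declarative approach can look like at training time. The engine name reflects MindsDB’s default Lightwood engine, but the table and the learning_rate key are purely illustrative, and the available USING options differ across versions:

    -- Train a predictor and pass training parameters declaratively
    CREATE PREDICTOR mindsdb.sentiment_model
    FROM sales_db (SELECT review_text, sentiment FROM reviews)
    PREDICT sentiment
    USING
      engine = 'lightwood',        -- MindsDB's default AutoML engine
      learning_rate = 0.01;        -- hypothetical hyperparameter override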

And then the other piece of MindsDB is the database connector—a wrapper that sits around these ML models and provides a connection to whatever data source you have. It can be a streaming broker like Redis or Kafka; it can be a data lake like Snowflake; it can be an SQL-based database. MindsDB will connect to that database, and then, using the native query language, you can tell MindsDB, “Read this data and train a predictor on this view or these tables or this selection of data.” MindsDB will do that and then make the predictions available. Within your database you can query those predictions just like you would a table. So it’s a very different concept from your traditional kind of homegrown, Python-based machine learning applications.

Christina Cardoza: And it sounds like, with a lot of the features MindsDB is offering with its solution, data scientists can go in themselves, expand on their machine learning models, and utilize this even more. So if you do have a data science team available within your organization, what would be the benefit of bringing MindsDB in?

Erik Bovee: This is the thing that I think it’s important to make really clear: we are not replacing anybody, and it’s not really an AutoML framework. It allows for far more sophisticated applied machine learning than a tool that just gives you a good approximation of what a hand-tuned model would do. Basically, for a machine learning engineer or a data scientist, MindsDB saves a tremendous amount of that 80% of their work that goes into data wrangling: cleaning, transforming, and encoding. They don’t have to worry about that. They can really focus on the core models: selecting the data they want to train from, and then building the best models, if that suits them, or choosing from a suite of models that work pretty well within MindsDB, and then tuning those models. A lot of the work goes into adapting and tuning the hyperparameters of a model to make sure you get the best results, and we make that much simpler: you can do it in a declarative way rather than rooting around in someone’s Python code. So the whole thing is about time savings for data scientists, I think.

And then, in the longer term, if you connect this directly to your database, it means you don’t have to maintain a lot of the ML infrastructure that up until now has been largely homegrown. If your database tables change, you just change a little bit of SQL: what you’re selecting and what you’re using to train a predictor. You can set up your own retraining schedule. There are lots and lots of operational time- and cost-saving measures that come with it. So it allows data scientists and machine learning engineers to really focus on their core job and produce results in a much faster way, I think.
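
In practice that retraining step can be a one-liner. A sketch, reusing the hypothetical predictor from earlier:

    -- Re-fit the predictor on the current contents of its source tables
    RETRAIN mindsdb.revenue_forecast;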

Christina Cardoza: Great. Yeah. I love that point you made that it’s not meant to replace anybody or data science per se, but it’s really meant to boost your machine learning efforts and make things go a little bit smoother.

Erik Bovee: Yeah. In a nutshell, it just saves a data scientist tons of time and gives them a richer tool set. That’s—that was our goal.

Christina Cardoza: So, do you have any customer examples or use cases that you can talk about?

Erik Bovee: Yeah, tons. They fall into two buckets. We really focus on business forecasting, often on time-series data. And time-series data can be a bit tricky even for seasoned machine learning engineers, because you’ve got a high degree of cardinality. You’ll have tons of data where there are many, many unique values in a column, for instance; by definition, that’s what a time series is. And imagine you’ve got something like a retail chain that has thousands of SKUs, thousands of product IDs, across hundreds of retail shops. That’s just a complex data structure, and you’re trying to predict what’s going to sell well—maybe a certain SKU sells well in Wichita, but it doesn’t sell well in Detroit. How do you predict that? It’s a sticky problem to solve because of the high degree of cardinality in these large, multivariate time series, but it also tends to be a very common type of data set for business forecasting. So we’ve really focused our cutting-edge large models on time-series forecasting. That’s what we do: we will tell you what your business is going to look like in the future, in weeks or months.
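
MindsDB expresses this kind of grouped, high-cardinality forecast with a few dedicated SQL clauses. A minimal sketch with hypothetical tables and columns (WINDOW and HORIZON follow MindsDB’s time-series syntax; details vary by version):

    -- One series per store and SKU: look back 12 weeks to predict 4 ahead
    CREATE PREDICTOR mindsdb.sales_forecast
    FROM sales_db (SELECT store_id, sku, week, units_sold FROM weekly_sales)
    PREDICT units_sold
    ORDER BY week
    GROUP BY store_id, sku
    WINDOW 12
    HORIZON 4;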

The other bucket we see in the use cases, alongside forecasting on time series, is anomaly detection: fraudulent transactions, or is this machine about to overheat? Getting down into the details, I can tell you we see all kinds of different use cases across the board. One very typical one is for a big cloud service provider, where we do customer-conversion prediction. They have a generous free-trial tier, and we can tell them with a very high degree of accuracy (based on lots of columns in their customer data store, lots of different types of customer activity, and the structure of their customer accounts) who’s likely to convert to a paying tier, and when. And precisely when, which is important for their business planning. We’re also working with a large telco infrastructure company on network planning and capacity planning. So we can predict fairly well where network traffic is going to go, where it’s going to be heavy and where not, and where they need to add infrastructure.

We’ve also worked on—this is a typical IoT case—manufacturing process optimization in semiconductors. We can look in real time at sensor data coming in from the semiconductor process, and we can say when to stop and go on to the next phase of the process, and where defects are also likely to arise, based on some anomaly detection on the process. We’re working on one project like that in particular, and we’ve seen a couple more in pilot phases. We’ve also been doing credit scoring in real estate (payment-default prediction) as part of the business forecasting. So those are all typical, and across the board we see forecasting problems on time series.

One of the most enjoyable projects, one that’s unique and interesting and really close to my heart, is with a big esports franchise, building forecasting tools for coaching professional video game teams. Can you predict what the other team is going to do, for internal scrimmages and internal training? And what would be the best strategy given a certain situation in complex MOBA games like League of Legends or Dota 2? That’s something we’re working on right now. They’ve already built the front end of these forecasting tools, and we’re working with very large proprietary data sets of internal training data to help them optimize their coaching practices. It’s an exotic case, but I guarantee you that’s going to grow in the future. So that’s one of the most interesting ones.

Christina Cardoza: So, there are lots of different use cases and ways that you can bring these capabilities into your organization. But I’m wondering, in your experience, what is the best place to start on democratizing machine learning for a business? Where can businesses start using this, and where do you recommend they start?

Erik Bovee: Super easy: cloud.mindsdb.com. We have a free-trial tier, and it’s super easy to set up and get signed up for an account. And then we have—God knows how many—50-plus data connectors. Wherever your data is living, you can simply plug in MindsDB, start to run some forecasting, do some testing, and see how it works. You can take it for a test drive immediately; that’s one of the first things I would recommend you do. The other thing is you can join our community. If you go to MindsDB.com, we’ve got a link to our community Slack and to GitHub, which is extremely active. There you can find support and tips, and if you’re trying to solve a problem, it’s almost guaranteed someone has solved it before and is available on the Slack community.

Christina Cardoza: Great. I love when projects and initiatives have a community behind them, because it’s really important to learn what other people have been doing and to get that outside support and outside thinking that you may not have considered. And I know you mentioned, Erik, in the beginning that you are also working with Intel on this; I should mention the IoT Chat and insight.tech as a whole are sponsored by Intel. But I’m curious how you are working with Intel and what the value of that partnership has been.

Erik Bovee: Yeah, Intel has been extremely supportive on a number of fronts. Obviously, Intel has a great hardware platform, and we have implemented their OpenVINO framework, which optimizes machine learning for performance on Intel hardware, so we make great performance gains that way. And on top of that, Intel provides tons of technology and go-to-market opportunities. We work with them on things like this. I’ll be presenting at the end of the month, if anybody wants to come check us out, at Intel Innovation in San Jose; I think it’s on the 27th and 28th of this month at the San Jose Convention Center. We’ll have a little booth in the AI/ML part of their innovation pavilion, and I’ll be demoing how we work, running some machine learning on data in MariaDB, which is an Intel partner. Actually, MariaDB introduced us to Intel, and that’s been really fruitful; their cloud services are hosted on Intel. So if anybody wants to come and check it out, Intel has provided us this forum, and we’re extremely grateful.

Christina Cardoza: Perfect, and insight.tech will also be on the floor at Intel Innovation. So, looking forward to that demo that you guys have going on there at the end of the month. Unfortunately, we’re running towards the end of our time. I know this is a big, important topic and we could probably go on for a long time, Erik, but before we go, are there any final key thoughts or takeaways you want to leave our listeners with today?

Erik Bovee: I would just urge you to go test it out; we love the feedback. It’s actually fun. MindsDB is pretty fun to play with. That’s how I got involved: I discovered MindsDB by chance, installed it, started using it, and found it was just useful on all kinds of data sets, even just doing science experiments. We love it. If you take it for a test drive, provide feedback on the community Slack; we’re always looking for product improvements and people to join the community, so we’d really welcome that: cloud.mindsdb.com. And thanks very much for the opportunity, Christina.

Christina Cardoza: Yep, of course. Thank you so much for joining the podcast today. It’s been a pleasure talking to you. And thanks to our listeners for tuning in. If you like this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.