Inside the Latest Intel® Processors with ASRock Industrial

Kenny Chang


Calling a CPU “revolutionary” is a big claim—but the 12th Generation Intel® Core processors have a lot of features to back up that boast. From an all-new hybrid architecture to dramatically better graphics, the latest Intel® processors can be used for high-performance AI, workload consolidation, and so much more.

Join us as we explore the most exciting new IoT features, why they matter, and how industries are already leveraging the new processors, in this podcast episode with ASRock Industrial.

Our Guest: ASRock Industrial

Our guest this episode is Kenny Chang, Vice President of the System Product BU at ASRock Industrial, a leading industrial computer provider. Kenny has a wide range of experience covering server, edge AIoT, and embedded computing hardware and software technologies, as well as leadership positions in product and engineering management. Before joining ASRock Industrial, he was Vice President of Product Development at AEPX Global and Director of IoT Business Development at Compal.

Kenny answers our questions about:

  • (1:52) The most exciting features of the 12th Generation Intel® Core processors
  • (3:19) Why this release has the potential to revolutionize IoT applications
  • (10:13) How companies can benefit from the GPU upgrade
  • (13:32) The software capabilities of the new core processors
  • (16:07) How ASRock is helping customers quickly take advantage of the new features
  • (20:13) The power of Intel® to deliver and support development efforts
  • (22:25) How companies are already using the latest Intel® Core processors
  • (24:35) The importance of the new hardware for security features

Related Content

To learn more about the 12th Generation Intel® Core Desktop and Mobile processors, read CES 2022: Intel® Launches Revolutionary CPU Architecture. For the latest innovations from ASRock Industrial, follow them on LinkedIn at ASRock-Industrial.

 

This podcast was edited by Christina Cardoza, Senior Editor for insight.tech.

 


Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises.  I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode, we talk to a leading expert about the latest developments in the Internet of Things. Today, I’m discussing the new 12th Gen Intel® Core processors with Kenny Chang, Vice President of the System Product Business Unit at ASRock Industrial.

The new CPUs, code-named Alder Lake, were just announced at CES 2022. They pack a ton of cool features, like an all-new hybrid architecture, massively upgraded GPUs, and real-time capabilities. As one of the first companies to come to market with the new chips, ASRock Industrial has unique insights on these processors. So I’m really looking forward to hearing Kenny’s thoughts.

So with that, Kenny, I would like to welcome you to the podcast.

Kenny Chang: Hi, thank you for having me and it’s my honor to join the podcast.

Kenton Williston: Yeah, absolutely. I’m curious about your career. What did you do before joining ASRock Industrial?

Kenny Chang: I was in charge of the product-development division as vice president, and our products were mainly medical devices for what we call IoM. That means the Internet of Medical. The major piece of equipment we were developing was the flat panel detector used in X-ray systems. That’s what I did before I joined ASRock Industrial.

Kenton Williston: So as I was just saying, there are so many new features in the latest Intel® Core processors that there are a lot of things we could spend our time talking about. But the first thing I would like to know is which features you are most excited about in the latest Intel® Core processors, and why?

Kenny Chang: I think the most amazing feature is the hybrid architecture, combining the performance cores with the efficient cores. I think that gives us a very flexible way to manage workloads, especially in software-defined everything: we can adjust which core is doing what kind of job accordingly. And I think this is the major benefit when we adopt the Alder Lake-S processor into our products.

Kenton Williston: Yeah, that totally makes sense. And I should mention too that this podcast as well as the overall insight.tech program are produced by Intel, so of course, we have very specific reasons to want to talk about the latest Intel technologies. But having said that I agree the hybrid architecture is very interesting. This is something that’s become more popular in a variety of CPU designs and, just like you said, having both high performance and good efficiency is a really amazing combination.

You can combine these two things together and not have to give up low power to achieve high performance or vice versa. That’s very, very helpful. One of the things that Intel has been saying about these new processors is that they are revolutionary, which is, of course, a very strong claim and I’m wondering how you think they might revolutionize IoT applications.

Kenny Chang: IoT applications are very diversified, across various vertical markets such as automation, automotive, smart city, energy, or even smart retail. We have so many software applications on the end products, and we are moving to microservices enabled by containers. That means there are many containers running simultaneously on one edge platform. The edge is especially important: I think there is a major migration from the data center to decentralized computing. That means we have edge computing running on local sites, which can reduce the latency between the cloud and the devices. So the edge is the best solution for this kind of situation.

Back to the 12th Generation Alder Lake processor: we can embed this powerful processor into edge devices. With that, we need not only a powerful but also a flexible architecture to deal with all the microservices and tasks. That’s the first thing I would like to highlight here. The second is that we run lots of mission-critical tasks on the edge. Real time is really a must for every operation, and the 12th Generation Alder Lake-S processor features real-time control through TSN and TCC.

So that’s a good feature that gives us more confidence that all tasks occur in synchronization, in real time. Those are the two major benefits for IoT applications.

Kenton Williston: Yeah, I think those are all really good points. So, let’s see if I can summarize those and hopefully add something useful on top. The point you made about microservices and containers is a very good one. It’s very reflective of a big change in how edge computing is done. I hate to date myself like this, but I’ve been working in this space for, let’s see, I guess it’s going to be 22 years this year. When I began my career, the things you would find in what we’d now call edge computing, which back then was called embedded computing, ran very specialized code. You had to have very specific knowledge to write for these devices. And now people are looking more and more to use the same kinds of coding practices that you would find people using in the cloud.

And I think this is a good change, because it gives you so much more flexibility. It is important, I think, to move some things out of the cloud to the edge, but of course there’s always this back and forth. Sometimes things need to be decentralized, sometimes they need to be centralized. And I really like the way that, with modern application development processes, you gain a lot of flexibility: a microservice can just run wherever it makes the most sense. But in order for that to happen, you need to have a platform that will support running these microservices. And I think the Alder Lake platform is a very good one for that.

Kenny Chang: Yeah, exactly.

Kenton Williston: And then you mentioned the importance of the hybrid architecture. One of the things that’s important here is that you can configure your system so that the less performance-hungry microservices, and in general the less performance-hungry tasks, run on the efficient cores. If you don’t actually need the high-performance cores at the moment, you’re running very efficiently at very low power, which is important for all kinds of reasons. Obviously, if you have something that is battery powered, it’s very important not to draw too much power. But even if you have something that is plugged into the wall, in many situations it’s very important not to run too hot, because then you start having much more complicated systems that are less reliable, with more moving parts and all those sorts of things.

And of course, in addition to this overall trend of edge equipment looking more and more, at least from a software perspective, like what’s running in the cloud, there has been a very strong trend toward IT/OT convergence. A lot of business workloads increasingly overlap with IoT devices, so it’s very useful to have a platform that can run different business services as well as edge computation. There are all kinds of use cases for this, where you might want to combine things in a lot of new and interesting ways.

And one thing in particular: you talked about the importance of moving things out of the cloud to the edge to minimize bandwidth utilization and latency. Similarly for the real-time computing capabilities, which I believe these are the first Core processors to offer. These are both important capabilities in many applications, but one thing that’s been growing really quickly is AI applications, which can be very, very data hungry. And I think this is a perfect example of where the computing really needs to happen at the edge. Would you agree with that?

Kenny Chang: Yeah, sure. Absolutely.

Kenton Williston: And that leads me to another thing I wanted to get your opinion on: the GPUs. They’re very heavily upgraded. Of course, when Intel announced these new parts, the main thing they showed off was how you could play the latest and greatest games on these processors, which is maybe not the most relevant thing for industrial and healthcare applications. But there is a lot of relevance here that might surprise people, because you can actually use these GPUs to accelerate AI workloads quite a bit. Is that something that you’re seeing as an important way to use these processors?

Kenny Chang: As you mentioned, AI is the megatrend. And right now we can see that Alder Lake-S has big improvements in GPU performance. As I understand it, there is up to 1.94x faster graphics performance, and up to 2.81x faster GPU image-classification performance, compared to the previous generation. The great benefit for us is that we can eliminate the additional GPU card in our box. That’s good for us, especially in industrial use cases, because it can reduce a lot of maintenance cost. And we don’t have to sacrifice any performance, either.

Kenton Williston: Yeah, I think that’s all very true. And I think there’s an awful lot of benefit to be had from being able to execute image classification and other AI workloads directly on the CPU: a less complex, easier-to-maintain, lower-cost system if you don’t have to add a graphics card. And that has always been true. If you can avoid adding more parts, it’s always better. But especially at the moment, graphics cards are very hard to obtain, and of course they’re very expensive when you can get hold of them. So I think it’s really nice to have a platform where you can get a tremendous amount of performance out of the GPU right out of the box, without having to add any cards. Are there any other benefits to the GPU upgrade? Like I said, your customers are probably not too worried about gaming, but the GPUs are still quite upgraded, and I’m wondering if you’re seeing any other use cases beyond things like image classification.

Kenny Chang: Another case is factory automation. We have AOI (automated optical inspection) integrated with AI capability to enhance the capability and productivity of defect inspection, without a GPU card. It also allows a more compact size, which can be put into the enclosure. That’s the other key feature for us: to have the same performance, but in a more compact size integrated into the production line.

Kenton Williston: Yeah. That makes sense. And you’re making a very good point that, just the size of the solution by itself can be very important. There are a lot of applications, such as smart city applications, where you might need to squeeze the equipment into an existing space that’s quite limited, or on a manufacturing line where it’s already crowded, any space savings you can get will be very beneficial. That is a very good point. And I’m glad you brought it up. One thing I’m wondering, beyond the hardware attributes, of course, you need to be able to program these things. And I’m wondering from a software perspective, how your customers can best take advantage of all of these new features?

Kenny Chang: Yeah. I think with the hybrid architecture, some of the heavy workloads need a hugely powerful processor to deal with them, so those can be assigned to the P-cores, what we call the performance cores. But some background tasks, such as management tasks, don’t need such a powerful processor; they can go to the efficient cores. This means developers can easily assign which task runs on which core. So I think the major benefit for the software-development side is to leverage that flexibility.
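The task-to-core assignment Kenny describes can be sketched with a standard CPU-affinity API. The snippet below is an illustrative sketch only: the core IDs are hypothetical, since which logical CPUs map to P-cores or E-cores depends on the specific processor, and it uses Python’s `os.sched_setaffinity`, which is available on Linux.

```python
import os

# On a hybrid CPU, the OS exposes P-cores and E-cores as ordinary
# logical CPUs. Here we simply pin the current process to the first
# two logical CPUs it is allowed to use; on a real Alder Lake system
# an integrator would first look up which CPU IDs are the E-cores.
available = sorted(os.sched_getaffinity(0))  # CPUs we may run on now
subset = set(available[:2])                  # pretend these are E-cores

os.sched_setaffinity(0, subset)              # restrict this process
print(sorted(os.sched_getaffinity(0)) == sorted(subset))

os.sched_setaffinity(0, set(available))      # restore original affinity
```

In practice, Intel’s Thread Director and the OS scheduler make these placement decisions automatically; explicit pinning like this is just one way an integrator might reserve efficient cores for background services.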

Kenton Williston: And are your customers using things like Intel® oneAPI to take advantage of this hardware?

Kenny Chang: They are mostly using OpenVINO for AI inference tasks. Intel also has oneAPI; I think it is good for them to get the APIs they want on one platform. I hear from them that this platform takes a lot off their workload.

Kenton Williston: That’s really great. And I’m glad you mentioned the OpenVINO toolkit. This is a platform that provides a layer of software abstraction so that you can create and implement different kinds of AI algorithms without having to know all the fine details of the architecture. And it’s very useful when Intel does things like this latest-generation Intel® Core processor, which has a very much faster GPU: you don’t necessarily have to worry about rewriting your code. You just get a performance boost, which is very, very helpful. So, I’m wondering how some of this is playing out in practice. I understand you worked with a company called DMS, and you mentioned one of your major lines of business is automated optical inspection. And I understand you did some work with DMS using the 12th Gen Intel® Core processors. Can you tell me about some of the challenges that company was facing and how they benefited from using Alder Lake?

Kenny Chang: At our first touch with this customer, they had introduced AOI with AI to enhance accuracy and efficiency. They really did a good job compared to not introducing AI into the AOI. But they also encountered challenges regarding data transmission: moving data between computer A and computer B took too long. As you know, image sizes are very big, a few megabytes for one image. We saw this challenge and heard the pain points from their viewpoint. Then we thought the Alder Lake-S processor could ease their headache a lot. So we made a proposal to them: we would like to build a workload-consolidation solution. That means we would integrate computer A and computer B into one platform. And most important, we used virtualization with KVM, combining the Windows OS and the Linux OS onto one hardware platform. And we addressed the data transmission with shared-memory technology. That can make it 100 times faster compared to before.
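The shared-memory idea can be illustrated in miniature with Python’s standard library. This is a hedged sketch, not ASRock’s actual middleware: the block name and image size are made up, and both “computers” here are simply two handles in one process, standing in for the two consolidated VMs. Instead of serializing a multi-megabyte image and pushing it over a network link, both sides map the same block of memory.

```python
from multiprocessing import shared_memory

# Illustrative size only: "a few megabytes for one image."
IMAGE_SIZE = 4 * 1024 * 1024

# "Computer A" (the capture side) writes an image into shared memory.
writer = shared_memory.SharedMemory(create=True, size=IMAGE_SIZE,
                                    name="aoi_frame")
image = bytes(range(256)) * (IMAGE_SIZE // 256)  # fake image payload
writer.buf[:IMAGE_SIZE] = image

# "Computer B" (the inference side) attaches to the same block by name.
# No copy over a network link takes place: both handles see the same bytes.
reader = shared_memory.SharedMemory(name="aoi_frame")
received = bytes(reader.buf[:IMAGE_SIZE])
print(received == image)

reader.close()
writer.close()
writer.unlink()
```

The 100x figure Kenny quotes comes from eliminating the network hop entirely; in the real deployment the two sides are separate guest OSes sharing host memory through the hypervisor, which this single-process sketch only gestures at.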

Kenton Williston: Yeah. And I think this is a really good example; I really appreciate you sharing that with us, Kenny. Many, many factories and other industrial use cases are in a similar situation right now. AI is the big megatrend; it’s being deployed in all kinds of use cases. But if you’ve got some existing equipment, it is very difficult to just keep adding additional equipment to perform AI, because, like you said, many times the AI is processing a huge amount of data, whether that’s images or other high-bandwidth sensor data. Sometimes it’s really just the network that is the constraint, even just the local network, never mind going to the cloud, that would prevent you from adding AI to your system. So the fact that you have a very IT-friendly, standards-based platform that can run Windows, can run Linux, can host all kinds of different virtualized environments, and can bring things together on a single platform makes it much easier to add these new capabilities, because you don’t have to have an old-machine-A, new-machine-B scenario.

Like you said, you can just run everything on one machine, port your existing software to a new box, and then start adding all the new things you want. And in addition to consolidating those workloads, you now have a platform that is very well suited to consolidating other kinds of workloads you might have in nearby machines, or taking things that are currently running in the cloud and bringing them out to the edge.

You have a lot of options there. So, I’m curious: I know that you’re one of the first companies to come to market with a solution for the industrial market based on the 12th Gen Intel® Core processors. How is that possible, and how are you working with Intel to deliver these solutions to market quickly, and to bring your customers not just early solutions, but really the most advanced kinds of solutions?

Kenny Chang: We have had a partnership with Intel for a very, very long time. ASRock Industrial is a leading company for industrial motherboards and systems, especially for industrial applications. As you mentioned earlier, it is very good for us to get early samples through the EA (early access) program with Intel. We also receive lots of performance and technology updates, such as tools and architectures that help us address vertical markets, such as Edge Insights for Industrial and Edge Controls for Industrial. They bring more insight into how to address customer needs and how to help our customers, especially systems integrators, reduce their development time.

They can just focus on what they are good at. They don’t need to waste time dealing with hardware and software integration; they just put their application onto the box, and that’s done. So they save lots of development and working time, and they can get a quick-win solution.

Kenton Williston: So, Kenny, I’m interested: you’re talking about how your close relationship with Intel, the early-access relationship, and access to their roadmap all really help you build application-ready boxes. Can you give me an example of what some of the features might be for some of the solutions you have available now using the latest Intel® Core processors?

Kenny Chang: Well, that’s a good question. Initially, the feature was workload consolidation. More precisely, we can say it is the middleware. What we did for customers, just like the AI AOI case I mentioned before: we put the virtualization middleware KVM onto our hardware box, and we know how to enable the shared memory and then turn it into an API for our customers. So if a customer would like to use this solution, they can just buy our box with this stack installed in the system, open the box, put on their software application, and it’s up and running quickly.

Kenton Williston: Yeah, that’s really interesting. So, it sounds like what you’re telling me is that ASRock goes beyond simply putting the hardware together and sending it to your customers; you actually offer a certain level of services to provide the appropriate setup, middleware, and so on. So when the system arrives at the customer, it’s ready for them to start putting their software on it, and they don’t have to think about those things. Do I have the right idea there?

Kenny Chang: I think it’s at a very beginning stage. It’s an optional item: if a customer needs this, we can do such a service for them. Right now it will take some time to educate customers on the idea before they adopt the solution.

Kenton Williston: Got it. That makes sense. Speaking of education, I wanted to touch on something we haven’t talked about yet. Some of the other new features of the platform are new hardware security features. Do you think these will be important for your customers as well?

Kenny Chang: Well, yes, it’s very, very important. Cybersecurity is a hot topic all over the world. In most cases it’s happening in IT, but right now, as we introduce the industrial IoT into industrial automation, there are lots of OT devices, and they are very vulnerable.

Let me bring up one case I have, about a 5G smart pole in a smart city. The smart pole is integrated with a lot of devices: not only lighting for the streets, but also sensors for air quality, and smart cameras to monitor the overall traffic in the city. We provided our Alder Lake-S platform solution for the smart pole, working as the edge server.

All the camera data goes into the edge server for image classification or any rule-based checking. But all this data is very sensitive. The benefit of adopting the Intel processor is that it integrates Intel® Software Guard Extensions, which we call SGX, as well as PTT technology. So they can secure the data at the hardware level. And systems integrators can leverage this very well, putting their software on top and ensuring security is in place.

Kenton Williston: That all makes total sense. You’re raising some good points about Intel® Software Guard Extensions, or SGX, and the PTT features that are in there as well. It’s very important. Like you said, many more systems are coming under attack, and earlier I was mentioning how it’s a good thing that IT and OT are converging, and that systems that used to run very specialized code are now much more often becoming, by necessity, more IT-friendly systems that run very familiar operating systems.

This is a good thing because it helps innovation move forward more quickly, but it also makes these systems more open to attack than they used to be. I like this example you gave of a smart pole, where you might have lighting and cameras and other sensors, and it’s very useful to have real-time visibility into what’s happening in the city, whether it’s air pollution or traffic levels or whatever.

But of course, especially with cameras, there’s always a risk that people could get access to video feeds that they really shouldn’t have access to. And so it is very important to keep in mind security, and of course just the very fact you’re talking about something that’s connected back to government systems.

There can be all sorts of very subtle ways, once you’ve gotten into a system, of getting at very sensitive data. There have been cases of people, for example, showing how you can access, say, a printer, and then in just a couple of jumps you’re into a very sensitive database. So security is very, very important, and I’m glad we had a chance to talk about it. We’ve covered a lot of different topics. I’m wondering if there is anything I didn’t ask you about that you would like to add.

Kenny Chang: I just want to add that ASRock Industrial is not only a hardware provider; we also think about, and work with, Intel verticals to help customers get the best solution for them. That’s our goal for our end customers. And through this co-creation, we also help make the world much better than ever.

Kenton Williston: I love that. That’s great. Well, Kenny, I want to thank you again for joining us today. I really appreciate your time.

Kenny Chang: Yeah. Thank you.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from ASRock Industrial, follow them on LinkedIn at ASRock-Industrial, that’s ASRock-Industrial.

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time, with more ideas from industry leaders at the forefront of IoT design.

Democratizing AI: It’s Not Just for Big Tech Anymore

AI and computer vision have become commonplace in manufacturing. In manufacturing, if you can see the data, there’s something you can do with the data. But not every industry has that mindset, or the luxury of data scientists on-site. That shouldn’t stop the many new and exciting use cases for AI—from medicine to traffic to agriculture—from taking advantage of these tools.

We talk with Elizabeth Spears, Co-Founder and Chief Product Officer at Plainsight (formerly known as Sixgill), a machine learning lifecycle management provider for AIoT platforms, and with Bridget Martin, Director of Industrial AI and Analytics of the Internet of Things Group at Intel®, about the accessibility and democratization of AI, and how these factors are key to getting the most out of this crucial technology—for companies and, ultimately, for consumers.

Where do things stand right now in terms of new applications in the manufacturing space?

Bridget Martin: There are two different perspectives. Some manufacturers we would consider more mature—where there are automated compute machines already on the factory floor, or in individual processes on the manufacturing floor. They are automating processes, but also—and this is critical when we’re talking about AI—outputting data. And that could be the metadata of the sensors, or of the processes that that automated tool is performing.

These manufacturers are really looking to take advantage of the data that’s already being generated in order to predict and avoid unplanned downtime for those automated tools. This is where we’re seeing an increase in predictive maintenance–type applications and usages.

But then you also have a significant portion of the world that is still doing a lot of manual manufacturing applications. And those less mature markets want to skip some of the automation phases by leveraging computer vision—deploying cameras to identify opportunities to improve their overall factory production, as well as the workflow of the widgets going through the supply chain within their factories.

Can you talk about some new applications making use of this technology?

Elizabeth Spears: A really cool one, which is just becoming possible, is super resolution. One of the places where they’re researching its application is in using less radiation in CT scans. Think of one of those FBI investigation movies where they’re looking for a suspect, and there’s some grainy image of a license plate or a person’s face. And the investigator says, “Enhance that image.” All of a sudden it becomes this sharp image, and they know who did the crime. That technology really does exist now.

Another example is in simulating environments for training purposes in cases where the data itself is hard to get. Think about car crashes, or gun detection. In those cases, you want your models to be really accurate, but it’s hard to get data to train your models with. So just like in a video game, where you have a simulated environment, you can do the same thing to create data. Companies like Tesla are using this for crash detection.

It’s really across industries, and there’s so much low-hanging fruit where you can really build on quick wins. My favorite cases around computer vision are the really practical ones. And they can be small cases, but they provide really high value.

One that we’ve worked on is just counting cattle accurately—and that represents tens of millions of dollars in savings for that company.

How can organizations recognize their AI use cases and leverage computer vision?

Elizabeth Spears: I feel like we often talk about AI as though an organization has to go through a huge transformation to take advantage of it, and it has to be this gigantic investment of time and money. But what we find is that when you can implement solutions in weeks, you can get these quick wins. And that is really what starts to build value.

For us it’s really about expanding AI through accessibility—AI isn’t just for the top-five largest companies in the world. And we want to make it accessible not just through simplified tools but also simplified best practices. When you can bake some of those best practices into the platform itself, companies can have a lot more confidence using the technology. We do a lot of education in our conversations with customers, and we talk to a lot of different departments; we’re not just talking to data scientists. We like to really dig into what our customers need, and then be able to talk through how the technology can be applied.


Hiring machine learning and data science talent is really difficult right now. And even if you do have those big teams, building out an end-to-end platform to be able to build these models, train them, monitor them, deploy them, keep them up to date, and provide the continuous training that many of these models require to stay accurate—that all requires a lot of different types of engineers.

So, it’s a huge undertaking—if you don’t have a tool for it. That’s why we built this platform end-to-end, so that it would be more accessible and simpler for organizations to be able to just adopt it.

What are some of the challenges to democratizing AI and what is Intel® doing to address those?

Bridget Martin: Complexity is absolutely the biggest barrier to adoption. As Elizabeth mentioned, data scientists are few and far between, and they’re extremely expensive in most cases. This concept of democratizing AI and enabling, say, the farmers themselves to create these AI-training pipelines and models, and to deploy, retrain, and keep them up to date—that’s going to be the holy grail for this technology.

We’re talking about really putting these tools in the hands of subject-matter experts. It gets us out of the old cycle—take a quality-inspection use case—where you have a factory operator who would typically be manually inspecting each of the parts going through the system. When you automate that type of scenario, typically that factory operator needs to be in constant communication with the data scientist who is developing the model so that the data scientist can ensure that the data they’re using to train their model is labeled correctly.

Now, what if you’re able to remove multiple steps from that process and enable that factory operator or that subject-matter expert to label that data themselves—give them the ability to create a training pipeline themselves. It sounds like a crazy idea—enabling non–data scientists to have that function—but that’s exactly the kind of tooling that we need in order to actually properly democratize AI.

Because when you start to put these tools in people’s hands, and they start to think of new, creative ways to apply those tools to build new things—that’s when we’re really going to see a significant explosion of AI technologies. We’re going to start to see use cases that I, or Elizabeth, or the plethora of data scientists out there, have never thought about before.

Intel is doing a multitude of things in this space to enable deployment into unique scenarios and to lower the complexity. For example, with Intel® Edge Insights for Industrial we help stitch together an end-to-end pipeline, as well as provide a blueprint for how users can create these solutions. We also have configuration-deployment tools that help system integrators install technology. For example, if an SI is installing a camera, our tools can help determine the best resolution and lighting. All these factors have a great impact on the ability to train and deploy AI pipelines and models.

How can organizations go about starting this journey?

Elizabeth Spears: There are so many great resources on the internet now—courses and webinars and things like that. There’s a whole learning section on the Plainsight website, and we do a lot of “intro to computer vision” events for beginners.

But we also have events for experts—where they can find out how to use the platform, and how to speed up their process and have more reliable deployments. We really like being partners with our customers. So, we research what they’re working on, and we find other products that might apply as well. We like taking them from idea all the way to a solution that’s production ready and really works for their organization.

How is Intel working through its ecosystem to enable its partners, end users, and customers?

Bridget Martin: One of my favorite ways of approaching this is to really partner with that end customer to understand what they’re ultimately trying to achieve, and then work backward. Also, one of the great things about AI is that you don’t have to take down your entire manufacturing process in order to start playing with it. It’s relatively easy to deploy a camera and some lighting and point it at a tool or a process. And so that is really going to be one of the best ways to get started.

And of course, we have all kinds of ecosystem partners and players that we can recommend to the end customers—partners who really specialize in the different areas that the customer is either wanting to get to, or that they’re experiencing some pain points in.

How does Plainsight address scalability, and how does Intel help make an impact here?

Elizabeth Spears: We look at scale from the start, because our customers have big use cases with a lot of data. But another way you can look at it is to scale through the organization, which really comes back to educating more people. We’ll talk to a specific department within a company, and someone will say, “I have a colleague in this other department that has a different problem. Would it work for that?”

Concerning Intel—because we’re a software solution, Intel’s hardware is definitely one of the places that we utilize them. But they’re also really amazing with their partners—bringing partners together to give enterprises great solutions. 

What do you both see as some of the most exciting emerging opportunities for computer vision?

Bridget Martin: One, I would say, is actually that concept of scalability. Not just scaling to different use cases, but also scaling to different hardware—there’s no realistic scenario where there is just one type of compute device involved. I think that’s going to be extremely influential, and really help transform the different industries that are going to be leveraging AI.

But what’s really exciting is this move toward democratization of AI—really enabling people who don’t necessarily have a PhD or specialized education in AI or machine learning to take advantage of that technology.

Elizabeth Spears: I agree. Getting accessible tools into the hands of subject-matter experts and end users, making it really simple to implement solutions quickly, and then being able to expand on that. It’s less about really big AI transformations, and more about identifying all of these smaller use cases or building blocks that you can start doing really quickly, that over time make a really big difference in a business.

Related Content

To learn more about the future of democratizing AI, listen to Democratizing AI for All with Plainsight and Intel® and read Build ML Models with a No-Code Platform. For the latest innovations from Plainsight, follow them on Twitter at @PlainsightAI and on LinkedIn at Plainsight.

 

This article was edited by Christina Cardoza, Senior Editor for insight.tech.

CES 2022: Intel® Launches Revolutionary CPU Architecture

Intel® made big news at CES 2022 with the launch of their 12th Gen Intel® Core Desktop and Mobile processors, formerly known as “Alder Lake”.

What makes these chipsets groundbreaking compared to previous-generation processors? Increasing workload diversity—driven by the demands of IoT, AI, and visual edge computing—calls for enabling technologies that are more flexible. And that new reality demands a brand-new approach to processing.

The 12th Gen Intel® Core processors introduce a hybrid core architecture across Desktop and Mobile SKUs for the first time in x86 history. Here’s what you need to know:

New Hybrid Core Architecture: The Benefits Are in the Benchmarks

The new hybrid architecture is built with both Performance- and Efficient-cores, combined to deliver Intel’s biggest desktop performance gains in more than a decade—without consuming additional power. Want proof? Check out the benchmarks in Figure 1.

12th Gen Intel® Core™ desktop processors compared to 10th Gen Intel® Core™ processors
Figure 1. 12th Gen Intel® Core™ desktop processors’ performance compared to 10th Gen Intel® Core processors, the previous generation in this series for IoT. For workloads and configurations, visit intel.com/PerformanceIndex. Results may vary. (Source: Intel®)

The new processors give retail, healthcare, digital signage, industrial automation, and other edge system designers unprecedented platform control, allowing them to transition seamlessly between top-line productivity and resourceful task completion.

With as many as eight cores of each type—supporting multiple execution threads—developers can consolidate multiple workloads on a single device. For example, a modern POS system could analyze video and run price checks using object recognition algorithms on Performance-cores while the Efficient-cores simultaneously read barcode scans, tally receipts, and accept payment.
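That kind of consolidation can be sketched with CPU affinity. This is only a rough, Linux-specific illustration: the core indices below are hypothetical (actual P-core/E-core numbering varies by SKU), and a real design would typically leave thread placement to the OS scheduler and Intel® Thread Director.

```python
# Sketch: partitioning workloads across core sets with CPU affinity on
# Linux. Core indices are hypothetical; production designs would
# normally rely on the OS scheduler and Intel® Thread Director.
import os

P_CORES = set(range(0, 8))   # hypothetical Performance-core indices
E_CORES = set(range(8, 16))  # hypothetical Efficient-core indices

def pin_current_process(cores):
    """Restrict the calling process to the requested cores, clipped to
    the cores this machine actually exposes."""
    available = os.sched_getaffinity(0)        # cores we may legally use
    target = (cores & available) or available  # fall back if none match
    os.sched_setaffinity(0, target)
    return os.sched_getaffinity(0)

# A video-analytics worker might request P_CORES, while barcode reads,
# receipt tallies, and payment handling run on E_CORES.
```

In the POS scenario, each worker process would call `pin_current_process` at startup with the core set matching its workload class.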

Check out this video highlighting all the goodness of the 12th Gen Intel Core processors, from performance and graphics to media and AI.

Digital Signage Struts Its Stuff

Want more proof? A remarkable Intel IoT Video Wall Solution Demo consists of four ViewSonic VP3268A-4K LCD displays. Behind the scenes is a media player powered by a 12th Gen Intel® Core i9 Desktop Processor Reference Validation Platform (RVP).

In a demo at CES, the 12th Gen Intel Core Desktop processors’ four video outputs are synchronized into a continuous large image of a video playlist that spans the four-screen display at 4K resolution. But there’s a lot more going on beneath the surface. New IoT features like Genlock and Pipelock will be key components of the video wall designs of the future.

And although they’re obviously tailored to digital signage, these features represent just a fraction of the capabilities introduced on 12th Gen Intel Core processors.

Available Now for the IoT Edge

Whether you’re designing a video wall, test equipment, medical imaging system, or machine vision solution, off-the-shelf, long-lifecycle 12th Gen Intel Core processor building blocks are now available from Intel partners. These subsystems can jump-start your next design, but they aren’t demo platforms. They’re the real, production-ready deal.

For example, manufacturers are leveraging the new 12th Gen Intel Core Desktop processors in the ASRock Industrial iEPF-9010S to enable workload consolidation and data acceleration in automated optical inspection systems. Others are turning to the SECO CHPC-D80-CSA, a COM-HPC client module that integrates hardware security and time-sensitive networking (TSN) alongside the 12th Gen Intel Core processors’ high-performance graphics processing.

In semiconductor testing, the Advantech SOM-C350 COM-HPC Client module is setting new standards for data throughput with 12th Gen Intel processors that combine PCIe Gen 5 and DDR5 support. High-end sockets also benefit from devices based on the new chipsets, such as the Avnet Embedded C6B-ALP COM Express Type 6 module and BCM Advanced Research MX670QD Mini-ITX motherboard. These have found homes in robotic surgery and medical imaging, respectively.

Digital signage like the demo video wall can be built on solutions such as Shenzhen Decenta Technology’s new Mobile Series-based OPS Module. It sports Intel® Wi-Fi 6E and USB4 interfaces, plus local AI inferencing via the Vector Neural Network Instructions (VNNI). AI workloads on all 12th Gen Intel Core processors can also be accelerated by the Intel® OpenVINO Toolkit.
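VNNI-class instructions accelerate the int8 multiply-accumulate at the heart of quantized inference. The pure-Python sketch below illustrates that arithmetic only; toolkits such as OpenVINO handle quantization and instruction dispatch automatically, and the scale values here are arbitrary.

```python
# Sketch of the int8 multiply-accumulate arithmetic that VNNI-class
# instructions accelerate in hardware. Illustration only: real
# toolchains quantize models and dispatch these instructions for you.

def quantize(values, scale):
    """Symmetric int8 quantization: real value ~ scale * int8 value."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a_q, b_q, a_scale, b_scale):
    """Integer dot product with one float rescale at the end: the core
    operation of a quantized neural-network layer."""
    acc = sum(x * y for x, y in zip(a_q, b_q))  # int32-style accumulator
    return acc * a_scale * b_scale

a = [0.5, -1.0, 0.25]
b = [2.0, 0.5, -4.0]
approx = int8_dot(quantize(a, 0.01), quantize(b, 0.05), 0.01, 0.05)
exact = sum(x * y for x, y in zip(a, b))
```

Because the inner loop runs entirely in integer arithmetic, hardware can fuse the multiplies and adds; that fusion is what the instruction set contributes.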

Any developer can use the Intel® oneAPI Toolkit to harness the hardware-accelerated features mentioned above from the friendly confines of their software stack. And it’s easy to get these stacks initialized thanks to native support for robust software offerings, including the UEFI BIOS Slim Bootloader, hypervisors, multiple Linux distributions, and the Microsoft Windows 10 IoT Enterprise 2021 Long-Term Servicing Channel (LTSC).

It’s all ready for you out of the box.

A New Era of Edge Computing Starts Now

As more, and more varied, objects gain electronic intelligence, workloads are changing, and the emphasis is shifting from having the most processing to having the right processing. Developers have been waiting for hardware that offers a path forward, and 12th Gen Intel Core processors deliver.

To learn more, check out the 12th Gen Intel Core Desktop processors and 12th Gen Intel Core Mobile processors product briefs.

Related Content

Read about advances in visual computing driven by the Intel OpenVINO Toolkit in Intel Innovation: The Event Designed by Developers for Developers.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. For more complete information about performance and benchmark results, visit intel.com/PerformanceIndex.

IoT Virtualization Jump-Starts Collaborative Robots

The next evolution in manufacturing automation is being conceptualized around cobots: collaborative robots capable of laboring safely alongside human workers. But while their advantages seem obvious, the design of these complex systems is anything but.

Yes, most of the enabling technologies required to build a cobot exist today. And many are already mainstream, from high-resolution cameras that let robots see the world to multicore processors with the performance to locally manage IoT connectivity, edge machine learning, and control tasks.

The challenge is not so much the availability of technology as it is the process of bringing it all together—and doing so on a single platform in a way that reduces power consumption, cost, and design complexity. A logical starting point to achieve this would be replacing multiple single-function robotic controllers with one high-end module. But even that’s not so simple.

“Collaborative robots have to perform multiple tasks at the same time,” says Michael Reichlin, Head of Sales & Marketing at Real-Time Systems GmbH, a leading provider of engineering services and products for embedded systems. “That starts with real-time motion control and goes up to high-performance computing.”

“The increasing number of sensors, interactivity, and communication functionality of collaborative robots demands versatile controllers capable of executing various workloads that have very different requirements,” Reichlin continues. “You need to have these workloads running in parallel and they cannot disturb each other.”

This is where things start to get tricky.

IoT Virtualization and Collaborative Robots in Manufacturing

One of the benefits of multicore processing technology is that software and applications can view each core as a standalone system with its own dedicated threads and memory. That’s how a single controller can manage multiple applications simultaneously.

Historically, the downside of this architecture in robotics has been that viewing cores as discrete systems doesn’t mean they are discrete systems. For example, memory resources are often shared between cores, and there’s only so much to go around. If tasks aren’t scheduled and prioritized appropriately, sharing can quickly become a resource competition that increases latency, and that’s obviously not ideal for safety-critical machines like cobots.


Even if there were ample memory and computational resources to support several applications at once on a multicore processor, you still wouldn’t be able to assign just one workload to one core and call it a day. Because many applications in complex cobot designs must pass data to one another (for example, a sensor input feeds an AI algorithm that informs a control function), there’s often a real need for cores and software to share memory.

This returns us to the issue of partitioning, or as Reichlin put it previously, the ability for workloads to run in parallel and not disturb one another. But how do you construct a multi-purpose system on the same hardware that can safely share computational resources without sacrificing performance?

The answer is a real-time hypervisor. Hypervisors manage different operating systems, shared memory, and system events to ensure all workloads on a device remain isolated while still receiving the resources they need (Figure 1).

Figure depicting the Real-Time Hypervisor’s multi-core and multi-OS systems.
Figure 1. The Real-Time (bare metal) Hypervisor provides hardware separation and rigid determinism. (Source: Real-Time Systems GmbH)

Some hypervisors are software layers that separate different applications. But to meet the deterministic requirements of cobots, bare metal versions like the Real-Time Hypervisor integrate tightly with IoT-centric silicon like the Intel Atom® x6000E series and 11th Gen Intel® Core processors.

The Atom x6000E and 11th gen Core families support Intel® Virtualization Technology (Intel® VT-x), a hardware-assisted abstraction of compute, memory, and other resources that enables real-time performance for bare-metal hypervisors.
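On Linux, one simple first check for Intel VT-x support is the `vmx` flag in `/proc/cpuinfo`. The sketch below is Linux-specific and only a starting point; a real hypervisor installer would also verify that virtualization is enabled in firmware.

```python
# Sketch: checking whether a Linux platform exposes Intel® VT-x by
# looking for the "vmx" CPU flag in /proc/cpuinfo. Linux-specific, and
# only a first check; firmware must also have virtualization enabled.

def has_vtx(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the first 'flags' line lists the vmx feature."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "vmx" in line.split()
    except OSError:
        pass  # file missing or unreadable, e.g., on non-Linux systems
    return False
```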

“To keep the determinism on a system, you cannot have a software layer in between your real-time application and hardware. We do not have this software layer,” Reichlin explains. “Customers can just set up their real-time application and have direct hardware access.

“We start with the bootloader and separate the hardware to isolate different workloads and guarantee that you will have determinism,” he continues. “We do not add any jitter. We do not add any latency to real-time applications because of how we separate different cores.”

Data transfer between cores partitioned by the RTS Hypervisor can be conducted in a few ways depending on requirements. For example, developers can either use a virtual network or message interrupts that send or read data when an event occurs.

A third option is transferring blocks of data via shared memory that can’t be overwritten by other workloads. Here, the RTS Hypervisor leverages native features of Intel® processors like software SRAM available on devices that support Intel® Time-Coordinated Computing (Intel® TCC). This new capability places latency-sensitive data and code into a memory cache to improve temporal isolation.
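The shared-memory option can be pictured as a generic POSIX-style handoff, sketched below with Python's standard library. This is a loose analogy, not the RTS Hypervisor's actual API: the hypervisor configures its protected regions itself, and the region name, size, and payload here are purely illustrative.

```python
# Generic POSIX-style shared-memory handoff, as an analogy for the
# hypervisor's shared-memory transfer option. Name, size, and payload
# are illustrative only.
from multiprocessing import shared_memory

# Producer side: create a named region and write a sensor sample into it.
region = shared_memory.SharedMemory(create=True, size=64)
payload = b"temp=72.5;ts=1700000000"
region.buf[: len(payload)] = payload

# Consumer side (normally another OS partition): attach by name and read.
view = shared_memory.SharedMemory(name=region.name)
received = bytes(view.buf[: len(payload)])

view.close()
region.close()
region.unlink()  # the producer releases the region when done
```

The hypervisor's added value over this plain mechanism is enforcement: no other workload can overwrite the region, and access does not disturb the real-time partitions.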

Features like software SRAM are automatically leveraged by the Real-Time Hypervisor without developers having to configure them. This is possible thanks to years of co-development between Real-Time Systems and Intel®.

Hypervisors Split Processors So Cobots Can Share Work

The rigidity of a bare metal, real-time hypervisor affords design flexibility in systems like cobots. Now, systems integrators can pull applications with different timing, safety, and security requirements from different sources and seamlessly integrate them onto the same robotic controller.

There’s no concern over interference between processes or competition for limited resources as all of that is managed by the hypervisor. Real-Time Systems is also developing a safety-certified version of their hypervisor, which will further simplify the development and integration of mixed-criticality cobot systems.

Reichlin expects industrial cobots ranging from desktop personal assistants to those that support humans operating heavy machinery will become mainstream over the next few years. And most will include a hypervisor that allows a single processor to share workloads, so that the cobot can share the work.


This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

IoT Predictions: What to Expect in 2022 and Beyond

Martin Garner

[podcast player]

What can we expect from the IoT world in 2022? If the past two years taught us anything, it is that we cannot prepare for everything. But some trends and technologies can help guide our way. Consider how the rise of AI has pointed to more intelligent IoT solutions—making the tools easier for everyone to use. This, in turn, could result in stronger regulations or efforts for trustworthy AI.

Or think about how the move to a remote workforce as well as increased virtual care services point to a broader use of 5G to support home broadband and ensure connectivity going forward.

And there’s still so much more to look forward to. In this podcast, we talk about lessons learned in 2021, IoT technology trends to pay attention to in 2022, and how the IoT landscape will continue to evolve beyond next year.

Our Guest: CCS Insight

Our guest this episode is Martin Garner, COO and Head of IoT research for CCS Insight, where he focuses on the commercial and industrial side of IoT. Martin joined CCS Insight in 2009 with the desire to work with a smaller, independent firm focused both on quality and clients. Every year, CCS Insight publishes predictions on network technology, telecoms, and the enterprise. This is the 15th year that CCS Insight is publishing its predictions.

Martin answers our questions about:

  • (3:01) CCS Insight predictions in 2021: What went wrong and what went right
  • (8:06) Technology trends and predictions for 2022
  • (14:57) How the role of cloud players will evolve moving forward
  • (17:16) Where cloud-like experiences in on-premises infrastructure will fit into the landscape
  • (21:08) Where AI, machine learning, and computer vision are going in the future
  • (26:16) Efforts and impacts of democratizing AI
  • (28:01) How to address AI concerns
  • (30:32) Ongoing transformation of the healthcare industry
  • (34:36) The future of IoT and the intelligence of things

Related Content

To learn more about the future of IoT, read CCS Insight’s IoT predictions for 2022. For the latest innovations from CCS Insight, follow them on Twitter at @ccsinsight and on LinkedIn at CCS-Insight.

 

This podcast was edited by Christina Cardoza, Senior Editor for insight.tech.


Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode, we talk to a leading expert about the latest developments in the Internet of Things. Today, our guest is Martin Garner, the COO and Head of IoT research at the analyst firm CCS Insight.

They’ve just put out their predictions for 2022 and it is a fantastic read. You can actually go check it out for yourself on insight.tech right now. I am really looking forward to getting into the details of these predictions. So, Martin I would like to welcome you to the podcast.

Martin Garner: Thank you very much.

Kenton Williston: Tell me about your role at CCS Insight and what brought you to the firm?

Martin Garner: Sure, well, I have two roles at CCS Insight. One is that I’m Head of IoT research, where I focus mostly on the commercial and industrial side of IoT. I’m also COO here, and I joined CCS Insight in 2009 after Ovum was sold to Informa Group and later became Omdia. I was chief of research there, and the attraction of coming to CCS Insight was that it’s a smaller firm, but very quality and client focused and independent, and, obviously being smaller, had very good growth opportunities. And I’m happy to say those are all still true 12 years later.

Kenton Williston: Excellent, so on that note, I’d like to know a little bit more about CCS Insight itself and its annual predictions. So what is this beast?

Martin Garner: Well, CCS Insight is a medium-sized analyst firm covering quite a lot on the consumer side, very strong on the mobile technologies and devices; quite a lot on the telecoms side itself, the networks and the network technologies; and also strong on the enterprise side, how they use a lot of the technologies, ranging from what happens in the workplace through to digital transformation of operations in the industrial world. And the predictions are something that we do each year. Last year, in 2021, that was our 14th run of predictions. Now, several analyst firms do these. What makes ours a little bit different is that we deliberately do it as a complete cross-company thing across all topic areas. And also, all staff contribute to predictions. Some of our best ones historically have come from people who aren’t analysts at all. The other thing is that we carefully track what we get right and what we get wrong, and we publish some of that each year. And the aim is to be quite transparent about that and to improve what we’re doing.

Kenton Williston: So one of the things I really like in what you just said is going back and revisiting your prior years’ predictions to see how things played out. That’s really great. The times we’re in have been very difficult to predict. I don’t think there’s any doubt about that. So very curious how the predictions for 2021 played out and what went right, what went wrong?

Martin Garner: Yeah, you’re right. That was a particularly interesting year because it was the first year we were in pandemic conditions. Lots to think about, lots to speculate about. We got a few that we were quite pleased we got right. One was that COVID would accelerate adoption of robots, automation, and IoT across sectors. Now it didn’t initially look like that. There was a pause in investment, but it did then accelerate as people realized they needed this stuff to keep their operations going. Another one was that 2021 would be the year of vertical clouds. And we have since then seen big launches from all of the major players here and that plays into what we’re doing in IoT. And another one was that security and privacy in AI and machine learning would become much stronger areas of concern. I think it’s now widely understood that machine learning is quite a big attack surface and it could be really hard to detect a hack, at least initially.

Now we did get a few wrong that year as well. So we did predict that somebody would buy Nokia and no one did. We also predicted that the regulation of the big tech players would slow down and countries would take more time. Actually in China it’s grown much stronger much more quickly. And that’s being echoed to some extent, both in the US and in Europe. So actually that’s moving faster than we expected. And then there’s a few that we’re waiting on, which were longer-term predictions. So for example, a big cloud player will offer the full range of mobile network solutions by 2025. Now we have seen some big moves in 5G from AWS, from Microsoft, and from Google, but nothing yet on quite that scale. Another one was that tiny AI would move up to 20% of all AI workloads. Now this is mostly an IoT thing where small edge devices really need small AI. There is a lot going on, especially in IoT and the role is growing, but we’re not at that level yet.

Kenton Williston: So one thing you mentioned there I’d love to get a little clarification on is what do you mean by vertical clouds?

Martin Garner: Sure, this is a cloud service. Many cloud services have been offered as a purely horizontal infrastructure thing, like data storage, which everybody has a need for, but actually each sector stores different types of data with different labels, different metadata, different language used even, and they measure things in different ways across sectors, even down to things like the impact of carbon footprints within the sector and so on. And what a number of the offerings from the cloud players are now doing is packaging those up in a way that’s suitable for manufacturing or for automotive or for retail or for healthcare, those kind of things, and deliberately fixing them in the right language, the right constructs, the right metadata and so on, so that they can be more easily adopted directly into specific verticals. It’s one thing I think to launch those services, it’s something else to get them all adopted across those sectors. That’s just a long road to get a big share of that going around the world. And we’re in that stage now.

Kenton Williston: Fascinating, and the funny thing is I think there’s been a lot of really interesting activity both on the cloud side, with everything you’re describing about very industry-specific, use case-specific activity happening in the cloud, and also just a tremendous amount of activity happening at the edge over the last year, and I think it will be pretty important going forward. So as we’re recording this, for example, just yesterday Intel® announced its latest Core processors. Some of the things that are notable there are that they’re offering a tremendous upgrade in performance for the edge, as well as considerable advances in power efficiency and quite a bit of added AI capability and graphics, just all kinds of things that are happening. And you mentioned that some of the things we’re waiting on, so to speak, are AI at the edge, and there’s just so, so much of that happening at the edge. So it’s, I think, a really exciting transitional time right now.

And this is probably a good opportunity for me to mention, since I said something about Intel and its fabulous 12th Gen Core processors, that the insight.tech program and this podcast itself are Intel productions. So full disclosure there. But that leads me to looking forward into this next year, with all this tremendous change that is happening in the technology space: What is on your mind for 2022?

Martin Garner: I think overall we have 99 predictions for 2022 and beyond. And we obviously can’t go through all of those here. What we did for this podcast is we did a cut of those that are relevant in some way for the IoT community, and we’ve packaged that up in a report which is available as a download from insight.tech. And I’ll just highlight a few that caught my attention, if that’s okay. So there were a few around the follow on from COVID, and a couple were that by 2025 there’ll be somewhat less use of office space in the developed world. We reckon it’ll be down about 25% by then. Also as a sort of balancing factor, there’ll be much more use of 5G as an additional home broadband for home working. We think maybe 10% of households will have that. I think we’ve all had the experience where you’re trying to do a Zoom call or a Teams call or a podcast and your broadband goes off, and it’s really, really frustrating.

So more backup there. We also saw, coming out of last year, much higher attention on sustainability, and we really think that clean cloud is going to be something of a battlefield this year, partly in cloud services. We also think that IoT can really benefit from using sustainability in its marketing. IoT is great news for sustainability, generally speaking, and we’re not mostly making enough use of that. We also think sustainability will be built into the specifications for 6G, when we get there. And then there’s quite a lot around IoT itself. So, much greater focus on software, machine learning, shift towards higher intelligence of things. Much greater linkage between smart grid and wide area networking. We actually expect to see a pan-utility, where one company is both an energy provider and a network provider doing both by 2025, because those two networks are becoming remarkably similar.

And then there’s also the arrival of antitrust cases in IoT, as a lot of IoT suppliers really like to lock down their maintenance contracts, and that’s attracting antitrust attention. And we think that people will need to move to an as-a-service–type business model in order to avoid antitrust attention. And then as you mentioned, lots and lots on edge computing and mobility. We think the two are going to cause quite a big change in terms of which suppliers do what things across enterprise, telecoms, computing, and internet services. We expect to see all the boundaries changing over the next few years, new players taking different roles and so on. So we think there’s a lot of change, a lot to look forward to, and of course some threats in there for traditional suppliers, but super interesting few years.

Kenton Williston: Yeah, for sure. So some of the things that stand out to me, and boy, there’s a lot to chew on here. I think you’re right about sustainability being a really big deal going forward. And I totally agree that we’ll see it everywhere. Myself, for example, recently taking a stroll down a street here in Oakland where I live, and I noticed that the lights were brightening as I was taking my evening stroll as I walked past them. Even just these little simple things can make a huge difference in energy consumption, and of course there’s much more sophisticated use cases beyond that.

Martin Garner: What we find is that with IoT, you’re often monitoring things that have never really been monitored before, like streetlights. And so the savings you can make by doing more intelligent things with them are just enormous.

Kenton Williston: Yeah, absolutely. One of the things that stands out to me is this idea of linking the smart grid with networking. And we actually did a podcast recently with ABB talking about this very idea. We need to have so many intelligent end points in the 5G network, and presumably going forward in the 6G networks, to support all of these small cells and private networks.

And it’s really similar for the smart grid where you need to push intelligence out to the edge to achieve sustainability and resilience. And of course, both applications need a combination of power and communication. So why not put the two together?

Martin Garner: I think that’s right. It’s the decentralization which is the big commonality, plus the kind of cloud architecture that they’re building in. So in the energy grid, you’ve got now lots and lots of smaller energy generators through solar and wind farms and so on at the edge, and they’re pushing energy into what used to be a very centralized system. And it’s an exact parallel with IoT. We’re generating so much data at the edge thanks to IoT, and we’re pushing that into the network, where we used to depend mostly on things like YouTube being streamed from the middle outward. And so it’s a big shift in both cases, and they’re very similar architecturally and topologically and we expect much more convergence across those two.

Kenton Williston: And I think that speaks also very much to the point you made about big changes are happening now in the who does what. So again, just thinking about some of the recent conversations we’ve had in our podcast series, we’ve had a conversation with Cisco, which I believe we’ll publish after this podcast, where they were talking about their efforts in the rail space with national rail transport there in the UK, and how the complexity of what needs to be done and the speed at which things need to be delivered has led them to work very closely with companies who in the very recent past they would’ve considered their competition.

Martin Garner: Right, and we also think that as we get a cloud architecture in a 5G network, then where is the boundary between the cloud where the data lives, and the cloud where you’re now generating the data which is part of the 5G network? I think it’s going to become a really fuzzy boundary, and that creates opportunities for specialist players who might only do edge cloud things and feed that into a telecoms network, or the other way around. We just think the whole who does what, and where are the boundaries, is going to become a much more sophisticated picture than we’ve had before.

Kenton Williston: Yes, for sure, and that leads me to a question that I’d like to dig into a little bit more deeply, about the role of the existing cloud players. We’ve got industry leaders like Amazon and Google and Microsoft, and they have undoubtedly greatly benefited from all the activity that’s been happening in our last couple of years, and I’d love to know a little bit more about how you see their role evolving as we move forward.

Martin Garner: It’s a great question. And we’ve already talked a little bit about the verticals, and one area where they’re all pushing very hard, one vertical is telecoms networks, and we’ve mentioned already that they’re doing more in the 5G world, especially as 5G moves from its current consumer phase more into an industrial phase. But I think one example that illustrates it very nicely is that if you are, say, a global automotive manufacturer and you want a 5G private network in all of your manufacturing sites across the globe, who is best placed to provide that? Well I don’t think it’s the local telco, because they’re not global enough. So it’s more likely to be your big cloud provider, and we think they’re going to become a really key distribution channel for some of the telecom products, even if they don’t offer them themselves on their own behalf. And I think this is a good example of where the domains between what the cloud providers do and what the telecom guys do are going to blur quite a lot over the coming years.

Kenton Williston: Yeah, no, that’s all very interesting. And I think your point about 5G is very well said. And of course we just talked recently to your colleague Richard about a CCS Insight prediction in the 5G space, and I think the evolution of that space is going to be incredibly important, both for the role of the cloud provider, and to your point there’s this whole new concept of a private cellular network that has come along with 5G that I think will be very, very important as we move forward. And much in that same vein, as we talked a little bit in that conversation, I’d love to hear more from your perspective how companies like HPE and Dell are starting to offer cloud-like experiences in on-prem infrastructure, and where that will fit into the landscape going forward.

Martin Garner: Yeah, absolutely. And the cloud guys really have had a good run at this as far as we can tell, and we’re not expecting that to change much, but we do expect a bit of a shift going on. Now, I know that some people think the market has a fashion swing anyway between what’s centralized and what’s decentralized; what’s cloud, what’s on-prem. And what we’re now seeing is that Dell, HPE, and other computing providers are offering cloud-like experiences and, this is really important, an as-a-service business model for on-premises computing, so you don’t have to have the big capital costs in order to get started with quite a major computing program. You can do it all on OpEx. We’re also seeing the big cloud providers offering local cloud containers in on-premises devices, AWS Greengrass, Azure Stack, and so on, and they’re offering as-a-service hardware. All of that is reinforcing the shift.

So that whole area is being fueled, and our expectation is that on-premises will, if anything, make a bit of a comeback, and that will tend to slow the growth of public cloud, but definitely not stop it. And that’s a trend that’s not going away. Now we also think that IoT is a really, really big part of this because of the strength of edge computing, the fact that we’re generating such a lot of data in industrial IoT systems, and the fact that we often need to act on that data really quickly in, say, a process-control plant or something like that. We can’t do everything just in the cloud, we need the on-premises side, and as IoT grows and grows and grows, we think that will enhance that swing back towards stronger on-premises computing.

Kenton Williston: Yeah, and I think one of the things that’s interesting there too, is, like you said, there definitely does tend to be a constant pendulum going back really basically to the earliest days of computing as to whether things were centralized or distributed. But I think one of the things a little bit different about our current situation is that the concepts of cloud architecture are showing up everywhere. So of course it’s in the public cloud, but also on-prem systems are starting to look very much like the cloud in terms of things like containers, but so are edge systems. And in fact, I think one of the most important things that’s happening right now from an architectural perspective is moving all of the software that you’re doing to the containerized, as-a-service cloud model so that you can, as these things continue to evolve and the workloads move from one place to another, have the flexibility to deploy these workloads in the public cloud, in a private cloud, on-prem, at the edge, wherever it makes the most sense for whatever you happen to be doing at the moment.

Martin Garner: And you can then manage them centrally. You can do things like optimization across computing stacks. And so it gives you a lot more flexibility.

Kenton Williston: Yes, yes, absolutely. And I think there’s some really good examples of this that are happening in, for example, the machine learning and AI space, where people are doing things like developing the models in the cloud and then bringing down the inference engines, which actually execute the work, into a more local environment, perhaps into an even very lightweight environment at the edge. And I think that’s a good place for me to ask you about where you see those technologies of AI, machine learning, and computer vision going in the future.

Martin Garner: Yeah, and another great question, and this links back to our idea that there’ll be a huge focus on the intelligence rather than the IoT itself. And what we see at the moment is that there’s a very strong focus on the tools for machine learning and AI, making it easier for ordinary engineers in ordinary companies around the world to choose algorithms and to set them up for use, and to build them into your development and your DevOps and things, and have a whole life cycle for your machine learning, just like you do with your other software and so on. But still, I think one of the things we’re seeing is that the machine learning and AI world is full of component technologies. It’s very similar to the way the IoT world was a few years ago.

And so it’s actually really challenging for ordinary people to choose and use systems in that area. So we’re also expecting a lot more focus on providing finished systems for machine learning and AI, quite similar to the way Intel did market-ready solutions for IoT. We may increasingly see some of this finished AI bundled into things like market-ready solutions. Now Intel’s not the only one. Others have made a start on this as well. For example, AWS has Panorama video analytics appliances, which you can just buy on Amazon and plug in, and they come with the algorithms and you can get going really very quickly. They do something similar for predictive maintenance with their Monitron system. We also are expecting the role of smaller and specialist systems integrators to grow a lot here so that they can take on a lot of the training and configuration for you, because it’s still true that the widgets that you make in your factory are not the same as other people use.

And so you need to train the models on images of what you are doing. And there’s just a little caveat here, which is that it’s a large task to get thousands and thousands of specialist systems integrators, who maybe originally trained as installers for surveillance systems and may not be very skilled in machine learning, up to speed in this area. We have to get them comfortable and competent in training machine learning models, because it’s going to be a big part of their role going forward. And then just one thing that follows on. You talked about AI at the edge and so on. One of our other predictions left over from a couple of years ago is that we will move over time toward much more distributed training rather than centralized training.

Kenton Williston: Yeah, those are all very interesting points. And I think one of the things that really strikes me here kind of goes back to the who is doing what, and the fact that we’re seeing technologies just become so pervasive everywhere you look. People have been talking about, for example, this idea of digital transformation for some number of years, to the point that I think it’s kind of worn out its welcome to a certain extent, but it’s true that everything’s being digitized, and especially this year and going forward, people are looking at everything being connected, with distributed intelligence everywhere. But this certainly does introduce a lot of complexity: who’s actually going to do this work of adding this intelligence everywhere, and how do these systems all talk to one another? You mentioned, I think quite rightly, the challenges when you start talking about AI, for example: you’ve got a lot of different point solutions, and how do you get these things all to work with each other?

And we had, for example, a very, very interesting conversation with a company called Plainsight. It’s one of our most recent podcasts here, talking about this very challenge that you’re not just going to have data scientists sitting about in every part of an organization, and in fact many organizations won’t have them at all. So how in the world do you go about actually deploying all these great AI capabilities that are out there right now? And so I agree that having trusted partners that enterprises can rely on like systems integrators will be very important going forward, and I think it will be, to your point, very important for folks who have been doing a lot of the physical installation of things and specialize in those sort of areas to team up with partners who really understand this technology in a deep way so they can go to their enterprise customers and do these very complex installations and integrations where you’re bringing a lot of different things together.

Martin Garner: Yeah, that’s right. And then having done that, you then need to trust it enough to run your operations off it. And that’s a different question, isn’t it?

Kenton Williston: Yes, it absolutely is. And on that point, there are a lot of efforts happening right now to make especially the AI trustworthy and democratized, so that it is more accessible and so that enterprises can put their trust in these systems, and I know that there are significant efforts happening from the likes of IBM, Microsoft, AWS, and Google in these areas. Can you speak a little bit to where you see these efforts going and what kind of impact they will have?

Martin Garner: I think this is one of the most fascinating areas in the whole tech sector at the moment. And for sure those players have been leading the technical development of AI and the tools around it, and things like TensorFlow and PyTorch and so on have had a huge impact in making all of this technology much more available and accessible to people who maybe aren’t fully schooled in the technology behind the scenes. And that really has helped the democratization. But I want to sound just a little bit of a warning here, because we think AI is a special category of technology where small assumptions or biases introduced by a designer or an engineer at the design stage can cause huge difficulties in society. We need more layers of support and regulation in place before we can all be comfortable that it’s being used appropriately and properly and we’re all technically competent and so on.

Kenton Williston: Yeah, for sure. And there’s good examples of that even in just our daily lives. There’s lots and lots of firsthand experiences we’re starting to have of AI not behaving the way we expect it. So I think you’re absolutely right that there is going to need to be a lot of work done to ensure that these systems are being used appropriately by good actors and are doing things that we expect them to do. That’s a pretty tough challenge.

Martin Garner: Yeah, and I think we can start to see what those need to be. And there are already quite a few initiatives across some of these areas. So one key aspect is the formation of ethics groups that are not tied to specific companies. I think we need to take away the commercial-profit focus, and focus purely on the ethics before we can really trust totally in that. It’s also clear that to build strong user trust, we’re going to need a mix of other things like external regulation. When you think about cars and traffic, there’s an awful lot of government regulation that goes with that. But we also need then industry best practices and standards, and we need sector-level certification of AI systems. A bit like crash testing of cars. We’re going to need something like that for AI systems.

Then we need to certify the practitioners. There have got to be professional qualifications for people who develop AI algorithms. Maybe we need a Hippocratic oath and things like that. There are all these layers that we’re going to need. They’re being developed and they’re being introduced, but we’re just not there yet. So one prediction in this area that we have is that 80% of large enterprises will formalize human oversight of their AI systems by 2024. In other words, we’re not just going to leave the AI to get on with it. We’re going to need AI compliance officers, we’re going to need QA departments. It’s going to be a whole layer of quality control that we put in place with human oversight before we let it loose.

Kenton Williston: Yeah, for sure. And one industry that comes to mind in particular here when we’re thinking about needing to take extra care to make sure our technology is doing what we want it to do is the healthcare industry. First of all, kudos to all the folks who’ve been working incredibly hard in the healthcare sector, not just the technologists, but the care providers. This has been such a difficult time. And I really cannot express enough gratitude for all the folks who have really just put everything on the line there. Really, really commendable, and a big part of that from the technology side is things like telehealth and telemedicine and virtual care in general have incredibly quickly accelerated, and I think it’s just an amazing accomplishment by everyone who’s been working on that space. But I think there’s a lot left to do still. And I think there are definitely questions in my mind about how do we keep pushing this forward in a way that’s going to be truly beneficial to everyone, patients and care providers alike.

Martin Garner: Yeah, exactly. And I echo your thanks to the healthcare systems in various countries around the world. The effort they’ve put in, the changes they’ve made, and the support they’ve given are unbelievable, and we owe them a huge debt of gratitude. But just coming back to the technology, there are a few things I think which stand out in terms of IoT and the adoption of machine learning and things like that, which we’re coming onto. So one is that healthcare, it’s very easy to talk about healthcare as if it was one thing, but it’s really not. It’s enormous and diverse. And it’s many, many different areas perhaps with different compliance requirements themselves. Also, I think as you mentioned, it’s been historically a bit slow to change, but COVID has really kick-started the adoption of a lot of new ways of doing things. And so we have made a lot of progress over the last two years, but my sense is there’s still a long, big shopping list of opportunities which are enabled by IoT or machine learning or AI that we haven’t really got going on in a big way yet.

And just one example I’ve come across is tracking machines in the hospital. Trying to find machines in the hospital can waste a lot of valuable time for doctors and nurses. And so hospitals often over-provision: they put one machine per ward, when actually the usage doesn’t really support that. And it’s just wasteful. So if the machines could be tagged and geolocated within the whole hospital, then they become easy to find. We’ve seen examples where that generates capital savings of 10% to 20% on that type of machine, and that can be really significant amounts of money. So we think there’s a lot more to come in this area, and the great news is that the healthcare system, having been through so much change, is now much more ready to adopt new systems.

Kenton Williston: Yeah, for sure. That point you made about the ability to even locate these devices is huge. And even beyond that, we’re seeing some of the stuff we’ve written about on insight.tech, things that are autonomous. I think healthcare settings are an extraordinarily good application for autonomous vehicles. Not in the sense of a car, of course, but self-guided nurse carts and drug-delivery systems and things like that, so that rather than have the providers go find these things, you have them come directly to the providers. I think there’s a really incredible opportunity there.

Martin Garner: Absolutely, along with some interesting challenges: how do they use the lift, or elevator, that takes them up to the fourth floor? None of that comes easy, but it’s a great opportunity. You’re right.

Kenton Williston: Absolutely, and I should mention here too that I mentioned a couple of our earlier podcasts and forthcoming podcasts, and of course our listeners are very strongly encouraged to subscribe to this podcast series so they can keep up with all that. But I would also very strongly encourage our listeners to go check out insight.tech. There’s just a tremendous amount of very in-depth content on all these things we’ve been talking about, not least of which is the report that you yourself have created with these predictions for the coming year. Definitely worth taking a read of that for sure.

Martin Garner: Hope so.

Kenton Williston: I certainly think so. So on that point, I think a good place to wrap our conversation would be talking a little bit about the bigger picture of where you see things trending, and something that caught my attention was the idea that the Internet of Things will become more of an Intelligence of Things. So can you explain what that means to you, and why you think this is happening?

Martin Garner: It’s interesting, isn’t it? I’ve always thought that the label Internet of Things, or IoT, is a bit of a rubbish label, because it really doesn’t describe the full complexity of what’s going on underneath. I think now though, there’s quite a good understanding that IoT is part of digital transformation. You mentioned that’s maybe an overused phrase, but we kind of know what it means. And it’s a big thing that’s going on. IoT is part of it, but actually very few people buy IoT. What they do is they buy a solution to a business issue. And somewhere inside that is IoT used as a technology to make it work. And the real value of IoT is not in the connection that we’ve created with the things, but in how you use the data that you now have access to. And if you think about a smart city, for example, with intelligent traffic management or air quality monitoring, then it’s quite obvious that you are more worried about the data than the connection.

And that’s where the value is. And it’s equally true with smaller systems like computer vision on a production line. You don’t care much about the camera, you do care about what it’s telling you, and that’s the distinction. The trouble is we are now generating so much of this data that we increasingly need lots of machine learning and AI to analyze it, and we have to do it at the edge to do it really quickly and so on. So getting the maximum value out of those systems is going to become all about the intelligence you can apply to the data. Probably a lot of that will be at the edge. Now we think there are going to be three main areas for this: obviously monitoring something is useful, but we still need good analytics to help us focus on the right data and not get distracted.

Controlling something is more useful with suitable intelligence; as we said about streetlights and things like that, we can make huge savings by controlling them better. But actually optimizing is even more useful. And again, with suitable intelligence, we can now optimize a machine, a system, or a whole supply chain, maybe in ways we never could before. So with the Internet of Things, we now understand pretty much what it is and how you go about it, and there’s a lot of opportunity there. But we think that’s going to fade away as a term, and there’ll be much more focus on the intelligence, the way you use it, and the value you get out of exploiting the data you’ve got. Now when we think about specific sectors, like manufacturing or retail or healthcare, there are a few things that jump out. It’s quite easy to get caught up in the detail of getting all of these things connected. Should we use Wi-Fi, should we use 5G, wired connections?

Of course that’s important, but only up to the point where it’s working, and then you can move on. We will need suitable systems for aggregating and analyzing the data: data lakes, analytics, digital twins, machine learning, AI, and so on. And many, many companies are already well down this path, but actually there’s a lot to learn. Each of those areas is quite big and complicated, and you’ll need new technologies and new skills to get really good at them. But then the other bit is that, even assuming you get all of that done, a lot of the value comes from then applying it across the organization and having it adopted in the various systems that you use. And that’s a people issue more than a technology issue. And we’re back then to one of the truisms of digital transformation, which is that success depends on taking people with you more than on the technology that you’re using to make it all work. And for me, that’s a really interesting point. It’s ultimately a people issue.

Kenton Williston: Yeah, I couldn’t agree more. And I think, to return to an earlier point, you were talking about the who does what, and I think it’s going to be incredibly important as we move forward into this increasingly complex world to have an ecosystem of players who you can count on, who understand the kind of challenges that your organization is facing, where the technology is heading, how to deploy these things. And one of the things we’ve talked an awful lot about on the insight.tech site is how to work with folks who’ve traditionally been thought of as merely distributors of technology.

I’m thinking of the Arrows and CENXs and Tech Datas of the world. Their role has changed a lot, to where they’re gaining an incredible amount of internal expertise on their customer needs. They’re able to provide more complete offerings, like the Intel market-ready solutions you mentioned, and are partnering very actively with the sort of systems integrators who are doing the physical installation and have those relationships with the enterprises. And I think it’s just going to be very important for all of these players to come together in a very collaborative way to really unleash all the possibilities we’ve been talking about today.

Martin Garner: I absolutely agree. And I think the ecosystem angle is a really important theme to bring out here. Very few companies can do this on their own, and most depend on working successfully with others. There’s also an interesting organizational point I think for a lot of IoT suppliers. From what I can tell, and I haven’t done a big survey on this yet, but from what I can tell, most IoT suppliers are 80% engineers working on the product and 20% other, which includes HR, marketing, sales, and so on and so on. I kind of think it needs to be the other way around. They need to have a big customer engagement group in there, where if you’re in healthcare, you employ ex-nurses and ex-doctors and what have you, who really understand what’s going on within the customer organizations and who feed that back into the product. And I think most IoT suppliers haven’t really got to that yet, but it’s something we see coming before too long.

Kenton Williston: Absolutely. So with that, Martin, I really want to thank you for your time and your insights today. This has been a really fascinating conversation.

Martin Garner: Well, and thank you too. And thank you to Intel for hosting this and for having me along. It’s always a pleasure dealing with you guys, and I hope it’s been an interesting session.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from CCS Insight, follow them on Twitter at @CCSInsight, and on LinkedIn at CCS-Insight.

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design. 

Smart Digital Signage Powers Up EV Charging Stations

As consumers and governments push for a cleaner, greener environment, sales of electric vehicles (EVs) are soaring and automakers are retooling factories to ramp up supply. What’s missing from the picture is charging stations. There aren’t nearly enough to meet the coming demand, and concerns about setup costs and profitability have made many business owners reluctant to install them.

Adding smart digital signage to charging kiosks resets the business model, allowing companies to recoup their costs while learning more about customers and increasing sales of other products.

“With digital signs, the EV charger becomes the means to an end. As people are getting a charge, they watch streaming content that can make money for the business,” says Chris Northrup, Vice President of Digital Media and Networking Strategies at USSI Global, a broadcast, network, and digital signage solution provider.

EV charging kiosks with digital signage can be used by many types of businesses—not just service stations.

“A kiosk can be any place where people can park for 20 or 30 minutes,” Northrup says. “Quick-serve restaurants, supermarkets, shopping centers, movie theaters, hotels, theme parks—all are good candidates.”

The Key to Success: A Computer Vision System

The USSI Global EV charging kiosks are shaped like gas pumps, with 55-inch, attention-getting digital screens. The color display is designed to remain vivid even in bright sunlight.

But what really makes the screens effective is the computer vision (CV) technology behind them. A pinhole-sized, CV-enabled digital camera embedded in the screen collects footage of charging customers and passersby. AI algorithms running on Intel® processors analyze this information in real time, determining gender, relative age, and mood—and for charging customers, the type of vehicle they’re driving.

To maintain customer privacy, facial images are not stored on computers—only the digital information about them is collected and processed.

The algorithms then trigger sign content likely to appeal to individuals or groups watching the screen. For example, it might show Tesla accessories to a Tesla owner. Others may see demographic-based information about health or fashion products. The system measures how long people watch and whether they turn away, quickly changing content that isn’t deemed effective to something more suitable.

“The signs are smart enough to start playing more of the kind of content that has caught a user’s attention. So if someone is drawn to sports, it will start showing more Nike ads,” says Amanda Flynn, USSI Global Vice President of Customer Relations and Business Development.
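Conceptually, the adaptive loop Northrup and Flynn describe comes down to two steps: pick content matched to what the camera has inferred, then use dwell time as feedback to learn a viewer’s interests. The sketch below illustrates that logic; every name, tag, and threshold in it is an illustrative assumption, not USSI Global’s actual implementation.

```python
from dataclasses import dataclass
from typing import Optional
import random

@dataclass
class ViewerProfile:
    age_group: str           # derived attributes only; no facial
    gender: str              # images are retained, per the privacy
    mood: str                # design described above
    vehicle: Optional[str]   # detected for charging customers

# Hypothetical content library: each item is tagged with the audience
# signals it targets.
CONTENT_LIBRARY = {
    "tesla_accessories": {"vehicle": "tesla"},
    "sports_apparel":    {"interest": "sports"},
    "health_products":   {"age_group": "35-54"},
}

DWELL_THRESHOLD = 3.0  # seconds of attention before content counts as effective

def pick_content(profile, interests):
    # Prefer vehicle-targeted content, then learned interests,
    # then demographic matches, then a random fallback.
    if profile.vehicle:
        for name, tags in CONTENT_LIBRARY.items():
            if tags.get("vehicle") == profile.vehicle:
                return name
    for name, tags in CONTENT_LIBRARY.items():
        if tags.get("interest") in interests:
            return name
    for name, tags in CONTENT_LIBRARY.items():
        if tags.get("age_group") == profile.age_group:
            return name
    return random.choice(list(CONTENT_LIBRARY))

def update_interests(interests, shown, dwell_seconds):
    # Dwell-time feedback: if content held attention, show more like it.
    tag = CONTENT_LIBRARY[shown].get("interest")
    if tag and dwell_seconds >= DWELL_THRESHOLD:
        interests.add(tag)
    return interests
```

In a real deployment the profile would come from an inference model running on the embedded camera feed, and the library would be far larger, but the select-measure-adapt shape of the loop is the same.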

Companies can also use the screens to entice viewers into their premises with on-the-spot promotions, such as offering free coffee with the purchase of a food item. “Customers come back out, sit in their car, and eat and drink what they just bought while they’re waiting for a charge,” Northrup says.

Digital promotions can be scheduled in advance. For example, a charging station operator can arrange to run a New Year’s special and have the content automatically return to normal the next day. Operators control content delivery remotely and can select content for multiple screens in different locations with the press of a button.
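That schedule-then-revert behavior can be modeled as a lookup against a default playlist: a timed entry overrides the default while it is active, and the screens fall back automatically when it expires. Here is a minimal sketch; the schedule format, dates, and playlist names are invented for illustration.

```python
from datetime import datetime

DEFAULT_PLAYLIST = "standard_rotation"

# Each entry: (start, end, playlist). A New Year's special runs through
# the holiday and screens revert to the default content the next day,
# with no manual intervention.
SCHEDULE = [
    (datetime(2023, 12, 31), datetime(2024, 1, 2), "new_years_special"),
]

def playlist_for(now):
    # Return the first scheduled override active at `now`,
    # otherwise the default playlist.
    for start, end, playlist in SCHEDULE:
        if start <= now < end:
            return playlist
    return DEFAULT_PLAYLIST
```

Pushing one schedule like this to many screens is what lets an operator change content across multiple locations with the press of a button.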


Smart Digital Signage Increases Profitability

Charging can take 20 to 30 minutes or more, giving businesses plenty of time to display money-making ads to a captive audience. But the content doesn’t have to be all advertisements. USSI Global is working with broadcast networks to incorporate television programming, which could range from cooking and home improvement shows to live news and local sports coverage.

“Maybe in Georgia you’re playing a Georgia Bulldogs game, and in Alabama you’re showing the Crimson Tide,” Flynn says. The large screens can also be divided to simultaneously show programs and related ads, such as for team merchandise.

Over time, analytics will reveal trends about people who frequent the charging station and the surrounding area. That will enable companies to create even more effective content for their signs and adjust the menus or products in their adjacent businesses to better suit customers, boosting profitability.

The combination of advertising and increased business volume will help charging station operators recoup setup costs quickly and cover the expense of providing a charge, Northrup says: “The charging can be free because the revenue generated from the content you display offsets the cost.”

Free service is a competitive advantage that will draw more charging customers, who may also spend money at the business. With additional eyeballs viewing the ads, advertisers may also pay operators more to display them.

Getting Started with Charging Stations and Digital Signage Displays

For businesses that would like to deploy charging stations, USSI Global provides a total service model from product to permitting and installation to infrastructure. It also provides post-installation service, fixing problems such as a disrupted internet connection, a failed screen, or a kiosk that gets bumped by a vehicle.

In addition, the company collects and processes data from the digital signs and sends the information to charging station owners, who can use it to create content and settings, including adjusting the parameters for ad changes. While some companies produce their own content, others rely on third-party providers or work with USSI Global, which has partnerships with content providers.

A Cleaner, Brighter Future

As the need for charging stations grows, enhancing them with digital signs could provide the incentive operators need to fill the demand. “I think you’ll see more and more businesses with two or three of them in front of their place,” Northrup says.

And as AI becomes more sophisticated, it will lead to deeper and more valuable customer insights.

“AI started out giving answers to yes-or-no questions and now it measures demographics and mood. Capabilities will become greater over time, enabling more complex decisions about content triggering,” Northrup says. For charging stations with AI-enabled digital signs, that means one thing: “There’s nowhere to go but up.”

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

AI-Powered Retail Digital Signage Transforms Superstores

If you shop for groceries in a superstore, you know it can be overwhelming. Endless aisles and options to choose from. Do you make a beeline to the items you need or wander around looking for the best deals?

The retailers who operate these stores want to know how you shop. Running on paper-thin margins, they need to optimize their marketing strategies and practices to improve the bottom line. But traditional in-store methods—from taste testing to flyers to static signage—aren’t doing the trick. And the benefits of online shopping data analytics aren’t available in street-side retail.

That’s why innovative businesses are transforming their digital signage displays into smart retail solutions with the latest artificial intelligence and computer vision technologies.

AI-powered retail digital signage offers high-value information that store managers have not had in the past: which advertisements are the most eye-catching, where shoppers dwell, and which areas have the highest traffic flow. And perhaps most important is real-time data about customer demographics such as age range and gender. All these factors allow content to be tailored on the spot while monitoring trends over time.
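In practice, "tailored on the spot" often reduces to rule-based selection over the audience attributes the vision pipeline reports. A minimal sketch follows; the attribute names, ad catalog, and targeting rules are hypothetical illustrations, not NEXCOM's actual API or schema:

```python
# Minimal rule-based ad selection over audience attributes reported by
# a vision pipeline. Ad names, attributes, and rules are hypothetical.

ADS = {
    "sports_gear": {"age": ("18-35",), "min_dwell": 0},
    "home_goods":  {"age": ("36-55", "56+"), "min_dwell": 0},
}

def pick_ad(audience):
    """Return the first ad whose targeting rules match the audience."""
    for ad, rules in ADS.items():
        if (audience["age_range"] in rules["age"]
                and audience["dwell_seconds"] >= rules["min_dwell"]):
            return ad
    return "default_brand_spot"  # fallback when no rule matches

print(pick_ad({"age_range": "18-35", "dwell_seconds": 4}))  # sports_gear
```

A production system would weight rules by campaign priority and measured engagement rather than taking the first match, but the shape—detected attributes in, content decision out—is the same.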

“With the help of computer vision and edge AI computing technology, retailers can review their marketing effectiveness with a bigger scope,” says Kim Huang, Sales Manager at NEXCOM, a global leader in IoT solution development. “They can evaluate return on investment, do revenue comparisons before and after a certain marketing campaign, and quickly optimize the advertisement accordingly.”

Edge AI Power in Action

One of the largest supermarket chains in Asia needed an economical way to understand customer behavior and implement more targeted marketing efforts. The company worked with NEXCOM, deploying its AI Precision Marketing solution with 2,000 digital display screens across 200 stores.

The solution makes it possible to measure anonymously how long shoppers engage with an advertisement, their demographic profile, and which products hold their interest or go into their shopping carts. Not only did the retailer increase sales by 30% over one year, but it also gained a new revenue stream by selling brand advertising.

“With the help of #ComputerVision and #edge #AI computing technology, #retailers can review their #marketing effectiveness with a bigger scope.”—Kim Huang, Sales Manager, @NEXCOMUSA via @insightdottech

The client specifically required a stable, fanless system that could run video cameras 24/7. The heart of the platform is the AIEdge-X® 100, which drives the content for two back-to-back digital displays while simultaneously handling audience measurement via two independent cameras. In this case, the retailer has 10 screens in each store.

“The software integrated inside this hardware doesn’t just work as a digital signage player, there is also the audience measurement in the background,” says Huang. “All the data is processed at the edge, and then uploaded to the cloud server for generating different kinds of reports to help the business owner make better decisions.”
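The edge-to-cloud split Huang describes can be sketched in a few lines: raw per-viewer events are reduced to an anonymous aggregate on the gateway, and only that compact summary travels upstream. The field names and report shape below are hypothetical, not NEXCOM's actual schema:

```python
# Sketch of the edge-to-cloud split: raw measurement events are
# aggregated locally; only the anonymous summary is uploaded.
# Field names and report shape are hypothetical, not NEXCOM's schema.
from collections import Counter

def summarize_at_edge(events):
    """Reduce raw per-viewer events to an anonymous aggregate report."""
    return {
        "viewers": len(events),
        "avg_dwell_s": round(
            sum(e["dwell_s"] for e in events) / max(len(events), 1), 1),
        "age_ranges": dict(Counter(e["age_range"] for e in events)),
    }  # this summary, not raw video, is what gets sent to the cloud

events = [
    {"dwell_s": 8,  "age_range": "18-35"},
    {"dwell_s": 15, "age_range": "36-55"},
    {"dwell_s": 5,  "age_range": "18-35"},
]
print(summarize_at_edge(events))
```

Keeping the reduction at the edge is what makes the measurement anonymous by design: no identifiable imagery needs to leave the store.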

The AIEdge-X® 100 is powered by an Intel® Celeron® processor and the NEXCOM AIBooster-X2 deep-learning accelerator card, which includes two Intel® Movidius™ Myriad™ X VPUs. This processing power makes the simultaneous operation of two cameras possible. The edge gateway also includes the Intel® OpenVINO™ toolkit, and third-party 3D software for anonymized facial recognition.

“To do the computer vision at the edge you need quite a high-power computer system,” says Huang. “In this case, the Celeron processor combined with our adapted Movidius VPU provided the performance required.”

What’s next for the company? The success of this project has the retailer, which operates more than 1,000 grocery stores, planning to roll out the AI Precision Marketing solution in another 200 locations over the next year.

Smart Digital Signage Provides Data You Can Count On

Big data analytics also provides a wide range of other business benefits. Marketing efforts can be tailored to the time of day and the type of customers coming into the store. For example, you might run a hot pizza and cold beer promotion after 6 p.m. targeted to office workers. Or a special on fresh fish and bread right out of the oven in the afternoon for stay-at-home parents.

Analyzing the purchasing habits of customer profiles informs messaging and content design. And all this dynamic content can be pushed out from a central management control system—to all stores or just one.

And there’s another significant advantage that you might not expect from a smart digital signage solution. Continued AI-enabled data collection can improve supermarket operations and reduce costs. Ongoing information on purchasing patterns informs supply and demand forecasts so the right products are on the right shelves at the right time.
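As a toy illustration of how purchasing patterns feed supply forecasts, a trailing moving average over recent sales is the usual baseline; real retail systems layer seasonality, promotions, and holidays on top. The sales figures below are invented for illustration:

```python
# Baseline demand forecast: trailing moving average of recent sales.
# Real retail forecasting adds seasonality, promotions, and holidays;
# this is only the simplest illustrative starting point.

def moving_average_forecast(daily_sales, window=7):
    """Forecast tomorrow's demand as the mean of the last `window` days."""
    recent = daily_sales[-window:]
    return sum(recent) / len(recent)

# Last week's unit sales for one SKU (illustrative numbers).
sales = [120, 135, 128, 150, 160, 210, 190]
print(round(moving_average_forecast(sales), 1))  # 156.1
```

Even this crude baseline, refreshed daily from point-of-sale and signage-engagement data, is enough to flag when a promoted item will outrun its shelf stock.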

New Opportunities for Smart Retail Systems Integrators

For systems integrators (SIs), the AI Precision Marketing solution deploys almost like an off-the-shelf product. The SI primarily needs to make sure the client has an internet-ready network connection.

“Upon arrival of the equipment, the integrator only needs to pop in the camera, connect the screens to AIEdge-X® 100, and connect the AIEdge-X® 100 to the VPN router,” Huang says. “All data sent from the edge to the cloud goes through a VPN channel for security. After that, they adjust the camera angle, and the system is ready to go right to work.”

SIs that may have deep experience in serving retail customers but lack AI development skills now have a solution that offers new opportunities with existing and new customers, Huang explains.

Smart Retail Has a Bright Future

The future of AI and vision in supermarkets seems almost endless. The more retailers know about their customers, the better they can serve them with exceptional shopping experiences. And when digital displays are interactive, information becomes a two-way street.

People want to know more about the food they purchase: where their produce was grown, healthy food options, price comparisons, and much more. The latest innovations in AI, computer vision, and digital signage displays are making these use cases a reality today and into the future.

 

This article was edited by Christina Cardoza, Senior Editor of insight.tech.

Democratizing AI for All with Plainsight and Intel®

Elizabeth Spears & Bridget Martin

[podcast player]

When you think about AI, you don’t typically think about agriculture. But imagine how much easier farmers’ lives would be if they could use computer vision to track livestock or detect pests in their fields.

Just one problem: How can an enterprise leverage AI if they don’t already have a team of data scientists? This is a pressing question not only in agriculture but also in a wide range of industrial businesses, such as manufacturing and logistics. After all, data scientists are in short supply!

In this podcast, we explore how companies can deploy computer vision with their existing staff—no expensive hiring or extensive training required. We explain how to democratize AI so non-experts can use it, the possibilities that come from making AI more accessible, and unexpected ways AI transforms a range of industries.

Our Guests: Plainsight and Intel®

Our guests this episode are Elizabeth Spears, Co-Founder and Chief Product Officer for Plainsight, a machine learning lifecycle management provider for AIoT platforms, and Bridget Martin, Director of Industrial AI & Analytics of the Internet of Things Group at Intel®.

In her current role, Elizabeth works on innovating Plainsight’s end-to-end, no-code computer vision platform. She spends most of her time focusing on products offered by Plainsight, particularly thinking of what new products to build, what order to build them in, and why they are needed.

Bridget focuses on building up the knowledge and understanding that occur during the process of adopting AI, especially in an industrial space. Whether it is manufacturing or critical infrastructure, Bridget and her team at Intel® spend their time working to develop solutions that address the challenges of incorporating AI into an industrial ecosystem.

Podcast Topics

Elizabeth and Bridget answer our questions about:

  • (2:19) Plainsight’s rebranding and evolution from Sixgill
  • (7:32) The rapid evolution of AI and computer vision
  • (10:08) The unexpected use cases coming from advancements in AI
  • (13:33) How companies can help make AI more accessible
  • (16:07) The biggest challenges industries face when adopting AI
  • (18:31) How to get organizations to start thinking differently about AI
  • (21:30) The benefits of democratizing AI and computer vision
  • (23:50) How organizations can best get started with AI

Related Content

To learn more about the future of democratizing AI, read Build ML Models with a No-Code Platform. For the latest innovations from Plainsight, follow them on Twitter at @PlainsightAI and on LinkedIn at Plainsight.

 

Transcript edited by Christina Cardoza, Senior Editor for insight.tech.

 


Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode we talk to leading experts about the latest developments in the Internet of Things. Today I’m discussing the democratization of AI with Elizabeth Spears, Co-Founder and Chief Product Officer at Plainsight, and Bridget Martin, Director of Industrial AI and Analytics of the Internet of Things Group at Intel®.

AI already has a solid track record in manufacturing. But, as the technology constantly advances, it’s turning up in all kinds of rough-and-ready use cases. For example, AI is now being used to count cows! But AI is useless if no one understands how to use it, right? And it’s not very often you find data scientists on a farm.

So, in this podcast I want to explore the possibilities for AI in all kinds of rugged use cases—not just in agriculture, but across the industrial sector. We’ll discuss the importance of making AI more accessible, and the new and exciting use cases that come from its democratization. But before we get started, let me introduce our guests. Elizabeth, I’ll start with you. Welcome to the show.

Elizabeth Spears: Hi, thank you for having me. I’m excited to chat with you today.

Kenton Williston: Likewise. And can you tell me about Plainsight, and your role there?

Elizabeth Spears: Sure. So, here at Plainsight we have an end-to-end, no-code computer vision platform. So, it allows both large and small organizations to go from data organization, to data annotation, to training a machine learning model or a computer vision model, and deploying. So, deploying it on-prem, on the edge, or almost anywhere in between, and then being able to monitor all of your computer vision deployment in a single pane of glass. My role is the Co-Founder and Chief Product Officer. So, basically everything around what we build, in what order, and why, is really where I spend most of my time—along with my amazing team.

Kenton Williston: I am really looking forward to hearing about all the details there. That sounds very, very interesting. And one thing I’m curious about upfront, though, is I had known your company as Sixgill, and I’m wondering why it’s been rebranded to Plainsight, and what that has to do with the company’s evolution.

Elizabeth Spears: Yeah, good question. So, like a true product-focused company, we listened to what our customers wanted and needed. And we basically took a transformational turn from an IoT platform to a computer vision platform. So, what we kept hearing from our customers was that they wanted more and more AI, and then, specifically, more computer vision. So we took this foundation that we had of a platform—an IoT platform that was used for high-throughput enterprise situations—and we made it specialized for both large and small companies to be able to build and manage their computer vision solutions, really 10x faster than most of the other available solutions out there. So, we’re talking about kind of implementing the same use case with even higher accuracy in sort of hours instead of months. And that’s really been our focus.

So, the name—the rebrand for the name Plainsight—really came from this “aha” moment that we have with our customers, where they often have thousands of hours of video or image data that’s really this untapped resource in the enterprise. And when we start talking to them about how the platform works, and all the big and small ways that data can provide value to them, they all of a sudden kind of get it. It’s almost like everything that I can see—if I sat there and watched it without blinking—all of that could actually just be identified and analyzed automatically. So they have this “aha” moment that we talk about as sort of the elephant in the room, which is—the elephant is our icon—where you start to understand how computer vision works, and you just can’t unsee all the places that it can be applied. So we’re bringing all of that value into Plainsight for our customers, and that’s where the name came from. Our icon, like I said, is that elephant that we’ve all really bonded to named Seymour, and he’s named that because he can “see more.” He can help see more in all that visual data.

Kenton Williston: Oh boy. So, I have to say, I have a well-earned reputation for being the dad-jokes guy, and I think I would fit right in.

Elizabeth Spears: Yeah. We were very pleased with that one internally.

Kenton Williston: Yeah. So, the evolution—that’s a really great story, and I think is reflective of where so much technology is going right now, and how central AI and computer vision in particular have become just everywhere. And I’m really excited to hear more from your perspective, as well as from Bridget’s perspective. So, Bridget, I’d like to say welcome to you as well.

Bridget Martin: Yeah. Thank you for having me. Super excited to be here.

Kenton Williston: So, tell me a little bit more about your role at Intel.

Bridget Martin: Well, so at Intel, obviously everybody really knows us for manufacturing chips, right? That is absolutely Intel’s bread and butter, but what I loved hearing Elizabeth talk about just now is the real need to be connected to and understand the ultimate consumers of these solutions, and ultimately of this technology. And so the main function of my team is really to have and build up that knowledge and understanding of the pain points that are occurring in the process of adopting AI technology in the industrial space—whether it’s manufacturing or critical infrastructure—and really working with the ecosystem to develop solutions that help address those pain points. Ultimately, top of mind for me is the complexity of deploying these AI solutions. Like Elizabeth was saying, there’s such great opportunity for capabilities like computer vision in these spaces, but it’s still a really complex technology. And so, again, partnering with the ecosystem—whether it’s systems integrators or software vendors—to help deploy into the end-manufacturer space, so that they can ultimately take advantage of this exciting technology.

Kenton Williston: Yeah. And I want to come back to some of those pain points, because I think they’re really important. I think what both your organizations are doing is really valuable to solving those challenges. And I should also mention, before we get further into the conversation, that the insight.tech program as a whole and this IoT chat podcast are Intel publications. So that’s why we’ve gathered you here today. But, in any case, while those challenges are very much something I want to talk to you about, I think it’s worth doing some framing of where this is all going by talking about what’s happening with the applications. Because, like Elizabeth was just saying, we’re at a point where, if you can see it—just about anything you can see, especially in an industrial context—there’s something you can do with that data from an AI–computer vision point of view. And, Bridget, I’m interested in hearing what you are seeing in terms of new applications that you couldn’t do five years ago, a year ago, six months ago. Everything’s moving so fast. Where do things stand right now?

Bridget Martin: Yeah. Well, let’s kind of baseline in where we’re ultimately trying to go, right? Which is the concept of Industry 4.0, which is essentially this idea around being able to have flexible and autonomous manufacturing capabilities. And so, if we rewind five, ten years ago, you have some manufacturers that are what we would consider more mature manufacturing applications. And so those are scenarios where you already see some automated compute machines existing on the factory floor—which are going to be, again, automating processes but also, most critically when we’re talking about AI, outputting data—whether it’s the metadata of the sensors, or the processes that that automated tool is performing. But then you also have a significant portion of the world that is still doing a lot of manual manufacturing applications.

And so we really have to look at it from these two different perspectives. Where the more mature manufacturing applications that have some automation in pockets, or in individual processes within the manufacturing floor space—they’re really looking to take advantage of that data that’s already being generated. And this is where we’re seeing an increase in predictive maintenance-type applications and usages—where they’re wanting to be able to access that data and predict and avoid unplanned downtime for those automated tools. But then when we’re looking at those less mature markets, they’re wanting to skip some of these automation phases—going from an Industry 2.0 level and skipping right into Industry 3.0 and 4.0 through the use of leveraging computer vision, and enabling now their factory to start to have some of the same capabilities that we humans do, and where they’re, again, deploying these cameras to identify opportunities to improve their overall factory production and the workflow of the widgets going through the supply chain within their factory.

Kenton Williston: Yeah. I think that’s very, very true, everything you’ve said. And I think one of the things that’s been interesting to me is just seeing that it’s not just the proliferation of this technology, but it’s going into completely new applications. The use cases are just so much more varied now, right? It’s not just inspecting parts for defects, but, like Elizabeth was saying, basically anything that you could point a camera at, there’s something you can do with that data now. And so, Elizabeth, I’d love to hear some more examples of what you were seeing there. And is it really just the manufacturing space? Or is it a wider sphere of applications in the rugged industrial space where you’re seeing all kinds of new things crop up?

Elizabeth Spears: It’s really horizontal across industries. We see a lot of cases in a lot of different verticals, so I’ll go through some of the fun examples and then some of my favorites. So, one of the ones that is really cool, that’s sort of just possible, is super resolution—a method called super resolution. And one of the places it’s being used, or they’re researching using it, is for less radiation in CT scans. So, basically what this method does is, if you think of all of those FBI investigation movies, where they’re looking for a suspect and there’s some grainy image of a license plate or a person’s face, and the investigator says, “Enhance that image.” Right? And so then all of a sudden it’s made into this sharp image and they know who did the crime, or whatever it is. That technology absolutely did not exist most of the time that those types of things were being shown. And so now it really does. So that’s one cool one.

Another one is simulated environments for training. So, there’s cases where the data itself is hard to get, right? So, things like rare events, like car crashes. Or if you think about gun detection, you want your models around these things to be really accurate, but it’s hard to get data to train your models with. So just like in a video game, where you have a simulated environment, you can do the same thing to create data. And people like Tesla are using this for crash detection, like I mentioned, and we’re using it as well for projects internally. My favorite cases are just the really practical cases that give an organization quick wins around computer vision, and they can be small cases that provide really high value. So, one that we’ve worked on is just counting cattle accurately, and that represents tens of millions of dollars in savings for a company that we’re working with. And then there’s more in agriculture—where you can monitor pests. And so you can see if you have a pest situation in your fields and what you can do about it. Or even looking at bruising in your fruit—things like that. So, it’s really across industries, and there’s so much, well, low-hanging fruit, as we were talking about agriculture, where you can really build on quick wins in an organization.

Kenton Williston: It’s just all over the place, right? Anything that you can think of that might fall into that category of an industrial, rugged kind of application, there’s all kinds of interesting new use cases cropping up. And one of the things that I think is really noteworthy here is a lot of these emerging applications, like in the agricultural sector, are in places where you don’t traditionally think of there being organizations with data science teams or anything like that. Now, I will say a little aside here, that sometimes people think of farming as being low tech, but really it’s not. People have been using a lot of technology in a lot of ways for a long time, but nonetheless, this is still an industry that’s not typically thought of as being a super high-tech industry, and certainly not one where you would expect to find data scientists. Which leads me to the question of how can organizations like this, first of all, realize that they have use cases for computer vision? And, second of all, actually do something to take advantage of those opportunities. So, Elizabeth, I’ll toss that over to you first.

Elizabeth Spears: Yeah. So this is kind of why we built the platform the way we did. First, hiring machine learning and data science talent is really difficult right now. And then, even if you do have those big teams, building out an end-to-end platform to be able to build these models, train them, monitor them, deploy them, and keep them up to date, and kind of the continuous training that many of these models require to stay accurate—it requires a lot of different types of engineers, right? You need the site-reliability guys. You need the big data guys. You need a big team there. So it’s a huge undertaking if you don’t have a tool for it. So that’s why we built this platform end-to-end, so that it would make it more accessible and simpler for organizations to just be able to adopt it. And, like I was saying, I feel like often we talk about AI as: the organization has to go through a huge AI transformation, and it has to be this gigantic investment, and time, and money. But what we find is that when you can implement solutions in weeks, you get these quick wins, and then that is really what starts to build value.

Kenton Williston: Yeah, that’s really interesting. And I think the general trend here is toward making the awareness of what computer vision can do for an organization so much more widespread, and getting people thinking about things differently. And then I think where a lot of folks are running into trouble is that, “Okay, we’ve got an idea. How do we actually do something with that?” And I think tools like Plainsight are a critical, critical part of that. But I know Intel’s also doing a lot of work to democratize AI. And, Bridget, I’d love to hear from your point of view what some of the biggest challenges are, and what Intel’s doing to address those challenges and make these capabilities more broadly available.

Bridget Martin: Yeah. I mean, like I was saying toward the beginning, complexity is absolutely the biggest barrier to adoption when we’re talking about AI in any sort of industrial application and scenario. And a lot of that is to some of the points that yourself and Elizabeth were making around the fact that data scientists are few and far between. They’re extremely expensive in most cases. And in order to really unleash the power of this technology, this concept of democratizing it and enabling those farmers themselves to be able to create these AI-training pipelines and models, and do that workflow that Elizabeth was describing as far as deploying them and retraining and keeping them up to date—that’s going to be the ultimate holy grail, I think, for this technology, and really puts it in that position where we’re going to start seeing some significant, world-changing capabilities here.

And so of course that’s, again, top of mind for me as we’re trying to enable this concept of Industry 4.0. And so Intel is doing a multitude of things in this space. Whether it’s through our efforts like Edge Insights for Industrial, where we’re trying to help stitch together this end-to-end pipeline and really give that blueprint to the ecosystem of how they can create these solutions. Or it’s even down to configuration-deployment tools, where we’re trying to aid systems integrators on how they can more easily install a camera, determine what resolution that needs to be on, help fine-tune the lighting conditions—because these are all factors that greatly impact the training pipeline and the models that ultimately get produced. And so being able to enable deployment into those unique scenarios and lowering the complexity that it takes to deploy them—that’s ultimately what we’re trying to achieve.

Kenton Williston: Yeah, absolutely. One thing that strikes me here is that there is a bit of a shift in mindset that I think is required, right? So, what I’m thinking about here is that I think in large part—because of the complexity that has traditionally been associated with AI and computer vision, and when organizations are thinking about what they can do with their data—I think oftentimes there’s kind of a top-down, “let’s look for some big thing that we can attack, because this is going to require a lot of effort and a lot of investment for us to do anything with this technology.” And I think there are certainly going to be cases where that approach makes sense. But I think there are a lot of other cases, like we’ve been talking about, and you’ve got all these very niched, specialized scenarios, where really the way that makes sense to do it is to just solve these small, low-hanging fruit problems one at a time, and build up toward more of an organization-wide adoption of computer vision. So, Elizabeth, I’d like to hear how you’re approaching that with your customers—what kind of story, how you’re bringing them this kind of “aha” moment, and what gets them to think a little bit differently about how they can deploy this technology.

Elizabeth Spears: Yeah. And I want to take a second just to really agree with Bridget there on how challenging and interesting some of the on-the-ground, real-world things that come up with these deployments are, right? So, it’s like putting up those cameras and the lighting, like Bridget was saying, but then things come up—like all of a sudden there’s snow, and no one trained for snow. Or there’s flies, or kind of all of these things that will come up in the real world. So, anyway, that was just an aside of what makes these deployments fun and keeps you on your toes. It’s really about expanding AI through accessibility, for us. AI isn’t for the top five largest companies in the world, right? We want to make it accessible not just through simplified tools, but also simplified best practices, right? So, when you can bake some of those best practices into the platform itself, companies and different departments within companies have a lot more confidence using the technology. So, like you’re saying, we do a lot of education in our conversations, and we talk to a lot of different departments. So we’re not just talking to data scientists. We like to really dig into what our customers need, and then be able to talk through how the technology can be applied.

Kenton Williston: To me, a lot of what I’m hearing here is you’ve actually got a very different set of tools today, and it requires a different way of thinking about your operations. Because you’ve got all these new tools and because they’re available to such a wider array of use, there are a lot of different ways you can go after the business challenges that you’ve got. And, Bridget, this brings me to a question—kind of a big-picture question: what do you see as the benefits of democratizing AI and computer vision in this way, and making these sorts of capabilities available to folks who are expert in the areas of work, but not necessarily experts in machine learning and computer vision and all the rest?

Bridget Martin: Oh my goodness, it’s going to be huge. When we’re talking about what I would call a subject-matter expert, and really putting these tools in their hands to get us out of this cycle where it used to have to be, again—taking that quality-inspection use case—something that we can all kind of baseline on: you have a factory operator who would typically be sitting there manually inspecting each of the parts going through. And when you’re in the process of automating that type of scenario, that factory operator needs to be in constant communication with the data scientist who is developing the model so that that data scientist can ensure that the data that they’re using to train their model is labeled correctly. So now think if you’re able to take out multiple steps in that process, and you’re able to enable that factory operator or that subject-matter expert with the ability to label that data themselves—the ability to create a training pipeline themselves. These all sound like crazy ideas—enabling non–data scientists to have that function—but that’s exactly the kind of tooling that we need in order to actually properly democratize AI.

And we’re going to start to see use cases that myself or Elizabeth or the plethora of data scientists that are out there have never thought about before. Because when you start to put these tools in the hands of people and they start to think of new creative ways to apply those tools to build new things—this is what I was talking about earlier—this is when we’re really going to see a significant increase, and really an explosion of AI technologies, and the power that we’re going to be able to see from it.

Kenton Williston: Yeah. I agree. And it’s really exciting even just to see how far things have come. Like I said, you don’t have to go back very far—six months, a year—and things are really, really different already. I can barely even picture where things might go next. Just, everything is happening so fast, and it’s very, very exciting. But this does lead me to, I think, a big question. Which is, well, where do organizations get started, right? This is so fast moving that it can seem, I’m sure, overwhelming to a lot of organizations to even know where to begin their journey. So, Elizabeth, where do you recommend companies start?

Elizabeth Spears: Yeah. So, I mean, there are so many great resources out there on the internet now, and courses, and a lot of companies doing webinars and things like that. Here at Plainsight we have a whole learning section on our website, that has an events page. And so we do a lot of intro-to-computer-vision-type events, and it’s both for beginners, but also we have events for experts, so they can see how to use the platform and how they can speed up their process and have more reliable deployments. We really like being partners with our customers, right? So we research what they’re working on. We find other products that might apply as well. And we like kind of going hand in hand and really taking them from idea, all the way to a solution that’s production ready and really works for their organization.

Kenton Williston: That makes a lot of sense. And I know, Bridget, that was a lot of what you were talking about in terms of how Intel is working through its ecosystem. Sounds like there’s a lot of work you’re doing to enable your partners and, I imagine, even some of your end users and customers. Can you tell me a little bit more about the way that that looks in practice?

Bridget Martin: Yeah, absolutely. So, one of my favorite ways of approaching this sounds very similar to Elizabeth really partnering with that end customer—understanding what they’re ultimately trying to achieve, and then working your way backward through that. So, this is where we pull in our ecosystem partners to help fill those individual gaps between where the company is today and where they’re wanting to go. And this is one of the great things about AI—is what I like to call a bolt-on workload—where you’re not having to take down your entire manufacturing process in order to start dabbling or playing with AI. And it’s starting to discover the potential benefit that it can have for your company and your ultimate operations. It’s relatively uninvasive to deploy a camera and some lighting and point it at a tool or a process—versus having to bring down an entire tool and replace it with a brand new, very large piece of equipment. And so that really is going to be one of the best ways to get started. And we of course have all kinds of ecosystem partners and players that we can recommend to those end customers, who really specialize in the different areas that they’re either wanting to get to or that they’re experiencing some pain points in.

Kenton Williston: So you’re raising a number of really interesting points here. One is, I love this idea of the additive workload, and very much agree with that, right? I think that’s one of the things that makes this whole field of AI—but particularly computer vision—so incredibly powerful. And the other thing that I think is really interesting about all of this is because there are so many point use cases where you can easily add value by just inserting a camera and some lighting somewhere into whatever process you’re doing, I think it makes this a sort of uniquely easy opportunity to do sort of proofs of concept—demonstrate the value, even on a fairly limited use case, and then scale up. But this does lead me to a question about that scaling, right? While it’s great to solve a bunch of little point use cases, at some point you’re going to want to tie things together, level things up. And so I’d be interested in hearing, Elizabeth, how Plainsight views this scaling problem. And I’m also going to be interested in hearing about how Intel technology impacts the scalability of these solutions.

Elizabeth Spears: We’re looking at scale from the start, because, really, the customers that we started with have big use cases with a lot of data. And then the other way that you can look at scale is spreading it through the organization. And I think that really comes back to educating more people in the organization that they can really do this, right? Especially in things like agriculture—someone who’s in charge of a specific field or site or something like that may or may not know all the places that they can use computer vision. And so what we’ve done a lot of is we’ll talk to specific departments within a company. And then they say, “Oh, I have a colleague in this other department that has another problem. Would it work for that?” And then it kind of spreads that way, and we can talk through how those things work. So I think there’s a lot of education in getting this to scale for organizations.

Kenton Williston: And how is Intel technology, and your relationship with Intel more broadly, helping you bring all these solutions to all these different applications?

Elizabeth Spears: They’re really amazing with their partners, and bringing their partners together to give enterprises really great solutions. And not only with their hardware—but definitely their hardware is one of the places that we utilize them, because we’re just a software solution, right? And so we really need those partners to be able to provide the rest of the full package, to be able to get a customer to their complete solution.

Kenton Williston: Makes sense. We’re getting close to the end of our time together, so I want to spend a little bit of time here just kind of looking forward and thinking about where things are going to go from here. Bridget, where do you see some of the most exciting opportunities emerging for computer vision?

Bridget Martin: Elizabeth was just touching on this at the end, and when we’re talking about this concept of scalability, it’s not just scaling to different use cases, but we also need to be enabling the ability to scale to different hardware. There’s no realistic scenario where there is just one type of compute device in a particular scenario. It’s always going to be heterogeneous. And so this concept—and one of the big initiatives that Intel is driving around oneAPI and “Write once. Deploy anywhere”—I think is going to be extremely influential and help really transform the different industries that are going to be leveraging AI. But then, also, I think what’s really exciting coming down the line is this move, again, more toward democratization of AI, and enabling that subject-matter expert with either low-code or no-code tooling—really enabling people who don’t necessarily have a PhD or specialized education in AI or machine learning to still take advantage of that technology.

Kenton Williston: Yeah, absolutely. So, Elizabeth, what kind of last thoughts would you like to leave with our audience about the present and future of machine vision, and how they should be thinking about it differently?

Elizabeth Spears: I think I’m going to agree with Bridget here, and then add a little bit. I think it’s really about getting accessible tools into the hands of subject-matter experts and the end users, making it really simple to implement solutions quickly, and then being able to expand on that. And so, again, I think it’s less about really big AI transformations, and more about identifying all of these smaller use cases or building blocks that you can start doing really quickly, and over time make a really big difference in a business.

Kenton Williston: Fabulous. Well, I look forward very much to seeing how this all evolves. And with that, I just want to say, thank you, Elizabeth, for joining us today.

Elizabeth Spears: Yeah. Thank you so much for having me.

Kenton Williston: And Bridget, you as well. Really appreciate your time.

Bridget Martin: Of course. Pleasure to be here.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from Plainsight, follow them on Twitter at @PlainsightAI, and on LinkedIn at Plainsight.

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Telemedicine Gets a Checkup from ViTel Net

As we learn to live in a COVID-inflected world, it’s clear that the way we experience healthcare has changed forever. Though in-person visits to the doctor, clinic, or hospital are once more a possibility, telemedicine isn’t going anywhere. But are patients, providers, and—perhaps most crucially—healthcare systems ready for this new reality?

Dr. Richard Bakalar, Chief Strategy Officer at ViTel Net, a provider of scalable virtual care solutions, has an impressive history with telemedicine. He garnered it while traveling internationally with the White House, caring for those affected by domestic natural disasters, and leading the Navy’s transition to telemedicine. He’ll talk with us about the lessons learned from pandemic telemedicine—and its challenges—and how the whole healthcare landscape can benefit going forward.

What is the value of telemedicine from both a patient and provider perspective?

Convenience is one big advantage, but a more important factor, I think, is getting the right data at the right time. One of the challenges with face-to-face care is that there is often a lag between when a patient requests care or is scheduled for care, and when a patient has the problem. In the telehealth sphere you can synchronize those times.

You can also provide context. The patient may be at home during the visit, and you may be able to see something in the background of the video, for example, that may show a compromised environment. That’s the sort of information that may not be available to a physician when the patient is seen in a clinic or hospital environment.

“The information generated by #VirtualVisits is going to be more and more critical to getting accurate analysis not only of #patients but of population #health.”–Dr. Richard Bakalar, CSO @Vitelnet via @insightdottech

So, more context, more timeliness, more convenience, even the ability to have more frequent evaluations—it all offers a lot of flexibility for optimizing the care schedule as well as the care environment. Sometimes face-to-face is superior when there’s a physical context required. But sometimes, like when timing is sensitive, then a virtual visit may be a better option.

What lessons have you learned throughout your long career with telemedicine?

When I migrated from the military into the private sector, I had the privilege of being the president of the American Telemedicine Association. And what we learned there is that a lot of organizations had telemedicine projects that were departmentally focused. And each of those projects created an independent proof of concept around how telemedicine could impact their care. What I learned early on is that you need more of a programmatic approach.

If you think about radiology, you don’t have a separate radiology division within each medical specialty: You have one radiology department that supports the entire continuum of care within the health system. Telemedicine could leverage that kind of a model, where we could take advantage of what’s available—from a protocol perspective, from a business perspective, even a technology-infrastructure perspective—and just change those minor things that need to be changed to adopt specialty modules on a single platform. And it doesn’t even have to be a telemedicine program—it could be an innovation program where telemedicine is one of the early use cases.

One of the lessons I learned early on was that governance needs to be centralized, technology needs to be centralized, and leadership needs to be top down to provide strategic support for the program—from a technical, administrative, and clinical perspective. But the innovation actually comes from the bottom up, from the end users in the field—in a hospital at the bedside, for instance. Innovation brought up from the bottom, and support coming from the top. And when you have that kind of multidisciplinary approach to governance, telemedicine can scale very nicely and can be very effective.

What is the challenge of implementing ad hoc telemedicine solutions?

It’s the challenge of using what I call an “app store approach” to telemedicine—where you have lots of different single applications that are not necessarily linked together. Data doesn’t flow between them, the workflow is not totally integrated, and the reporting is not necessarily normalized across those different applications.

But workflow and reporting need to be integrated. So having a platform with modules allows you to do that—with the reporting as well as the data capture. It also links back to the systems of record—such as the electronic health record, the PACS (picture archiving and communication systems) for images, and the business and related financial systems. That all needs to be in concert in order to provide the telemedicine service.

Why has the integration of telemedicine been so difficult in the healthcare industry?

There is a fragmented approach in the private sector. Each individual department has a separate project officer or a separate technology, and the data is all siloed. There’s also no business model yet for telemedicine in the healthcare industry, because reimbursement has traditionally been very limited.

So the challenge is to transform the governance, the technology infrastructure, the business-reimbursement models, the regulatory barriers that have been up for the past 10 or 15 years. Also to get adoption and acceptance by the patients, and—more important—by providers. Providers have been hesitant to adopt this capability because, before COVID, they were very busy with face-to-face care. With the arrival of COVID, they had to use the technology to be able to access their patients, and so they recognized the value of it.

Post-COVID, the issue is going to be that there are more patients than physicians have availability for. We still have problems with general access to healthcare, as well. So the question is how can limited resources—physician resources, ancillary health resources, other staff resources—be better utilized to provide better care to more people, more equitably, around the health system.

But I think there’s reason for optimism, because patients have seen the value. They use videoconferencing for work; they use video for entertainment. And so they say, “Why can’t I use it for my healthcare services?” So patients are going to demand better access to telehealth services going forward. And health systems are going to recognize that they’re understaffed in a lot of cases, and telehealth can be more efficient.

Then the payers have seen that telemedicine can actually save money for them in the long run—especially when it’s used for chronic conditions, or for high-cost services in the hospital health system. Episodes of care can be less expensive, even if individual encounters may be more expensive until the infrastructure has been scaled.

The key is that if you have multiple apps, it’s very expensive to maintain those interfaces. But if you have one platform that has multiple modules, maintaining that interface with the electronic health record and the data warehouses and the financial systems is much easier. That’s one of the things that organizations are going to have to make some investments in going forward.

How is ViTel Net helping to streamline and unify the electronic health record system?

There’s a lot of demand for organizations to modify electronic health record systems to support changing payment requirements and regulations. But, in the past, telemedicine has taken a backseat there. That’s been changing over the past year and a half, but it doesn’t change the fact that EHRs were primarily designed to be transactional systems; they were not designed to be customizable, configurable workflow engines—engines that can meet the demands of a remote visit.

What ViTel Net brings into play is agility. We can make very rapid changes in our platform, and then share the critical components with the transactional system. This happens both at the front and back ends—pulling in demographic and historical information, and then putting the summarized results of an encounter back into the electronic health record at the end of a transaction. This provides that continuity of care that’s needed in both face-to-face and virtual care. We help with the virtual visits, and provide videoconferencing and language processing—details that electronic health records are not suited to do, but that are required for virtual visits.

Is there a role for technologies like language processing in the telehealth domain?

AI technologies are important at even the most rudimentary level of language processing, particularly when you start having outreach to more diverse populations. Not everyone has English as a first language, and patients and their family members, as well as extended health networks, need to be able to communicate with the health system more effectively. So one of the things we’ve incorporated into the telehealth platform is language services, in video as well as audio, and in multiple languages.

One of the challenges is that all the information generated by the virtual visits of the past several years is missing from data warehouses, so you’re missing the opportunity to take advantage of it. Now, why isn’t that information in the data warehouse? Because most of those transactional systems—the electronic health records—don’t code for telehealth. In the past, it was a very small fraction of their business, kind of a rounding error, so to speak. And virtual visits were typically single events rather than continuity-care events, so it wasn’t a problem.

But now, as we move into delivering chronic care, the information generated by virtual visits is going to be more and more critical to getting accurate analysis not only of patients but of population health. And so the ability to code things properly, to be able to include them in the data warehouses, and to have a more comprehensive view of patients is going to be more critical going forward. And more accurate machine learning and artificial intelligence will be crucial to that.

There’s a great opportunity to use some of these new technologies, where the entertainment, retail, and financial industries have already done the heavy lifting, and we can leverage their experience with those capabilities in healthcare.

How can healthcare organizations set themselves up for success?

The good news is that telehealth is already on the third wave down this path of digital transformation. It started with PACS in the early 1990s, and then the electronic health record, and now telehealth platforms. One of the things that was learned with the first two waves is that you want to partner with an organization that’s going to co-invest. Are they going to share risks? Are they going to be reliable? Are they going to be innovative? And probably most important, are they going to provide the kind of support you need—not only for the initial implementation but also for the ongoing innovation, training, and support that’s going to be necessary to make that investment a value going forward.

I always like to ask the Why: “Why are you doing it?” Not so much the How. The How is actually very easy today, because technology is abundant and robust. Senior leadership needs to define the objectives, the goals—the Why of using telemedicine for their organization at that particular time. And then, how do they want to leverage it going forward? So that’s step number one, that governance piece.

The second step is to assemble a multidisciplinary team, so that you have the representation of not only the technologists but also the operational folks who have to fund and support the project from an investment and business-model perspective. And then the clinicians need to be on board, so that they can tell you what’s practical, and what’s needed, and where the pain points are.

And I always recognize that telemedicine is not a technology; it’s a service. That’s an important concept that organizations need to think about as they grow their programs. All the capabilities that you need for face-to-face care need to be available in the telemedicine sphere as well.

Related Content

To learn more about the future of telehealth, listen to our podcast Virtual and In-Person Care Come Together with ViTel Net and read Telehealth Is the Future of Care, and the Future Is Now. For the latest innovations from ViTel Net, follow them on Twitter at @ViTelNet and on LinkedIn at ViTel Net.

 

This article was edited by Christina Cardoza, Senior Editor for insight.tech.

Edge AI, Powerful Compute Cut Supply Chain Gridlock

From toilet paper shortages to skyrocketing lumber costs, COVID-19 exposed supply chain weaknesses almost immediately. These disruptions have caused cascading issues across industrial supply chains ranging from product delays and abrupt price increases to an inability to conduct business in certain sectors.

Logistics companies usually work to keep backlogs from expanding, but with global shutdowns preventing raw materials from being extracted and goods from being manufactured, there has been little they could do. However, one thing these organizations can control as we attempt to return to pre-COVID inventory levels is the efficiency with which goods are transported from loading dock to retail warehouses.

For example, rather than moving materials as soon as they are available, supply chain digital transformation could cut costs and balance inventory by only sending shipments once transport vehicles have reached 100% capacity.
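That dispatch rule can be sketched in a few lines. The function below is illustrative only (the article describes the policy, not an implementation): it greedily packs pending shipment volumes into vehicle loads and releases only the loads that hit 100% capacity, holding everything else back.

```python
# Illustrative sketch of the "ship only at 100% capacity" policy described
# above. First-fit-decreasing packing; any partially filled load is held.

def plan_dispatch(pending_volumes, vehicle_capacity):
    """Pack shipment volumes into vehicle loads; dispatch only full loads.

    Returns (dispatched, held): dispatched is a list of loads, each summing
    to exactly vehicle_capacity; held lists volumes waiting for more freight.
    """
    loads = []  # each entry: [remaining_capacity, [volumes]]
    for vol in sorted(pending_volumes, reverse=True):
        for load in loads:
            if vol <= load[0]:          # fits an existing open load
                load[0] -= vol
                load[1].append(vol)
                break
        else:                           # start a new load for this shipment
            loads.append([vehicle_capacity - vol, [vol]])
    dispatched = [items for rem, items in loads if rem == 0]
    held = [v for rem, items in loads if rem > 0 for v in items]
    return dispatched, held
```

A real planner would weigh holding costs and delivery deadlines against the savings from fuller trucks; this sketch only captures the capacity threshold itself.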

This is easier said than done because it means someone must constantly monitor shipping containers and delivery trucks for available space, then communicate any vacancies to transport and operations managers. But now, by combining computer vision AI with supply chain management, those human monitors can be bolstered by IoT tech.

Digitized Supply Chain Management Yields Efficiencies

At ports around the world, transport trucks enter through gates where Port Authority personnel record the origin, destination, and serial number of shipping containers for tracking purposes.

To eliminate traffic jams and the potential for human error in this process, many ports are installing optical character recognition (OCR) systems at the gates that automate container check-in. But these computer vision-based systems are capable of much more.
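One way an OCR gate can guard against misreads is the check digit built into the container numbering standard itself. Shipping container codes follow ISO 6346: ten characters (owner code, category, serial) plus an eleventh check digit derived from the first ten. The sketch below validates that digit, so a single-character OCR error is flagged before it enters the tracking system; the function names are my own.

```python
# ISO 6346 check-digit validation for shipping container numbers
# (e.g. "CSQU3054383"). Letter values skip multiples of 11.

LETTER_VALUES = {}
_v = 10
for _ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
    if _v % 11 == 0:        # 11, 22, 33 are skipped by the standard
        _v += 1
    LETTER_VALUES[_ch] = _v
    _v += 1

def iso6346_check_digit(code: str) -> int:
    """Expected check digit for the first ten characters of a container code."""
    total = sum(
        (LETTER_VALUES[ch] if ch.isalpha() else int(ch)) * (2 ** i)
        for i, ch in enumerate(code[:10].upper())
    )
    return (total % 11) % 10

def is_valid_container_number(code: str) -> bool:
    """True if an 11-character code's final digit matches its check digit."""
    return (
        len(code) == 11
        and code[-1].isdigit()
        and iso6346_check_digit(code) == int(code[-1])
    )
```

For example, an OCR engine that misreads the `Q` in `CSQU3054383` as an `O` produces a code whose check digit no longer matches, so the gate can request a re-scan instead of recording a phantom container.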

In checkpoint-based operations like this, AI trained to detect free space is a game changer. The vision AI uses existing cameras to identify reference points in images, determine the space utilization of a given container, and then report those findings to logistics managers. Combined with serial number tracking, this lets operators quickly pinpoint available capacity in their fleet.
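The reporting step after detection is straightforward to illustrate. In this hypothetical sketch, a vision model (not shown) has already classified each cell of a container's interior as occupied or free; the code merely rolls that up into per-container utilization figures a logistics manager could act on. All names here are assumptions for illustration.

```python
# Hypothetical post-processing for a free-space detector: a vision model
# has labeled each grid cell of a container image 1 (occupied) or 0 (free);
# summarize utilization and surface containers with spare capacity.

def summarize_utilization(occupancy_grids):
    """occupancy_grids: dict mapping container_id -> 2D list of 0/1 cells.

    Returns dict mapping container_id -> utilization fraction (0.0-1.0).
    """
    report = {}
    for container_id, grid in occupancy_grids.items():
        cells = [cell for row in grid for cell in row]
        report[container_id] = round(sum(cells) / len(cells), 3)
    return report

def containers_with_free_space(report, threshold=1.0):
    """Container IDs whose utilization is below the threshold, sorted."""
    return sorted(cid for cid, util in report.items() if util < threshold)
```

In practice the occupancy grid would come from segmenting camera frames against the container's known reference points; the summary logic stays the same regardless of how the grid is produced.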

Going a step further, computer vision systems can also be used to identify the volumetric properties of goods and assist in pallet dimensioning. As their names imply, these solutions measure the physical properties of goods and packages as they progress through the manufacturing and distribution chain.

Whereas these systems once required specialized scanners, modern AI can detect item length, width, height, and other physical characteristics with standard cameras to lower implementation costs and simplify integration. But the real logistical power here lies in combining this type of data with transport capacity information so the maximum merchandise can be packed into shipping containers.
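The camera-based measurement the article describes boils down to scaling pixel dimensions by a calibration factor. This is a simplified sketch under stated assumptions: an overhead and a side camera share a known mm-per-pixel scale, and a detector (not shown) has already supplied pixel bounding boxes for the pallet. Real dimensioning systems also correct for perspective and lens distortion, which this omits.

```python
# Illustrative pallet dimensioning: convert detected pixel bounding boxes
# to physical dimensions using a calibrated scale factor. The bounding
# boxes themselves would come from a vision model.

def pallet_dimensions_mm(top_box_px, side_box_px, mm_per_px):
    """top_box_px: (width, depth) in pixels from the overhead camera.
    side_box_px: (width, height) in pixels from the side camera.
    mm_per_px: calibrated scale shared by both (assumed rectified) views.
    Returns (width_mm, depth_mm, height_mm)."""
    width_mm = top_box_px[0] * mm_per_px
    depth_mm = top_box_px[1] * mm_per_px
    height_mm = side_box_px[1] * mm_per_px
    return width_mm, depth_mm, height_mm

def pallet_volume_m3(width_mm, depth_mm, height_mm):
    """Bounding-box volume in cubic meters (mm^3 -> m^3)."""
    return (width_mm * depth_mm * height_mm) / 1e9
```

Feeding these measurements into the dispatch planner's capacity data is what closes the loop the article describes: known item volumes packed against known free space.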

Logistics Operators’ Eye at the Edge

The above provides just two examples of how AI-enabled computer vision can maximize the efficiency of logistics operations. But there are many more use cases that leverage the technology, including dock occupancy monitoring, wrong-place detection, and automated robots that handle, load, and unload freight.

None of these applications are possible without edge computing that can execute advanced AI algorithms in real time. This has been a real challenge due to performance, power, and cost limitations in existing solutions. Avnet Embedded—a leader in embedded compute and software solutions—is making advanced edge AI a reality with its MSC C6C-TLU, based on 11th generation Intel® Core processors.

The MSC C6C-TLU is a COM Express Type 6 module designed to withstand the environmental rigors of deployment in transportation and other environments while also supporting the performance demands of edge AI use cases. These abilities are rooted in the onboard 11th generation Intel® Core i3, i5, or i7 processors, which contain two or four cores and either Intel® Iris® Xe or Intel® UHD Graphics with up to 96 execution units.

When paired with optimizations from the Intel® OpenVINO™ toolkit, the COM Express module is extremely efficient at crunching numbers in AI vision applications. However, this level of performance can be a detriment in edge systems because it implies high power consumption and excess heat generation that could damage electronic components.

#SupplyChain #DigitalTransformation could cut costs and balance inventory by only sending shipments once transport vehicles have reached 100% capacity. @Avnet via @insightdottech

Game-Changing Processor Platforms

Certain models of the host Intel® Core processors are designed to resolve these challenges.

“What is really a game changer in 11th gen Intel processors over previous generations is definitely the support for extended temperatures and 24/7 operating modes,” says Christian Engels, Product Marketing Manager at Avnet Embedded. “You can perform heavy duty applications on the CPUs for a long time, which lets you run these workloads in extreme environment conditions.”

Being part of the COM Express family of standards, the MSC C6C-TLU needs a companion carrier board that links the module to the larger computer vision system via application-specific I/O. Once built, this carrier board can support processor modules with the same interfaces for years to come.

Avnet Embedded is well-versed in designing and manufacturing carrier cards but can also integrate complete standards-based computer vision systems that give logistics managers their own intelligent eye at the edge.

AI Supply Chain Management Never Sleeps

The complexity of today’s global supply chains has made recovering from COVID-19 shutdowns an equally complex challenge that requires different solutions.

For instance, distributors are moving from just-in-time inventory models back to stockpiling merchandise as insurance against supply fluctuations. At the ports of Los Angeles and Long Beach, authorities are enlisting the expertise of logistics powerhouses like Walmart and Target to expand overnight operations until shipping container backlogs are cleared.

These more fluid, higher uptime logistics operations will require support from tools that are intelligent, reliable, and able to identify supply chain opportunities more quickly and efficiently than humans.

Lucky for us, AI-driven logistics never sleeps.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.