Hot AI Trends for 2021

A conversation with Ray Lo @OpenVINO

2020 was an eventful year—and AI played a major role. Whether guarding against overcrowding or helping factories ramp up mask production, AI truly showed its value.

So what’s next for AI and its cousins, deep learning (DL) and machine learning (ML)? We put that question to Ray Lo, an OpenVINO evangelist at Intel. Join us for a lively discussion of the state of the industry and the big trends ahead in 2021. We explore:

  • Why AI applications like natural language processing (NLP) will be hot in 2021
  • How developers can strengthen their skills in AI, ML, and DL
  • How to create ethical AI applications

Transcript

Ray Lo: I always find people are too ambitious about AI. That's a common pitfall. I come from an engineering background; we have to be realistic about exactly what this technology can do and what it's good at.

Kenton Williston: That was Ray Lo from Intel. And I'm your host, Kenton Williston, the Editor-in-Chief of insight.tech. Every episode I talk to a different expert about the latest ideas and trends that are pushing IoT innovation forward. Today's show is a look back at the ways AI changed in 2020, and a look forward to what's ahead in 2021. There's a lot to talk about, so let's get to it!

So, Ray, I just want to welcome you to the show. Could you tell me a little bit about who you are and what you do at Intel?

Ray Lo: Great. Yes. Hi, Kenton. My name is Raymond, and I'm an Intel software evangelist for OpenVINO. OpenVINO stands for Open Visual Inference and Neural network Optimization. It's a big name, but what it means is this: when you have an Intel CPU and you want to run a neural network as fast as possible, you use this toolkit called OpenVINO. And that's what I do; I've been spreading that news to many people.
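
Here's a minimal sketch of what that looks like in practice, assuming a model already converted to OpenVINO's IR format (the file names are placeholders; the calls follow the OpenVINO Python API as of the 2021 releases):

    import cv2
    import numpy as np
    from openvino.inference_engine import IECore

    # Load a network that was previously converted to OpenVINO IR.
    # "model.xml" and "model.bin" are placeholder file names.
    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Resize one image to the network's input shape (NCHW) and infer.
    input_name = next(iter(net.input_info))
    _, _, h, w = net.input_info[input_name].input_data.shape
    image = cv2.resize(cv2.imread("input.jpg"), (w, h))
    image = image.transpose((2, 0, 1))[np.newaxis, ...]
    result = exec_net.infer(inputs={input_name: image})
    print({name: out.shape for name, out in result.items()})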

Kenton Williston: Very cool. And how long have you been in this role?

Ray Lo: Pretty recent. I joined about… let's see, hold on, I'm doing my finger math… about four months ago. And I've been giving talks at Intel and all that ever since.

Kenton Williston: Well, one of the first things I wanted to ask you, given your background, is what exactly AI is, to put some context around that. We've been doing a lot of work on the insight.tech program around the OpenVINO platform and its applications in everything from machine vision to predictive analytics. So there's a pretty broad scope of stuff that people think about when they say AI. And of course there are related terms, deep learning and machine learning. I think oftentimes all these things get conflated, and it's a little confusing as to which thing is which. So, would you give us your primer on what in the world AI is, and how it differs from these other ideas?

Ray Lo: Sure. Maybe I'll give one line about my background first, for perspective. I did my computer science degree in Toronto, and then I did my PhD there as well, in computer engineering. My thinking about AI starts with what AI stands for: artificial intelligence, right? We're always thinking about ways of emulating or simulating a brain, making something that behaves like a human. Things like prediction, object recognition, and all that.

But what I always see confuse people is that there's a part called machine learning and there's a part called deep learning, and people think about those three categories in a mixed way. I think of AI as a big umbrella that covers many things. Within it you have machine learning, and within machine learning you have something called deep learning. It's one category, the neural networks, which I would say has become a lot more popular recently because computation power now allows us to do it. Back when I was starting school about 15 years ago, this kind of math might take a year before the training finished. Today we talk about weeks, maybe days. And if you're very smart about it, you can get some results in a couple of hours.

Kenton Williston: Yeah. It’s amazing how much progress has been made, which leads me to ask. This whole podcast I want to talk to you about what are the trends that you’ve been seeing in 2020. So, just kind of open-ended question for you. Beyond the amazing continuing increase in processing power, what do you think some of the biggest trends of the year were? Not only AI, but deep learning, machine learning, all the related areas.

Ray Lo: Right. In the last year you heard a lot of podcasting about the computer vision side, which is my background too. But I've started to see a trend beyond vision. We're seeing applications like NLP, natural language processing, mature a lot recently. For example, one trend I saw was something called BERT, a new, I would say, framework [inaudible] people created for doing natural language processing. And the results are astonishing. What they can do is optimize and fine-tune it for applications or tasks like SQuAD, the Stanford Question Answering Dataset. It can literally answer questions better than humans. If I took the SAT today, I don't think I could win. It's things like that.
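
To make that concrete, here's a minimal sketch of SQuAD-style question answering with a BERT-family model, using the Hugging Face transformers library (the checkpoint name is one public SQuAD fine-tune, chosen purely for illustration):

    from transformers import pipeline

    # A distilled BERT variant fine-tuned on SQuAD; any question-answering
    # checkpoint would work the same way.
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    context = ("OpenVINO is a toolkit from Intel for optimizing and "
               "deploying neural network inference on CPUs, GPUs, and VPUs.")
    result = qa(question="Who makes OpenVINO?", context=context)
    print(result["answer"], result["score"])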

So it's kind of like, okay, there are certain tasks that machines today can do so much faster and so much better than humans. One trend I saw is in call centers. Especially this year, such a crazy year, we've seen a lot of disasters, bad things happening. So one trend is that call centers are now automated a lot better than before. They have machine learning behind them to answer the call, translate what you said, direct you into the right system, or sometimes even answer questions for you.

Kenton Williston: Yeah. For sure. Happily, I haven't been in an emergency situation where I needed a quick response from a call center, but even in my own daily experience, I've got an iPhone and an Apple Watch and all the rest. When Siri first came out, it was really a joke. You could ask it to set a timer, maybe, and maybe it would get that right, but it was pretty bad. And now it has gotten to be very perceptive.

Like the other day, I happened to be reading my daughter a book that talked about the design called the fleur-de-lis. I tried to make a drawing to show her what it was, and I was like, “Well, this looks terrible. Let me just see if Siri can help me out.” So I raised my wrist to my mouth and asked Siri to show me a fleur-de-lis, and sure enough, there was an image of a fleur-de-lis on my watch. It's gotten very good at answering broad questions. Same for Alexa and all the rest; they're much, much better than they used to be, even just, say, a year ago.

Ray Lo: Exactly. I even forget how to set an alarm manually sometimes; I have to rely on Google or Alexa. I just tell it to read a story instead of digging into the menus and all that. So a lot of tasks like the ones we talked about have become a lot more natural for humans. And behind the scenes, you can see all the data centers crunching this data for us and doing all the heavy lifting. That's what I find really cool about this year.

Kenton Williston: Yeah. For sure. You mentioned the difficulty we've had this year, and everything that's happened around the pandemic has really dominated not just the tech industry, but what's happening in the world at large. As difficult as the situation has been, though, there's also a lot to be excited about in terms of how all these smarter technologies helped the world respond to COVID.

Ray Lo: That's correct. I actually did a study… I was at Google before Intel, so I was looking at some of the case studies they did on how they scaled their call centers. That was really lifesaving, because with all those emergencies they were taking about millions of calls a day. Think about where you'd get millions of people, right? Especially with callers at that level of stress, who want to get a simple answer. I really think it's the future. We always worry about the jobs, but those are jobs we couldn't even scale humans to, and sometimes they're essential for us. So I find it something very new to us.

Kenton Williston: Yeah. For sure. I think you're making a really good point there about the longstanding concern that the robots are going to come and take all of our jobs, and there is some merit to that. Certainly automation has changed the job landscape, broadly speaking. But I think AI is really poised to do jobs that just weren't possible before, and also to free people up from the really ugly, bad jobs to do things that are more pleasant. One example, which again touches on the pandemic situation we've been in, is the many kinds of machine vision applications that do things like scan crowds for fevers or…

One nice one I was just reading about is a simple digital display paired with a vision system to tell you, “Hey, there are X number of people in the store.” It's a really non-invasive, non-confrontational way of indicating whether you should feel safe entering the store, and whether entering meets the regulations. And that's a job that would be pretty unpleasant for a human being to do.

Ray Lo: Right. I work with partners a lot at Intel. For example, I talk with ADLINK. They've released tools for logistics warehouses. It's Christmastime, right? We're getting a lot of gifts: millions and millions, maybe billions, of packages being sent around the world. And they scan them and double-check them for you before you receive them. Just reducing the error rate by 0.1%, maybe just 0.1, is such a huge deal. You can imagine all the gas you waste, all the energy you waste, to deliver something wrong. Having those checks in place is such a great thing that's happening in the industry.

And the same goes for inspections, right? Safety. For example, a system will check a tool for you, or see if anything is defective, like on a car, for example, inspecting the wheels. Those are lifesaving to me. And with that type of job, even if you give it to a human, you can't afford that 0.1% error.

Kenton Williston: Yeah. Absolutely. That goes back to the point you were making about how, in many applications, machines are doing much better than a human could ever hope to do. It's not even a question of whether you're replacing a human; it's something a human simply could not do.

I'm curious, though. We've talked about a couple of key areas. One of the key areas in 2020, for sure, was machine vision. There was a lot going on there, whether in an industrial setting like you were describing, inspecting packages and parts and whatnot, or in the more public sphere, doing things like telling whether people are wearing masks. And of course you talked a little bit about language processing, which I think has also been really, really important. What other areas have you seen some important movement in?

Ray Lo: For AI, right? I want to build some suspense, because I see a lot of things happening in the industry. I want to talk about AR as well, which is coming up, and something I personally worked on. Before Intel, I was the CTO of a company building augmented reality headsets. More recently, you may have seen various companies like Facebook releasing augmented reality headsets. Behind the scenes, we're starting to realize a lot of machine learning will go into a headset like that: recognizing places, landmarks. A lot of decisions will be made for you, and it's not going to be a human behind the scenes saying, “Okay. Trigger this. Show you this.” So they're starting to look into a lot of those efforts, from what I've seen. Quite amazing.

For example, six or seven years ago, when I did SLAM tracking… once you have the landmark, I always had to ask, “Now what?”

Kenton Williston: Right. Now what? Right.

Ray Lo: “I have a landmark. Now what?” Right? So often the layer after that is: how do you take this data and generalize it, or create methodologies, so that people can utilize it? One way I've seen is, okay, now you have a scene. You recognize the chair. You recognize the table. And you turn that into scene information you can use for content. I had an application at one point that generated a workspace: when it saw a desk and a chair, it automatically generated a virtual screen. And it recognized everything, the whole setup, as if it were in the real world. I find that super cool, because it's like a sci-fi movie, and I worked on that research for many years. It's fascinating.

Kenton Williston: Absolutely. And it strikes me that that's also a really great example of the different kinds of machine intelligence, because there are, I'm sure, elements of deep learning and machine learning in recognizing the scene, and then some AI to decide, “Well, what should I do now that I recognize the scene?” I think it really illustrates how all these different concepts play together.

Ray Lo: Mm-hmm. Yeah. If you ask me, I never say it's one application; I see a set of tools that work together, turning into a new experience for humans. It's like today, when you go shopping, right? You often pull up your phone to look for the barcode, look for the discount, et cetera, et cetera. But think about automating the entire process. You just walk into the store, pick up the best thing, and the coupon is automatically applied. You just focus on the shopping instead of trying to go through that painful experience. That's what we've been seeing in retail. A lot of automation is happening, and behind the scenes it's real machine learning driving it, some of it of course the tracking and people-helping we've talked about.

Kenton Williston: Absolutely. I did a podcast series recently talking about retail, and there are so many interesting examples there. One that really made me laugh was an application where they used RFID to analyze theft that was happening in the store. They discovered that one of their biggest sources of loss was actually people taking products from one floor, going up to another floor, and saying, “Oh, I need to return this. And I don't happen to have the receipt,” et cetera. So there was theft happening without anything actually leaving the facility. Lots of interesting applications, for sure.

And that makes me think: with all these concepts becoming so prevalent across pretty much every industry, would you say that getting skilled in machine learning and AI is becoming a real requirement for developers?

Ray Lo: I will say… okay. I feel like using machine learning and AI today is like when I was doing math on top of calculus and linear algebra back then. It's so fundamental that if you don't use those tools, you may be missing out on a lot of potential applications. Of course, you don't have to use it for everything. If you just want to print “Hello World” on the screen, you don't have to crack open your machine learning textbook tonight. That's not what it's designed for; you're just doing something simple, right? But I see a lot of momentum there.

I did some research on machine learning trends; I think it was published by Stanford. Over the last 10 years, the growth has been close to exponential. The number of conference attendees doubles every year, and so do the publications in Europe, China, and America, and the patents filed related to machine learning and deep learning. It's just like back when we talked about the internet, and it's pretty much happening again. It's like a phone without a camera or internet: it just doesn't work. That's how I feel now. If you try to get into this field today without some fundamentals, it may block your creativity.

Kenton Williston: Yeah. That makes sense to me. But on the other side, I think when someone who's new to this field starts looking at diagrams of convolutional neural networks and things like that, it can be a little overwhelming.

Ray Lo: Hmm. That's exactly why we have OpenVINO. I'm not trying to sell it, but… well, that's why we have OpenVINO: to encapsulate a lot of the optimization steps, because I don't think you want to get a whole PhD on that problem. And it's really hard. Just getting the quantization right is very difficult. That's why at Intel, on OpenVINO, we have a lot of engineers focused just on those big problems: how to get the performance, exactly as I talked about, or just getting the tools together so that you don't have to learn everything. You do, of course, have to know fundamentally what the math is and what it does.

But I'm talking about the deployment perspective, the development perspective, not the engineering perspective. I always think of development as copy-and-paste code: make something quick and easy first, and prove your concept, like a prototype. Today we've had a couple of hackathons where, in a week, people built something I spent six years on in my PhD. I was like, “Oh, that's not fair.” But that's the reality, right? That's what's happening.

Kenton Williston: Well, that seems to me, broadly speaking, to be how AI and deep learning platforms are evolving in general. Like you said, even just a couple of years ago, developing some of these applications would have been a huge amount of work. And now there are so many platforms that offer pre-packaged models, or even things like the SLAM you were talking about: you can get a little developer kit that has a mobile robot with ROS and SLAM already built in. It gives you a tremendously advanced foundation to start from. You still need to do the work, of course, to implement your specific application, but you don't have to get bogged down in all the fundamentals, as it were.

Ray Lo: Exactly. And I believe that too; SLAM took me millions of dollars to build. It was no joke. On my journey I had to find a professor, then set up a collaboration, then sign a contract. From the contract I got source code, which I had to maintain, and then debug. We went back and forth for half a year. That was the reality I was facing. But today you download a package that's been tested and calibrated, hardware and software all working together. That's the new reality we're facing.

Kenton Williston: Yeah. Exactly. Much better.

Ray Lo: Much better. I’m so happy.

Kenton Williston: So, if you're a developer looking to get into this field, what would you suggest as a way to get started?

Ray Lo: I would definitely recommend people start by looking at existing tools, because we've spent a lot of time and effort on them. Not only OpenVINO, but TensorFlow, all the open tools on the market. Get familiar with the framework, and then with an understanding of the mathematics. I still think you have to go for the math. Even if you don't have the math background, there are a lot of good lessons on Coursera, and even OpenVINO has open courses you can take to get that understanding. Once you have that understanding, you see the possibilities.

Then you get into the nitty-gritty details. We have a lot of demo code you can try. Try the demos. I love demos because they open up the imagination, right? When I work with a lot of developers, surprisingly many of them students from India, they come up with new ideas I never even thought about. I ask, “How did you think of those?” “Oh, I remember I tried this demo, and this demo, and this demo. If I combine all these demos together, I get a new demo.” I was like, “Wow. That reminds me of Legos.” Right?

Kenton Williston: Yeah.

Ray Lo: Yeah. So having that understanding, having that flexibility, having things working in a modular way and putting them together: that's the new trend. That's where I think a lot of people should focus at the beginning. Don't get bogged down in just one technical detail now; instead, think bigger. See if you can solve a world problem. And once people understand it and love it, and you get a team together, the resources will come to you, because now you're proving your point. I think it's much better than before, when you'd be in research for four years on one particular small problem, and that's it. That's how I see it differently.

Kenton Williston: Yeah. One question that leads me to: you're painting a picture here of almost a blue-sky environment, where you can really be creative and put all kinds of new ideas together in ways people haven't thought of. But obviously anything you do has to fit within budgetary constraints, and not just dollars: you've got power constraints, or some kind of rugged environment where you might have different thermal constraints, or whatever. So, where do you see the state of the art in hardware now? I'm wondering in particular if there are advances the broad developer audience might not know about that would raise the ceiling on what's possible inside these constraints.

Ray Lo: That's a very fundamental problem when I work with my partners, right? It comes up with every use case. I would say now the sky, or even space, is the limit, because we had one success story where someone put a Movidius VPU on a satellite. That has much, much harsher requirements than anything else, because beyond thermal, they have to think about radiation; they're going up into space. When we're building products today, we have a lot more flexibility. Back then, you were constrained to an extremely power-hungry GPU, or maybe a CPU that at the time wasn't powerful enough and wasn't optimized for the code. Now it's getting better and better. Or you'd be stuck on something extremely low power and low performance, like a Raspberry Pi at one point.

But today we have a lot of hardware accelerator platforms available. Just recently, OpenCV released a project called OAK, O-A-K. Now you have a camera with a built-in Intel hardware accelerator processor in it, and it changes the landscape of how we think about processing. We always think about processing as a device: a processor, maybe an extra processor like a GPU, maybe something on top, and then a cable that connects everything. With these newer approaches, everything is in one chip. You have the Intel chip next to the image processor, and you may even have a slightly underpowered CPU there just to do some easy crunching. And you can connect that to a host to do the even heavier lifting. That's how I see the hardware converging architecturally. Back then it was duct tape; I call it duct tape. You just had something on a USB cable.

Kenton Williston: Yes.

Ray Lo: A USB 3.0 cable. It was a horrible thing to me. The latency was crazy hard, and you had so many issues, like powering. Today you see a lot more condensed into one single element. I see that as one of the next things.
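
For the curious, here's a rough sketch of what driving an OAK-style camera from a host looks like with the depthai Python API (the node and stream names follow the 2.x API; details vary by version):

    import depthai as dai

    # Build a pipeline that runs on the camera's onboard accelerator.
    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(300, 300)
    cam.setInterleaved(False)

    # Stream preview frames back to the host over the USB link.
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("rgb")
    cam.preview.link(xout.input)

    with dai.Device(pipeline) as device:
        queue = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
        frame = queue.get().getCvFrame()  # numpy array, OpenCV-ready
        print(frame.shape)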

Kenton Williston: Yeah. And I think it's fair to say that basically any hardware you look at these days is starting to acquire some AI capabilities. Like the most recently released Intel Atom processors, which you wouldn't really think of as super high-performance processors: they have some AI acceleration built into them. So even at that level, there's a lot you can do.

Ray Lo: Exactly. That’s the one that went on the satellite.

Kenton Williston: Ah okay.

Ray Lo: There we go. You picked the right one. Given all these choices and platforms, now even people on a space program are able to think, “Okay. Now I have a bit more power available. What can I do?” Because all they have is a solar panel. But now they can do so much more, because with that project the problem is not just power, right? There's bandwidth. It takes so much time to transmit one image, so every image is so important. But they can crunch a couple of images, because they have enough power from the sun. So they actually process each image to make sure it's not a garbage image, that it's a nice satellite image, right? When you take a picture of a cloud, what do you see? Cloud, right? You want to see houses. You want to see landscape.

Kenton Williston: Yeah.

Ray Lo: Now, because of that processing, they've saved… I don't remember the exact number, but it changed the whole dynamic of the system's efficiency. I think that's the innovation people are thinking about now: just readjusting the problem statement.

Kenton Williston: Yeah. Exactly. I'm glad you said that, because that was exactly what I was thinking. It's not just, “Oh, you can do all these new things”; it's that you can come at the problem from a totally different angle than you would have before. So it's good to rethink your architecture. A very simplistic example would be the way all this machine learning and deep learning has very often been split up: train in a big, power-hungry data center or cloud, and then deploy the inferencing at the edge on something really, really lightweight. Put the right processing, the right smarts, in the right place.

And to your point, there are all the other things you can do too. What can you do to rethink where the data flows? Maybe you do processing in a location that previously would have just been a transmitter of data, et cetera.

Ray Lo: Exactly. And people still get confused between training and deployment. They always think AI must be extremely power hungry. Yes, in the training phase, because you're trying to teach the neural network. But once you have the network ready, the deployment is a different problem than the training, and that's where I think we have to really think twice. Of course, there are types of machine learning problems that may require real-time training. But for most detection tasks, like what we've talked about, detecting the cloud, once you train it, the neural network can actually detect those very quickly, and we can deploy it very differently.
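
A minimal sketch of that split, assuming a PyTorch model trained elsewhere: the expensive work happens once at training time, and what ships to the edge is a frozen network exported to a portable format that a tool like OpenVINO can then optimize.

    import torch
    import torchvision

    # Training happened once, on big hardware. For deployment, freeze
    # the trained network and export it to ONNX; edge toolchains take
    # it from there.
    model = torchvision.models.mobilenet_v2(pretrained=True)
    model.eval()

    dummy_input = torch.randn(1, 3, 224, 224)  # one example input
    torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx",
                      input_names=["image"], output_names=["scores"])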

Kenton Williston: I do think it's useful to explore where the biggest challenges lie in AI and machine learning. What are some of the common pitfalls, and what can developers do to avoid them?

Ray Lo: Yeah. I always find people are too ambitious about AI. That's a common pitfall. I come from an engineering background; we have to be realistic about exactly what this technology can do and what it's good at. I ran a challenge on image classification and gave it to many candidates. I said, “Okay. Run this code. Put in your own image. And see what it can do.” Even at an amazing 90-something percent, or 80%, accuracy, that 20% of errors is hilarious. So if you think about deploying a tool for a use case, you have to really understand the use case and align it with your expectations on accuracy. Is 80% acceptable? A lot of times it's a no, right? The technology is amazing, but it's a no. A big no. And people have to really learn that early, before they deploy.

And why it's funny: we did that challenge, and people put in a Tesla. It's really funny. The Cybertruck is not part of the database, so it came up with the answer that it's a jeep plus a beach wagon. I was like, “That's correct, but I don't think the marketing team will appreciate that.” So think about things like that. You have to really understand what you're doing and make sure it aligns with your use case.
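
For anyone who wants to try Ray's challenge themselves, here's a minimal sketch with a pretrained ImageNet classifier (the image path is a placeholder, and mapping class indices to human-readable names requires the ImageNet labels file):

    import torch
    import torchvision
    from torchvision import transforms
    from PIL import Image

    # Standard ImageNet preprocessing for a pretrained classifier.
    model = torchvision.models.resnet50(pretrained=True)
    model.eval()
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = preprocess(Image.open("your_image.jpg")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]

    # Top five guesses: for anything outside the 1,000 ImageNet classes
    # (like a Cybertruck), expect confident nonsense.
    top = torch.topk(probs, 5)
    for score, idx in zip(top.values, top.indices):
        print(f"class {idx.item()}: {score.item():.1%}")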

Kenton Williston: Yeah. Absolutely. I've seen some pretty funny examples of AI trying to classify whether what it was looking at was a muffin or a dog. They look so similar.

Ray Lo: Exactly.

Kenton Williston: And that’s pretty funny. But of course, if you’re doing something like…

Ray Lo: Medical.

Kenton Williston: …when you're predicting when a very expensive machine is going to fail, or anything that has those kinds of ethical implications, all of a sudden it's much less funny. You need to be very, very thoughtful. And I think there were some big lessons learned this year about being ethical with AI, conversations that really needed to happen.

Ray Lo: At Intel, we actually formed a group just on that topic. I think it's extremely important to understand what you're doing. Does it hurt people? Does it do any damage? Is it ethical, right? That term is such an important thing, because it's like having great power… what's it called? When you do sudo on Linux, right? With great power comes great responsibility. Yes, it sounds a little bit old, but it's happening. So that's something I feel we all have to look at very carefully.

Especially with medical. Think about this: you're doing diagnosis, right? Is 1% error good enough? Is it ethical to say, “I can accept a 1% error”? Is it going to do something harmful to people? Those kinds of applications have to go through a lot of rigid testing and approval, making sure things are right.

Kenton Williston: For sure. For sure. Going back to an earlier point you made, there are some things machines can now do far better than humans, but there are definitely times when you really need a human in the loop.

Ray Lo: Mm-hmm.

Kenton Williston: And that's true even in the training part. I just mentioned, for example, monitoring expensive machinery. It would be unwise for a developer who's not familiar with that equipment to think they could just go out, collect some data, and interpret it. You really need the human being who's been operating that machine to help you understand what the data really means.

Ray Lo: Mm-hmm. This is very important, because we've had data biases create problems all the way down the line: systems becoming racist, becoming manipulative. Bad things can happen to a system when it's not carefully reviewed and monitored.

That's one thing I think we've got to be really careful about. And then, as long as everyone has a good heart, I think it'll be okay.

Kenton Williston: Right. Absolutely. So, zooming back out to the big picture, I want to recap what we've seen happen in 2020. So far we've talked about the advancement of platforms, on the hardware side and on the software development side; the ways people are coming at problems differently; and how important this has all been to the pandemic response. Any other big-picture trends you're keeping your eye on?

Ray Lo: It's open source. I think that's one thing we always overlook: the whole OpenVINO effort, all of the TensorFlow effort, all the AI efforts that are open source. That's something that was not very common back then with a lot of the corporations I worked with in the past. Oftentimes you'd have a one-off solution you had to pay a license fee for, or you didn't get to see anything at all, and there was no way to adopt it and change it with the rest of the community.

So, open source and community. That's why I point out that the “Open” in OpenVINO means open. I find it very empowering, because I've seen a lot of use cases done by the community that I'd never seen before. For example, OpenCV is our partner, and they have their own open community. Within that community they take up the tools, and then they create new tools. And that's what's happening over the next two to three years: those open source tools will mature to the point that they become the new standard. Open standards and open source for AI is the next big thing for me.

Kenton Williston: So, what do you think that will enable? Is it just a matter of increasing the ability to come up with these creative ideas and put things together in new ways? Or do you foresee something more beyond that?

Ray Lo: I see it as two or three phases, right? It's like Linux. In the beginning it was, “Oh, it's a small community.” But eventually it became the standard for every server we're running today. It became a thing, right? The gold standard. And I see that happening in many of these areas. It will just change the way we approach things. Because of that openness, advancement now happens at exponential speed, because all those blockers are gone. That's why I care about it and am so interested in it. It's literally viral: one to two, two to three, two to five. So much faster than before.

Kenton Williston: Yeah. I agree with that, because again, think about how you want to be focusing on innovative ways to tackle a problem, not on the basics of the technology. More and more gets contributed to this community. As a very simplistic example, look at all the pre-trained models that are now out there. Boy, that gives you so much faster a start, and makes it so much easier to focus on whatever is unique about what you're doing.

Ray Lo: That's correct. The pre-trained models especially, I think, are a big deal, because not everyone has a powerful GPU and can train everything from scratch. A lot of people are interested in the outcome, the use cases. For example, with BERT, I don't have the database, I don't have all of that, but I can turn it into a cooking-recipe demo, which I built. Now, instead of reading the recipe, you can ask questions about it, like how many eggs you need, things like that. And you can run that in real time. That's very different, because before, when I thought about that problem, I thought, “Oh dear. I'd get stuck collecting all the recipes in the world. I'd have to think about a language model. I'd have to think about who to hire. And I don't even have a dollar in my bank yet.” So that's a huge difference.

Kenton Williston: So, just to make sure I haven't missed anything important: could you tell me what BERT is? That's B-E-R-T.

Ray Lo: It's a language model published by Google. It stands for Bidirectional Encoder Representations from Transformers. Back when we did machine learning, there were many ways to do this, and this one, published by Google, has one feature we all love called fine-tuning. When you do natural language processing to understand what language means and all that, the effort is often limited to one task the model can do: find the noun, find the verb, things like that. But this one you can fine-tune to do something specific, like the question answering I talked about, without retraining the entire model.

You can think of it as a new processing model that Google came up with together with a lot of researchers. And it's really popular now, I would say, because it's a new gold standard. Because of it, when you do a Google search and all that, you get much better accuracy. So if you've wondered, “Why is this so good now?”: behind the scenes, it's one of the models they use.
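
In code terms, “fine-tune without retraining” means reusing the pretrained encoder and training only a small task head on top. A sketch with the transformers library (the checkpoint name is illustrative):

    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    # The BERT encoder weights are reused as-is; only the small
    # question-answering head on top starts untrained, and is then
    # fitted briefly on task data such as SQuAD.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")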

Kenton Williston: Got it. Makes sense. So, before we go, I want to get your thoughts on the coming year. This episode will go live in January, so we'll take a little risk here and see whether, by the time folks are listening, any of our predictions are already coming to pass. What are some of the main trends you foresee happening in this domain in 2021?

Ray Lo: Mm-hmm. To summarize: I think NLP will be a big thing in the next couple of years. It will change the way we interact with devices. We saw it in the early days, but now is the time; you see it when you call a call center. Things are happening.

Deployment of IoT will kick in very soon. You'll see it in all the warehouses, all that automation. You'll see machine learning in every bit of our industry.

And last but not least, I think the growing trend of augmented reality and virtual reality. We've talked about it a lot; it seemed like hype back then, but today, when I look at the maturity of the technology, the integration of AI with AR and VR will happen, because I'm always craving good content for virtual reality and augmented reality headsets. Once we put in those elements we talked about, recognizing things, creating relevant content about your life and your surroundings, it'll be a killer app for many of the things we're doing today.

Kenton Williston: Nice. Well, with that, let me just say thank you so much for joining us today. Really enjoyed this conversation.

Ray Lo: Thank you. I've enjoyed it as well.

Kenton Williston: And thanks to our listeners for joining us. If you enjoyed this podcast, check out insight.tech for more innovative IoT ideas. This has been the IoT Chat podcast. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Q&A: Keys to SI Success in 2021

The challenges of 2020 have led to huge changes in the way we work. But systems integrators have a lot to look forward to in 2021. The tech industry's rapid response to the global pandemic means that SIs can now offer their customers a wide array of return-to-work solutions.

To learn more about the opportunities, we spoke with Tom Digsby of global distributor Tech Data, which has created an award-winning tool to get SI solutions to market. He revealed his tips for success in the year ahead.

(To listen to the full interview, check out our podcast on secrets for SI success.)

Looking Towards a Post-COVID World

Kenton Williston: Tom, what’s your role at Tech Data?

Tom Digsby: I manage a group of vertical consultants and technical consultants. What our team does is help our partners (or resellers) understand IoT in a vertical context. Our focus is on healthcare, smart cities, industrial manufacturing, and also retail and commercial.

Tech Data is a solutions aggregator, and we have a lot of value that’s more than just ordering from us. We help partners take a solution to market. We have pre-packaged solutions that you can take to the market today, and we also have a vast ecosystem of partners.

Kenton Williston: During the pandemic, systems integrators, regardless of what market they’re serving, are being asked to very rapidly deliver all kinds of new solutions—fever checking, contact tracing, mask compliance. How has this been impacting the business that systems integrators have, and where do you see things trending in 2021?

Tom Digsby: We have seen end users place their purchasing projects on hold because of the pandemic, to preserve capital they may need. The systems integrators have really had to shift their focus, and we've helped do some education around COVID-19 and the return to work. A lot of buildings are empty now, and there have to be some safe measures put in place, and we have some solutions. We have about 20 COVID return-to-work solutions.

I’ll give you a few examples. Temperature pre-screening, right? Someone comes into the building, you can grab their temperature and make sure they’re good. Telehealth: having a virtual conference with your doctor. Air quality monitoring, social distancing, alerting, digital signage. Those kinds of solutions make people feel safe. And when people feel safe, they can return to the normalcy that we knew before the pandemic.

We had a partner approach us and say, “We need to look at the office as kind of like a hotel for desks. I want to make sure that we’re cleaning the desks. I want to make sure that there’s not more than six people in this one area, and there’s not more than X number of people on the floor.” That’s an example of returning to work, and some of the things that can be put into place technology-wise.

The Changing Role of the Systems Integrator

Kenton Williston: What about the role that systems integrators play in a solutions landscape? Do you see the ecosystem evolving in any way, either because of the response to the pandemic or for other reasons?

Tom Digsby: We at Tech Data take our solutions aggregator role very seriously. Our expertise is vetting the vendors: understanding the solutions that can be aggregated and brought to market. As we all know, no single vendor, OEM, or partner can deliver everything that one end user needs, right? We have an extensive ecosystem of partnerships with a lot of different types of skilled partners.

When you're looking at the solution, it has to have a business outcome. If the solution doesn't have a business outcome, it's a science project, and no one's buying a science project. It has to have the business outcome, and then we help you drive the business value through the analytics side of the house.

Kenton Williston: You’ve just used the word “aggregator.” How does that differ from a distributor?

Tom Digsby: So, a distributor has relationships with a lot of different vendors, right? And their primary role is to buy a product and make a little bit of margin, and ship the product. We have a solutions specialty practice. How do we educate our partners so that they can sell solutions, rather than just ordering a point product?

A lot of times when we get a call from a partner, they’ll say, “I need 16 tablets or 160 tablets.” And you’ll say, “What are you going to use that for?” “Oh, we have a blah, blah, blah, solution that we’re going to take to market.” And you start digging into it. And it becomes part of a bill of materials that someone’s needing to fulfill a technology need—a business-outcome need.

So, we dug a little bit more, and it turned out they wanted the tablets to access information from the manufacturing floor. We dug a little more still and discovered, “Hey, you need some sensors; you need some more information.”

We basically cobbled together a solution for them as part of the business outcome: “Oh, you need to centralize this information. You need to be able to deliver it in this way. And you need to be able to see the data on those 260 screens that you just wanted to order.”

They were so appreciative, and that's what spawned the Practice Builder. So, our value is teaching our partners how to solution-sell. What is the business value for the end user? Not just buying product and shipping it.

We have a business tool that we use, and what that simulator does is capture the pricing, the cost of goods, your additive services, your third-party integration, and your OEM prices from all the different vendors you're aggregating. It's one place to look at the whole picture: “What does this cost, and how much margin can I make on it?” The tool and the process work together, hand in hand.

Kenton Williston: It sounds like a big part of what you’re doing is bringing to bear technologies and expertise from many different sources. So a systems integrator can focus on their own specialty, but be able to leverage best-in-class solutions for all the areas where they’re not experts, and don’t want to be experts.

Tom Digsby: I think that’s a great summary. Our partners are very appreciative of the value, and all we ask of them in exchange is, “Hey, we’re going to teach you this methodology, and all we ask you to do is source the equipment from us—the software, the hardware, the things that are needed to put the solutions together.”

Customizing Solutions for Systems Integrators

Kenton Williston: How do you work with systems integrators to tailor these technologies for their end customers?

Tom Digsby: Let me start out by going through a little bit of the Solutions Factory process. As we bring a solution—what we think is a great solution package—together, we bring it through our Solutions Factory process, and that’s where we vet the vertical industry, the aggregation of the technology. We make sure that the business outcome is there. Has it been deployed? What’s the ROI?

Kenton Williston: Do you also help systems integrators find each other, and other kinds of service providers to fill in wherever they don’t have the right expertise?

Tom Digsby: Oh, absolutely. When I talk to people about our different types of partners, or the skills of different partners, I often draw three circles: one on the left, one in the middle, and one on the right.

The one on the left I talk about as implementation and assessments. So, if you need to go out and assess an environment for where should the cameras be placed? How many cameras need to be placed? How far apart are the cameras? How are you going to aggregate that camera data? How many gateways do you need? Do you need switches? What kind of equipment is in place today? They may have a vendor preference that we need to take into consideration when we’re looking at all that.

Then the middle circle is the resellers who say, “Hey, I have an opportunity. I’m really good at creating demand. I can get face-to-face with a customer, and I need things to sell.” Those are the folks who are really good at selling and identifying opportunities, and then matching up the technology with what it is the client needs.

And then on the right-hand side, the third circle, is all about that business outcome. What is it that we need to capture? How is it that we need to capture it? It could be dashboards. It could be video feeds. It could be learning from the video itself and doing some AI interpretation of it. It could be machine learning.

Then we can cross-match. So, if an organization is looking for a skill set in any one of those three that they don’t have, we have a vast ecosystem and contracts with the partners that can deliver those kinds of services. It’s really just a matter of a little bit of speed dating, and introducing them.

Accelerating Digital Transformation

Kenton Williston: How do you see the concept of digital transformation factoring in? There's been a lot of talk about the pandemic accelerating digital transformation.

Tom Digsby: Digital transformation is a multi-step process. When you're looking at improving the ability to talk to the equipment, learn from the equipment, or get the data from the equipment, and then autonomously monitor plant efficiencies in a manufacturing environment, for example: once you have that, all kinds of things open up.

When you have that base level of automation you can gain efficiencies, but, more importantly, you can also create revenue growth. Meaning, if you have certain machine data and you’ve gathered it over time, now that you’ve transformed your environment you can actually monetize some of that data and put it into data sets. And you can actually offer that as a different revenue stream for the same kind of industry that the partner, the end user, is in.

Kenton Williston: What do you see as the key to succeeding in this environment?

Tom Digsby: One of the things we really home in on is what we call, “What is your killer feature? Why would I buy it from you versus partner X down the street?” So, having that differentiation: if you’ve got 16 years of manufacturing experience, people want to know that. And we capture that, and we hone it even to a finer point in the Practice Builder. The Practice Builder takes out the guesswork.

What we’re doing is looking at the repeatable solutions, because no one wants a one-off solution. You want to be able as a reseller or a systems integrator to say, “Hey, I could sell at least 80% of this over and over and over, right?” That’s what we call a repeatable solution.

The Future of System Integrators

Kenton Williston: What is Tech Data doing to continue improving its value proposition?

Tom Digsby: We’re always looking at our role as an IoT-solutions aggregator by gaining insights from vendors like Intel and the suppliers that we buy from. We look to strengthen our knowledge. We were having a knowledge transfer the other day about Edge processing and what the software from Intel looks like. And OpenVINO was one of our conversations.

We’re working with Intel to make sure that we’re identifying the solutions, and we’re mapping that with the problems and the business outcomes from the catalog of IoT solutions that we have, so that they can leverage the technology and our expertise and can really go to market. We support our partners in that way, and they appreciate our value.

I think if you bring us a solution, we can work with you. Just last week we had a partner bring us a solution that revolved around an SAP environment. And I was like, “Oh yeah, we can absolutely apply the same kind of methodology and the same Practice Builder.”

 If you have a solution that you want to bring to market, and it has distinct business value, and someone will actually buy it, and you’ve implemented it, or need to take it to market in a repeatable fashion, we’ll work with you.

Q&A: Elkhart Lake and Tiger Lake Revealed

Industrial environments demand tech that is fast, secure, and resilient. The Intel Atom® x6000E series and 11th Gen Intel® Core processors—formerly known as Elkhart Lake and Tiger Lake, respectively—were designed specifically to hit this trifecta.

To learn all about the new chips, we spoke with Christian Eder from Congatec, a leading supplier of embedded computer modules. He explained the importance of capturing data when it occurs, the beauty of hardware consolidation, and the many target applications that could benefit from these new CPUs.

(To listen to the full interview, check out our podcast on Elkhart Lake and Tiger Lake.)

New CPUs a Perfect Fit for IoT

Kenton Williston: Can you tell me a little bit about what you do, Christian?

Christian Eder: I’m the Director of Marketing for Congatec, a company which is dedicated to embedded computer technologies, mainly on computer modules.

Kenton Williston: From your point of view, what’s so exciting about the new Elkhart Lake series of processors? What makes them different from what we’ve seen before?

Christian Eder: These processors are a perfect fit for computer modules, and also for the single-board computers we make. With the smaller 10-nanometer structures, compute capability has increased quite a bit, while power consumption stays quite low and keeps shrinking. So we get more performance in the same power envelope.

We have four CPU cores, which is good for running things in parallel, multi-threaded things. And especially the graphics, with up to 32 GPU cores. So it's going to be a significant help toward AI as well, because GPUs can be used for more than just graphics.

And for industrial use, maybe the biggest step here is the real-time capability: Time Coordinated Computing. TCC is implemented in these CPUs, which is really ideal for rugged, industrial motion-control hardware.

And all of this together makes a great platform when it comes to real-time operating systems. We can install multiple operating systems, even real-time operating systems, and run them in parallel, bringing multiple platforms together on one small, low-power Atom platform. All of this becomes possible.

Kenton Williston: The list of features on this thing: it's clearly something meant for IoT applications. Would you agree with that? This is a pretty different approach for Intel, in terms of how custom-tailored it is for embedded.

Christian Eder: Absolutely. And the whole feature set is really perfect for industrial users. What always tops the list, when you think about industrial, is the extended temperature range. The temperature ranges are rated for industrial use, for 24/7 operation. That's the big difference, even if you don't see it in the first few features. Industrial-use conditions are challenging, and this new platform clearly addresses them.

Real-Time Capabilities Enable Hardware Consolidation

Kenton Williston: One of the other features that really stands out to me is all of the capabilities around real time. How do you see that feature set being utilized?

Christian Eder: We have tons of applications here when it comes to motion control, robot control; that's always real-time critical. But it's not limited to that use case. You have to capture data when it occurs, and there's no second chance to do it; if you miss the sample, it's lost. That's critical for a lot of medical applications. In the past, there was a lot of dedicated hardware around to provide real-time capabilities. Now it's all built in, and you can do quite a lot.

We can bring multiple operating systems together. We can have the real-time tasks installed on a single core, which takes care of, let's say, the motions of a robot. But a robot nowadays also needs to have eyes, so we have cameras attached to do some AI analytics on the pictures it's capturing. And that can run in parallel on the other cores. Of course, we're still talking about an Atom, so don't expect tremendous frame rates. But there are many smaller applications where this performance level is more than enough.
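
A sketch of that kind of core partitioning from user space on Linux (the core numbers are illustrative, and hard real-time behavior would additionally need a real-time scheduling policy or an RTOS on the isolated core):

    import os

    # Pin this process to core 0, leaving the remaining cores free for
    # the vision/AI workload running alongside it.
    os.sched_setaffinity(0, {0})
    print("running on cores:", os.sched_getaffinity(0))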

Kenton Williston: None of these tasks is necessarily all that heinously difficult, but being able to do them all on one platform is quite advantageous, in terms of having a system with lower cost, smaller size, and lower power, and presumably one that could even be a little bit easier to design. Would you agree with that?

Christian Eder: Yes. That's the whole idea of this hardware consolidation. In the past, it used to be three different boxes wired up with cables, Ethernet switches, and whatnot. Now you can bring all sorts of applications together in one tiny, low-power box. And of course it's a tremendous savings in hardware cost: it's just one system, with much easier maintenance, and everything together on one platform. So I totally agree that this makes sense, and we will see more and more of these applications.

Upgraded GPU Powers AI Algorithms

Kenton Williston: But it seems like a lot of the other capabilities surrounding the CPU are really what make this platform interesting. One that you mentioned is the GPU, whose performance has improved tremendously. I believe you can take advantage of that with the OpenVINO platform. Is that right?

Christian Eder: Absolutely. This really gives a boost, and it allows you to run AI algorithms at reasonably good speed. It always depends on the details and the complexity of the task, but an average task can be performed quite well here.
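
In OpenVINO terms, targeting the integrated GPU instead of the CPU is a one-line change. A sketch, using the same 2021-era Python API as earlier (file names are placeholders):

    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")

    # "GPU" selects the integrated graphics; "MULTI:GPU,CPU" would let
    # the runtime spread inference requests across both devices.
    exec_net = ie.load_network(network=net, device_name="GPU")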

Kenton Williston: So, just to recap what I've heard so far about the Elkhart Lake platform, which is also the new Atom x6000 family, along with some Celeron and other brand names that go with it.

Some of the things that make this pretty interesting, and different from what’s been around before include:

  • You’ve got up to four cores, with a pretty significant improvement in performance from previous generations.
  • You’ve got the GPU, which is quite a dramatic improvement, and useful not only for creating visuals but also doing things like executing AI.
  • You've got improved I/O that's going to be really useful for IoT use cases: PCIe, faster flash, ECC memory.

And all these things are pretty amazing.

But just as important, I think, are the off-system real-time capabilities. I wonder if you could say a little more about what your customers are asking for there, and what you're providing. What do you see as new in terms of having that time-coordinated computing?

Built-in TSN Simplifies Networking

Christian Eder: The big advantage of Time-Sensitive Networking is that you can utilize your existing Ethernet infrastructure, or at least keep the cables you have and upgrade some switches. In a nutshell, you have a standard Ethernet cable, and you reserve part of the bandwidth of that cable for real-time traffic.

We've done a demo at trade shows where we have a traffic generator that can really overload the cable. We reserved about 20% of the bandwidth for real-time control, leaving the rest, say 800 Kb, for normal traffic such as streaming video. For the real-time control to communicate with the other robots, things must happen in real time.

There was a reserved channel bandwidth of about 200 kb. And no matter how much streaming traffic we put on to it, there was no really recognizable jitter or delay. It just went smoothly through it, while the other channel was completely overloaded and the video was no longer running, because the channel was so full.

This means you can use the existing infrastructure and share the existing bandwidth between normal and real-time traffic. And I believe that’s a big advantage over a lot of existing fieldbus standards: the big advantage of TSN is that it builds on the infrastructure you already have.
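
To make the bandwidth reservation concrete: on Linux, a TSN gate schedule of this kind (IEEE 802.1Qbv) can be sketched with the taprio queueing discipline. The interface name, the two-class split, and the cycle timing below are illustrative assumptions, not the configuration from Congatec’s demo:

    # Sketch: split a 1 ms cycle so traffic class 1 (real-time) holds ~20%
    # and traffic class 0 (best effort) holds ~80%. Assumes a TSN-capable
    # NIC exposed as eth0 with at least two hardware queues.
    tc qdisc replace dev eth0 parent root handle 100 taprio \
        num_tc 2 \
        map 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \
        queues 1@0 1@1 \
        base-time 1000000000 \
        sched-entry S 02 200000 \
        sched-entry S 01 800000 \
        clockid CLOCK_TAI

Each sched-entry opens a set of gates for a window given in nanoseconds, so the reserved class gets its slice of every cycle no matter how much best-effort traffic is queued behind it.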

Getting a Quick Start with Single-Board Computers

Kenton Williston: How can developers and engineers get started most quickly and effectively in taking advantage of all these new capabilities? What would you recommend?

Christian Eder: If you want to start working with Elkhart Lake or with the new Atom x6000 series, the easiest way is to do it on a single-board computer. We have implemented this on a quite tiny Pico-ITX board, which is just 72 by 100 millimeters, with two Ethernet ports on it. It’s just plug and play; you can start immediately. Of course, we offer this in other form factors as well, depending on a customer’s experience or design history.

Kenton Williston: What are some of the target applications that you think these solutions would be particularly well suited to?

Christian Eder: There are a lot of use cases in the medical environment, but also some you don’t think about: in the gaming industry, or even in the audio industry. You see these things all over the place when it comes to graphic output, things like digital signage in trains, in airports, whatnot. The graphics capabilities and the high resolution of this Atom are quite helpful there.

Also, the low power consumption is helpful in each and every environment. And the beauty of the computer-on-module approach is that everything compute-oriented is on the module, while the customization for specific applications or business segments happens on the carrier board. That is absolutely flexible.

Taking a Look at Tiger Lake

Kenton Williston: Intel has also announced its latest and greatest in the Core series, the 11th Gen Core, also known as Tiger Lake. Can you tell me what’s new with Tiger Lake?

Christian Eder: Congatec is a very industrial-oriented company, so the biggest thing here is the industrial use case. It’s the first time we have Intel Core processors fully specified for -40 to +85 degrees Celsius. This wide temperature range, extended industrial or whatever you want to call it, comes on top of all the architectural and performance improvements. For us, that is the most important point.

Kenton Williston: I’ve been seeing reviews of the Tiger Lake, 11th Gen Core family, from the consumer side of things, and people have been very impressed with the performance they’re getting out of the graphics engine, citing it as a huge leap over previous integrated graphics solutions. But it feels to me like, in a lot of ways, these two platforms, Elkhart Lake and Tiger Lake, are both culminations that bring together a lot of important technologies that individually may not be that exciting, but together add up to a pretty dramatic set of improvements. Would you agree with that?

Christian Eder: Absolutely. It’s a major step on the low-power side with the new Atom, and at the higher power envelope and performance level with the 11th Gen Core, which is industrial for the first time. And having all of these on modules means you can upgrade and bring them to applications very fast.

And this is the whole idea behind Congatec—to bring this technology very simply and easily to the customers. Because the most important thing nowadays, I believe, is the time to market. So the faster or the better we can support a customer, the faster the application will be in the market and the more successful it will be for the customer and for Congatec.

Kenton Williston: Is there anything you’d like to add?

Christian Eder: I think with these two launches we have a complete refresh of the whole platform. And I’m pretty sure each and every customer will find advantages in stepping up to this new technology and performance level.

Q&A: Secrets of Rugged AI

The challenges in making AI and machine learning work smoothly are formidable enough—now make it all run in environments that include everything from sand and dust to unexpected elevators. We asked Johnny Chen from OnLogic, a leader in high-performance IoT systems, to share his tips for deploying machine vision in rugged environments. He revealed the advantages of customized solutions over fully custom hardware, the specific challenges of operating at the edge, and the ways that taking shortcuts with your system can backfire.

(To listen to the full interview, check out our podcast on rugged AI design. Note that this article refers to OnLogic by its former name, Logic Supply.)

How to Choose the Right Hardware for Machine Vision

Kenton Williston: Can you tell me what your background is, and what your role is at Logic Supply?

Johnny Chen: Partnership and alliances is my current role. Previously, I was a solutions architect here at Logic Supply, working with customers, and designing both hardware and the software stack they would use.

Kenton Williston: What kinds of hardware are you seeing as being the most important in this marketplace?

Johnny Chen: We use everything from CPUs and GPUs to VPUs. It really depends on the application. We’ve used almost every type of accelerator, including FPGAs.

Kenton Williston: It sounds like from your perspective there’s an emerging trend towards the more specialized processors coming out on top. Would you agree with that?

Johnny Chen: Oh, definitely. It used to be that everyone used GPUs, but I think, with the trend of everything getting more specialized—especially at the edge, where the environment is not as friendly, where you really don’t want a fan or you don’t want something high powered—that’s where the advantage of things like a VPU comes in. Something that was designed specifically for vision.

Kenton Williston: For a lot of applications, people will have existing equipment that they want to add vision capabilities to. So, are they retrofitting their existing compute hardware? Or are they ripping out and replacing and putting in something all new?

Johnny Chen: The most common approach I see is adding these capabilities to old machines. These are large machines that won’t be replaced, because of the cost of replacing them. Instead, the compute function is added, along with cameras, to bring in vision capability.

Kenton Williston: If we’re putting a new box on top of an existing system, is this something where you would want to build a highly customized solution?

Johnny Chen: What we do quite a bit of is the customized solution. It’s not a custom box per se; it’s a customized solution. We look at the customer’s environment, what their needs are, what their end goals are. We use quite a bit of off-the-shelf product, plus some customization specifically for them. Cooling is often the part we customize, depending on how much compute the customer needs and what type of accelerators they’re using.

Kenton Williston: Machine vision via edge processing clearly seems to be where everything is heading. Are there instances where things are being done in the data center or the cloud?

Johnny Chen: Well, that’s an interesting question. What we’re seeing is that people are running models at the edge. So at the edge you’re running the vision model and processing the data, but at the same time you’re collecting new data. As you collect those new images, you send them back to the server side, which incorporates the new data into the model and creates new models, and those models get smarter and smarter. As the system gets more data, it pushes the new model back out to the edge to run. It works like a learning loop, right? The longer you have the system, the better the model gets.

The reason for the separation is a couple of things. One, you may have multiple edge devices all collecting data, and you centralize that data to create the new model, then push it back out to all those edge systems. Two, creating new models takes a lot of compute, and that’s not necessarily what you want to put at the edge. At the edge, you just want to run models.
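
In code, the loop Chen describes looks roughly like the sketch below. Every class and function here is a hypothetical stand-in meant to show the shape of the edge/server split, not a real product API:

    # Illustrative sketch of the edge/server learning loop described above.
    # All names are hypothetical stand-ins, not a real product API.
    import random

    class Model:
        def __init__(self, version):
            self.version = version

        def infer(self, frame):
            # Stand-in for running vision inference at the edge
            return {"defect": random.random() > 0.9}

    class Server:
        def __init__(self):
            self.data, self.version = [], 1

        def upload(self, frame, result):
            # Edge devices send newly collected data back to the server
            self.data.append((frame, result))

        def retrain_if_ready(self, batch=5):
            # Heavy retraining compute stays on the server side
            if len(self.data) >= batch:
                self.data.clear()
                self.version += 1
                return Model(self.version)  # improved model to push back out
            return None

    server, model = Server(), Model(1)
    for frame in range(12):                # stand-in for frames from a camera
        result = model.infer(frame)        # run the current model at the edge
        server.upload(frame, result)       # collect new data for retraining
        new_model = server.retrain_if_ready()
        if new_model:
            model = new_model              # edge picks up the improved model
    print("final model version:", model.version)

The longer the loop runs, the more data accumulates on the server and the more retraining cycles the edge model goes through, which is exactly the “it gets better over time” behavior Chen describes.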

Kenton Williston: What kind of tools and frameworks are people using to actually do all of this?

Johnny Chen: There’s a movement toward things like OpenVINO. One of its main advantages is that it allows you to basically write your code once. What that means is, once I write my code, I can assign it to whatever compute is available in that system. It’s very adaptable. That gives you a big advantage, because there’s never going to be one machine that fits all environments. It allows you to deploy your code across many different pieces of hardware.

Kenton Williston: If I put myself in the developer’s shoes, I could have some understandable skepticism about how well this is really going to work in practice. Am I going to get sub-optimal performance, such that I’ll wish I had taken a more hardcore, highly optimized approach?

Johnny Chen: If you have a very clear vision of what the end goal is for that code, you put it through the translator, assign it to the right compute, or the right type of compute, knowing how much real-time performance you need, and it will work optimally for that compute. That’s the whole idea of OpenVINO: the flexibility lets me use different pieces of hardware in combination with one another, working in parallel. That’s the best part. I can pick the right things to run for what I need done.
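
As a minimal sketch of that write-once idea, here is how device assignment looks in the 2021-era OpenVINO Inference Engine Python API. The model file names are placeholders, and running on GPU or a MULTI combination assumes the corresponding hardware and drivers are present:

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_blob = next(iter(net.input_info))
    shape = net.input_info[input_blob].input_data.shape

    # The same network can be compiled for different targets, including a
    # MULTI device that spreads inference requests across accelerators.
    for device in ("CPU", "GPU", "MULTI:GPU,CPU"):
        exec_net = ie.load_network(network=net, device_name=device)
        result = exec_net.infer(inputs={input_blob: np.zeros(shape, np.float32)})
        print(device, "->", {name: out.shape for name, out in result.items()})

Only the device_name string changes between targets, which is what makes the same code portable across the mix of CPUs, GPUs, and VPUs Chen mentions.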

Kenton Williston: One of the most popular ways that people have been approaching the machine vision problem to date has been to use graphic processors, GPUs. I think there’s a lot to say about the performance that they offer, but when I talk to folks in the industry, it seems like it’s not so much the performance that is the challenge as it is power consumption and costs. Would you agree with that?

Johnny Chen: Absolutely. The problem is the power of the GPU: these things draw over 150 watts. Imagine deploying this in real life, in a warehouse where you need 10 or 20 of these systems. Using 150-plus watts doesn’t make sense; the utility cost is going to be extremely high. Plus, you have fans, and that raises reliability questions. A fan can fail, especially in an industrial environment where there’s a lot of sand or dust in the air, especially carbon dust, which is conductive. These are all things that will destroy the system.

Now imagine if you build specialized hardware using a VPU, the CPU, and the built-in GPU, and cut that down to maybe 30 watts. You’re looking at tremendous savings across the board, not just from a power consumption point of view but also in reliability. This is why I think it’s so important that we work with our clients to make sure we understand: What is their end goal? What application are they using it for? Then we can look at it and design for that.

Kenton Williston: One of the things I’m also wondering about is the hacker approach to solving these problems. Maybe there’s some sneaky way to deal with a power problem by just saying, “I won’t cool it sufficiently and I’ll hope it’ll last.” Do you see people taking shortcuts like that, ones you think they should avoid?

Johnny Chen: I’ve seen some interesting shortcuts in my time here. There are people who will say, well, if it’s in a very dusty environment, I’ll just do liquid cooling. Okay, interesting idea. But again, you’re taking a commercial approach into an industrial environment. Yes, liquid cooling does work. Liquid loops work very well, but their cost and maintenance are very high. It works until there’s a leak or something happens, and then they lose a whole day of productivity. And edge systems are typically systems you want to put in place and forget about.

Kenton Williston: Everything we’ve said so far—would you say it really applies across the board for pretty much all industrial use cases for machine vision? Or are there different approaches you need to consider for different specific applications?

Johnny Chen: Well, you definitely want to consider quite a few things. An interesting example is autonomous robots. One of our customers has a great application: cleaning robots. These robots basically roam through a building and clean it around the clock. They’ll even ride the elevator, go back to charge themselves, and avoid people along the way. When the customer started the project, they used a commercial, off-the-shelf system. They ran a GPU; they did everything exactly the way most people thought they should. But they were running into a lot of failures. Even though you would think a commercial system would be perfectly fine because the robot is just roaming the halls, interestingly enough there are quite a few bumps it has to go over, like getting into the elevator.

And these robots are pretty heavy, so the shock and vibration were actually making the systems fail: everything from the fan to the GPU and so forth. So we started working with them, and we moved them from a GPU to Movidius. So far they have had no failures. It’s these little things you don’t think about that wear out a machine over time. Once you move to compute that is fanless and sealed, you don’t have to worry about the environment it’s in, and you don’t have those failures. That was an application where we worked with the customer to pick the right hardware and put together the right custom system to fit the need.

Kenton Williston: I’m wondering where you see things going next.

Johnny Chen: I see things moving faster. I see more specialized silicon, like GPUs or VPUs, working in addition to the host processor. These are things that will work together. I see more integration. Technology really has to become invisible in order to gain larger acceptance in everyday things. It has to integrate into our daily lives without us thinking about it, and that’s already starting to happen. Imagine your machines, your home appliances, your work machines: eventually they will all be smart enough to tell you when they need maintenance and which part needs it, instead of running on a fixed maintenance schedule.

Kenton Williston: Are there ways you see the software stack changing going forward?

Johnny Chen: I think the two biggest things are optimization and efficiency. That’s the key to what I talked about: technology becoming invisible. Hardware has to work together with software to maximize efficiency. That way you keep enough overhead for future development, but you use only what you need today without wasting.

Kenton Williston: What can developers do to stay on top of all these changes? And, perhaps more importantly, what can they do to future-proof their designs?

Johnny Chen: Keep it simple. Start with a clear vision of what the end goal has to be. But at the same time, you should map out the additional features that will be added at a later date, and then architect the hardware to meet those additional features. It’s going to be a balance. This is where hardware and software have to work together, to understand each other’s needs.

A Guaranteed Model for Machine Learning

On the factory floor, wasted resources stack up fast for every real or imagined defect. When a good part is mistakenly labeled flawed, there’s lost time, efficiency, and machine effort. And when a defective part goes unnoticed and becomes the end customer’s problem? The potential consequences are even more severe.

Luckily, humans are expert defect detectors. But manual QA is slow, and automating it has been a challenge. Computer vision (CV) has the potential to match human accuracy, but it’s almost impossible to describe what’s immediately obvious to a person—say, the difference between a black mark and a piece of fuzz—in language a traditional CV system can understand.

Deep learning, where machines learn directly from people through labeled datasets, solves both problems. It raises the accuracy of CV to human standards while increasing efficiency and cutting costs. But to use it, manufacturers and SIs need solution providers who are experts in the technology and in its execution on the shop floor.

Machine Learning Starts with People

Mariner, a provider of technology solutions that leverage IoT, AI, and deep learning, knows that the people who will use and benefit from the solution must be involved from the start. That means a deep respect for the manufacturer’s experience on the provider’s part, and a commitment to collaborating with the people on the shop floor at every stage of deployment.

“First and foremost, you need to work with the customer to be sure you’re solving a real problem. Not just working on an AI science experiment because it’s cool,” says Peter Darragh, Executive Vice President of Product Engineering at Mariner. But the collaboration needs to be ongoing, not intermittent.

For example, Spyglass Visual Inspection (SVI) catches defects faster than expert human inspectors, and with equal or better accuracy, because those experts are the ones labeling the images used to train it. Darragh says it’s as if these inspectors had switched roles from athlete to coach.

“When they provide high-quality, labeled datasets that include all the nuances they see on the line every day, they’re no longer playing the game. They’re teaching the deep learning to play instead,” he says. And when something changes—a new customer has different quality standards, for example—the model can be retrained to adapt.

QA experts teach deep learning to play by providing high-quality, labeled datasets that include all the nuances they see on the production line. @MarinerLLC

Smart Partnership, Smart Factory

But understanding the technology isn’t enough. Providers also need to know how to deploy it in a real factory environment. “Lately we’ve seen huge interest in deep learning and an explosion of case studies,” Darragh explains. “But these projects tend to be done offline in a controlled lab environment.”

That’s why for the SVI solution, Mariner focused on automating a process to train and deliver models to the factory edge from the start. In this way, it can respond gracefully to all the inevitable changes in a typical production environment (Video 1).

Video 1. Mariner automates defect detection by adding deep learning to manufacturers’ existing CV systems. (Source: Mariner)

To guarantee that SVI works well for every end customer (or their money back), Mariner follows a strict implementation process:

  • Actively search for risks and validate that the problem is well suited for deep learning.
  • Train the end customer on how to provide a series of high-quality labeled images.
  • Use AI expertise to train a preliminary model from the initial set of images during a first consultation to confirm future operational success.
  • Collaborate with the customer to mitigate any risks before agreeing on acceptance criteria.
  • Continue to monitor model confidence after deployment, and assess the need for retraining.

A mix of carefully chosen tech elements makes up the solution. It includes a containerized microservice architecture at the edge, so the system keeps working if network connectivity is lost. And Microsoft Azure offers a rich set of reliable cloud services that can be easily scaled up and down.

“In fact, sometimes it’s as simple as moving a slider on the screen,” says Darragh. That allows Mariner to focus on the deep learning and model delivery process without worrying about infrastructure. And with Intel®-based processing, cost per inference at the edge can be significantly lower, delivering faster ROI.

AI Expertise Can Save a Manufacturer Millions

One leading glass manufacturer struggled to automate its QA process with a traditional machine vision system. A human could easily recognize the difference between a drop of water and an edge chip, even from an image. But it wasn’t possible to write a specification that its CV system could understand.

In the end, the best it could do was make the system overly sensitive, resulting in an unsatisfactory rate of false positives. So Mariner showed the QA experts how to train a deep-learning model with high-quality labeled datasets, and those false positives were eliminated.

After verifying the model’s accuracy, the manufacturer started running SVI on many lines—processing tens of thousands of parts a day. Now the solution automatically sends signals to the PLC to control the downstream process, and the product is accepted or discarded based solely on its determination.

As a result, the customer has reduced quarterly operating expenses by more than $1M, and plans to scale Spyglass Visual Inspection into other divisions serving four different markets. It’s more proof that deep learning in the right hands, used to solve the right problems, is the key to reaping extensive rewards from machine vision applications.

Systems Integrators Profit From IIoT Tech

The latest smart factory technologies, such as artificial intelligence and machine vision, are somewhat new to manufacturers in Indonesia. But when the pandemic made remote operations and automatic monitoring more important, requests for IIoT capabilities started pouring in.

Implementation is always a challenge, though. This isn’t plug-and-play technology, and industrial operations engineers often lack the skills to deploy it successfully themselves. Instead, they rely on systems integrators (SIs) who know how to install the hardware, software, and networks required to maintain factory operations.

Significant new revenue opportunities await SIs who can provide industrial organizations with the cost savings and end-to-end smart factory solutions they need. But to do that, SIs need specialized skills themselves. That’s where solution aggregators come in.

A Crash Course in IIoT and AI Technology

Synnex Metrodata Indonesia (SMI), an IoT solutions aggregator, offers end-to-end solutions, plus training programs for deploying them, that help SIs get up to speed fast. Herianto, Director of IoT and Cloud Business Development at SMI, says there are two types of SIs in Indonesia who need the company’s experience and expertise: operational technology (OT) experts and IT service providers.

But those who deploy OT systems aren’t familiar with the IT integration required for the digital transformation projects their customers need. And SIs specializing in IT may lack a detailed understanding of the manufacturing side of business operations.

“To deliver true end-to-end solutions, the OT and IT skillsets need to be combined,” explains Herianto. So SMI helps upgrade OT SIs’ IT skills, enabling them to implement sophisticated smart factory solutions. And the company teaches IT-focused SIs about OT.

SMI’s trainings are delivered via targeted workshops that explain how to use technologies such as the Intel® OpenVINO Toolkit for AI and machine vision applications. In these courses, SIs gain the skills to develop and customize solutions for each customer—or even build products themselves.

Workshops help SIs learn how to use AI and MV technologies and gain the skills to develop and customize solutions for each customer—or even build products themselves.

Local Support for IoT Projects

In addition to up-leveling their skills, SIs who work with aggregators benefit from access to logistics, services, and support. “Even for pre-built, application-ready solutions, you need an engineer to prepare and tailor a proof-of-concept deployment,” notes Herianto. “Manufacturers don’t trust what they see in a video. They want to see the product operating in their own environment.”

That’s why SMI makes sure to have the right partners on the ground for every POC and deployment. If the SI doesn’t have its own engineers, SMI will send its own.

Herianto also stresses the importance of local support. “Without in-country logistics and personnel, the customer could wait two or three days for a response,” he says. In the digital age, manufacturers can’t afford to wait that long.

The Best IoT Tools

Another key to IIoT success is streamlined implementation. SMI offers edge-to-cloud solutions like ADLINK Vizi-AI—an industrial machine vision starter development kit. The solution has an intuitive user interface and comes with a range of pre-built common OpenVINO AI models, so SIs don’t have to start from zero when they want to deploy and improve computer vision applications. “Adoption complexity is reduced with this solution,” says Herianto.

Vizi-AI is a scalable starting point for edge AI industrial applications, combining all the hardware and software necessary for SIs to get started fast. It allows data to flow freely and securely and can be quickly connected to different image capture devices.

And rather than making SIs source and acquire a bunch of separate hardware components, Vizi-AI includes everything they need in the box. So the only thing left to do is develop and customize the software, and manufacturers can start collecting training data and build a scalable AI model right away.

ADLINK Edge software also enables remote management, so SMI can connect manufacturers to various cloud services with a dedicated support team.

Industrial IoT in Action

In one example, SMI worked with an SI partner to develop machine vision and AI-based automated quality control for a customer in the agriculture industry. Instead of waiting until the end of production to do manual inspections, the customer was able to remove poor-quality products before they entered the production line, cutting operation costs and improving efficiency.

With solutions aggregators like SMI, SIs can bring manufacturers cutting-edge technologies and the skills to deploy smart factory solutions. In the process, they transform their own businesses as much as their customers’.

Build ML Models with a No-Code Platform

When we think of the Internet of Things (IoT), livestock don’t usually come to mind. Nor do strawberries, fish, medical patients, or volatile gas leaks. But advances in AI-based computer vision and edge computing have made solutions like the “internet of cattle” real, and they’re rounding up true business benefits. Automated management lets protein producers monitor livestock health and location in real time—preventing the spread of disease, mitigating loss, and optimizing breeding and birthing practices.

But integrating the AIoT, computer vision (CV), and machine learning (ML) tools behind the “internet of cattle,” or any worthwhile ML project, can be challenging.

Typically, a video management system connects the cameras and collects data, which ML engineers and data scientists use to create the models. And a whole other set of tools cuts the video frames, labels and trains the datasets, deploys the models, and monitors the models’ accuracy. Then there’s the difficulty of creating a high-quality dataset to begin with.

Sixgill, LLC, an ML lifecycle management provider for AIoT platforms, is making it easier. “What IoT developers really need are tools that are easy to deploy, manage, and adjust—and that don’t require months of heavy development investment upfront,” says Elizabeth Spears, Chief Product Officer at Sixgill. And until recently, those tools haven’t existed.

Machine Learning Doesn’t Need to Be So Hard

When Sixgill saw that the available ML tools were hard to use and implement consistently across teams and enterprises, they knew getting good data was a big part of the problem. Consider a cattle-counting use case.

The model might perform well as long as the cattle look and behave as they did in the images used to train it. But if the images were captured in the summer, what happens when it snows? Or when the cattle are backing up instead of moving forward? The model will start to fail unless it can be quickly retrained to recognize cattle in all their various states.

Building a high-quality dataset that accounts for these kinds of exceptions can be tedious. Data scientists or ML engineers are often tasked with labeling images, or the work is contracted out. But both routes are inefficient and costly, and that expertise is more valuably spent elsewhere, for example on building real-time video streaming and automating it with a data labeler.

So why not put the labeling tool directly in the hands of subject matter experts (SMEs)? “By making data preparation easy and accessible for nontechnical SMEs, they can give crystal-clear examples and bring the accuracy level of the whole project way up,” says Spears. And organizing the data becomes much easier with features like anomaly detection, where new incoming data automatically triggers a prompt: “Do you want to label this?”

Sixgill knew it was possible to develop a streamlined tool that would take any user—engineer, data scientist, or IoT developer—from zero to a fully functional ML model fast. So the company built Sixgill Sense, a platform that integrates every step of the image-based ML lifecycle.

Building a no-code, end-to-end ML model with video is simple enough for the business user and powerful enough for the ML engineer. @SixgillTech

The Power of Automatic Object Recognition

The Sixgill livestock management customer initially explored AI for more-accurate livestock counting. The manual process was costing it nearly $90M per year in revenue leakage, but the ML solution a major cloud provider built for it didn’t perform any better—even after one year.

But when Sixgill took over the project, it trained a model to 99.7% accuracy in three weeks—saving the customer an estimated $52M per year. The platform made it possible with:

  • Monitoring: Collecting and normalizing data from video cameras and other sensor devices with imagery labeling for high-quality training datasets.
  • Counting: Deploying ML models trained for environments and situations to automatically detect, track, and count livestock.
  • Benchmarking: Automating ML model performance monitoring via metrics benchmarking for online learning.
  • Analysis: Sending counts and predictions to the cloud for further analysis and display via a centralized dashboard.

No-Code Platform

Sense effectively replaces several inefficient processes. For example, rather than moving data from your IoT devices into a separate labeling tool, with Sense it’s already where it needs to be—a big timesaver when maintaining the model’s accuracy requires continuous experimentation. “With all the data and models in one place, you can iterate on them really quickly,” says Spears.

Sense takes advantage of the power of edge devices and Intel® accelerated ML capabilities. And it makes collaboration easy with a visual UX that caters to data scientists, SMEs, and business users. So tasks that used to be overly complex and time-consuming are reduced to a few clicks, and models can be trained and tested quickly.

“Anyone can build a no-code, end-to-end ML model with video through this platform,” adds Spears. “It’s simple enough for the business user and powerful enough for the ML engineer.” And to give IoT developers everything they need to be successful, Sixgill regularly holds events, offers tutorials, and can provide customized training programs on computer vision and labeling.

The “internet of cattle” is a clear example of the value of tools designed with the end-user in mind. But it’s certainly not the only one. Companies in manufacturing, retail, life sciences, and other industries stand to increase revenue and reduce expenses when they leverage AI, ML, and powerful edge compute.

Health and Safety: Priority #1 in the Smart Factory

No matter the industry, the most important objective for any business is to keep its employees and customers safe. But COVID-19 has added a degree of difficulty to this mission. To prevent the spread of this ongoing virus, organizations need to review and update their standard business procedures—offering transparency into how they’re achieving their goals.

Early on, many companies turned to quick fixes such as temperature guns for frontline detection. But such labor-intensive methods have proven to be invasive, difficult and expensive to scale, and put workers at risk by requiring them to be in close contact with others.

“Business leaders knew they needed to find a better system,” says Justin Bean, global director of marketing for Hitachi Vantara’s Smart Spaces and Lumada Video Insights. “They were saying, ‘How can we do our part, comply with the rules and regulations, and keep people safe?’ Not only is it the right thing to do; it also makes good business sense.”

To help its customers do just that, Hitachi Vantara, a leader in IoT and digital innovation, combines sensors with real-time video analytics and cloud-based data management. The company’s Digital Health and Safety Solution, an intuitive, automated set of technologies, helps companies mitigate the risks of COVID-19. Combined with Hitachi Vantara’s broader digital technology solutions and strategy consulting, it helps customers achieve larger operational goals as well.

AI and Computer Vision Make the Smart Factory Safer

The first layer of defense is to flag people with elevated body temperature. Smart cameras equipped with thermal sensors put intelligence to work at the edge, monitoring the infrared spectrum of a person’s face as they walk into a facility. Scanning multiple facial points, AI can get a more accurate measurement of body temperature than simple infrared measurements. If it detects someone with an elevated reading, an alert is sent, allowing organizations to take appropriate action, such as administering a secondary test, or asking the person to quarantine.

In addition, the detection solution runs analytics to identify which shifts may be impacted so organizations can initiate action to prevent further spread. The data around the incidents, which includes video clips, images, PDFs, audio files or interviews, and test results, is stored in a digital archive where it can be organized by case and pulled up and shared with the appropriate people. And the same technology can be used to sense other criteria, such as if employees are wearing the right PPE or masks during COVID-19.

3D-Lidar generates an accurate 3D point cloud that creates a depiction of what’s happening in real time—without collecting personal information.

Preserve Procedures While Protecting Privacy

3D-Lidar is also a critical layer of protection. “3D-Lidar is like sonar, but it uses lasers,” explains Bean. “It measures the time of flight of those lasers and how long it takes them to bounce back. It also generates a very accurate 3D point cloud, creating a depiction of what’s happening in real time as people and objects move, without collecting personal information.”

For example, Lidar technology can be used to verify proper handwashing procedures for healthcare or food service workers. Since no personally identifiable information is collected, privacy is maintained and this opens up new use cases that would otherwise be too sensitive. It can also deliver information about proper social distancing by mapping a person’s journey throughout a facility. Lidar can even provide alerts for areas where social distancing is an issue. This enables organizations to target educational campaigns, and redesign or restrict capacity in that space to allow more distance.

While these tools are being used to address the risks of COVID-19, they also serve plenty of uses for general health, safety and environment (HS&E) monitoring. For example, the solution can detect whether employees are wearing helmets or gloves on the factory floor and monitor whether they’re following correct protocols and procedures.

Beyond the Technology

Sensors and edge gateways collect and process the data, but the greatest potential is in how this information can be used. “Data is displayed geospatially and graphically to ‘slice and dice it’ in whatever ways that customers need to enable smarter decisions and processes,” says Bean. For example, analytics can help managers understand where compliance is not being met and dig deeper into operational procedures to identify and correct areas of risk.

To make the most of the technology, companies need the right strategy to help them move forward. Hitachi Vantara provides a combination of strategic consulting services and proven solutions to help companies improve processes, culture, and financial outcomes.

“A big part of this is setting the right strategy so that you can improve processes and systems,” says Bean. “This can help organizations improve change management to drive successful and sustainable results.”

The company’s partnership with Intel® helps maximize the solution value.

“Intel is at the core of how we’re able to perform the intelligence,” says Bean. “The raw data itself is not that useful. With tools like AI that analyze the video and 3D-Lidar data, we’re able to run more sophisticated analytics that give us new types of insights. Intel provides the platform to do that type of processing and to do it at the edge—improving speed, reducing costs, and further protecting privacy. Not only is it important to protect human health and safety and the livelihoods of our people; it also makes business sense.”

Smart Substations Transform the Grid

Utilities face multiple challenges as uncertainties linger, from global energy prices to the changing competitive landscape. And while the rapid growth of renewable energy adds more opportunities, it also adds more complexity. These dynamics require a balancing act, driving the need for new technologies and business models for operators.

For example, today’s grid has been built around large-scale energy sources, designed for one-direction production and distribution. But the future looks much different, with a more diverse landscape that utilities must consider. Increased use of distributed energy sources (DERs), electric vehicles, and renewables requires significant changes in how the power grid is designed, secured, and managed.

As customers become energy generators, the demand curve is changing, causing the one-direction model to no longer be sustainable. A sustainable energy ecosystem requires moving to a two-way exchange of power.

Working hand in hand with Intel®, Capgemini, a global leader in digital transformation, technology, and engineering services, is delivering on these needs with its Substation and Edge-of-the-Grid Automation solution for energy management—from consulting, to implementation, to lifecycle management.

The solution enables utilities to flatten the grid with fully virtualized, multidirectional operations. This provides the ability to monitor and manage load and flow across all assets, simplify the energy ecosystem, prioritize the production and consumption of clean energy sources, and flatten the rate structure.

Start with Smart Substations

“The change begins at the substation, where utilities must manage energy more efficiently,” says Philippe Ravix, Capgemini’s XIoT Global Solution Leader. “Nothing is possible without a smart substation.”

But there are limitations with today’s substations. Each function—such as anomaly detection, voltage regulation, and load balancing—uses an independent control system. Operators struggle to manage proprietary solutions from multiple suppliers, each with separate hardware to deploy and distinct interfaces, all taking up valuable real estate.

By virtualizing these functions, the Smart Substation integrates multiple systems into a single, agnostic platform (Figure 1).

Multiple hardware systems are integrated into one system and managed from a single platform.
Figure 1. Smart Substation virtualizes multiple functions into a single, agnostic platform. (Source: Capgemini)

The solution virtualizes and administers functions that can be delivered by any supplier. This helps a utility easily deploy and operate new features—from the edge down to the secondary substation.

“It’s about the convergence of IT and operational technology (OT), but you need to keep the OT requirements, which call for a secure, deterministic, and robust edge platform,” explains Ravix. “We are providing this on an open platform with a reference architecture.”

This interoperability out to the edge, along with central management, is a key cost-efficiency lever for both CAPEX and OPEX.

Integrators can work from a single template and more easily add new features—giving utilities the flexibility to choose any supplier according to both technical and business needs.

This @Capgemini solution virtualizes functions that can be delivered by any supplier—so utilities easily deploy and operate new features.

IoT Devices Enhance Automation

Long accustomed to maintaining a few large assets, utilities and service providers face deploying new types of resources, such as sensors and cameras that monitor equipment. And all these edge devices, from the smallest to the largest, must work together and connect to the network—even if they are not designed to do so.

“Now with the need for more safety and security, for example, you’ll have cameras in the transformer area, or additional sensors to protect personnel from risk,” explains Thierry Batut, Capgemini’s Director of Smart Energy Services. “You can deploy a huge number of small devices to enhance automation and provide new features.”

Capgemini makes integration between these different data sources easier by working with a mix of suppliers, to deliver new digital functionality that enables a smarter grid.

AI Technology for Actionable Intelligence

Multiple Intel technologies bring the hardware and software platforms required to provide smart substations with local intelligence. For example, Intel AI and machine learning capabilities enable real-time decision-making for predictive maintenance, which can prevent downtime and lower operating costs. “You have to enable and leverage edge artificial intelligence to bring more value for the grid operator and more autonomy to the system, but also for the clients and stakeholders on the network,” says Ravix.

“Intel has invented substation solutions to bring platforms with local intelligence and the ability to better monitor the energy market,” Ravix continues. “Our partnership allows Capgemini to deliver a complete platform that reliably delivers all the computing needs for automation at the edge in a way that’s scalable. And it’s been tested by leading utilities, which are seeing the opportunities it brings.”

Growing Smart Grids at Scale

In adding hundreds or thousands of substations at a time, utilities are working together to meet the needs of the rapidly changing energy environment. “Big distribution companies combine their experience, their intelligence, and the progress they are making through experimentation to mature this technology and leverage it at scale,” Ravix says.

Modernizing the grid via smart substations offers utilities several positive business outcomes, including investment planning, asset lifecycle improvement, cost savings, and the possibility of additional revenue streams. Deploying the latest technologies such as AI, computer vision, and machine learning creates new efficiencies and a more resilient network. With its Intel partnership, Capgemini is making this a reality.

AI and CV Get Business Back to Work

Across most industries, the path to digital transformation has been straight and steady. But for many organizations, pandemic response and recovery has put the transition into overdrive. The good news? Technologies already deployed for business innovation can also be used to keep people safe, and move daily operations forward—in the office, in the warehouse, and on the factory floor.

Take a manufacturing application, for example. Production operations using computer vision and AI for predictive maintenance can be expanded for additional use cases. This means companies can turn on a dime to automate health and safety protocols at scale with the infrastructure already in place.

Insight Enterprises, a global technology solutions provider, has done just that. The company uses its own Internet of Things Connected Platform for Detection and Prevention to keep many of its offices and warehouses open—minimizing business disruptions practically since day one of the pandemic.

The solution’s foundational technology—already deployed across many smart public-space use cases—is the reason Insight could build a health and safety application so fast.

“We already had the infrastructure in place for IoT device health, remote reset, calibration, management, and more,” explains Jeff Dodge, Director of Insight Enterprises Digital Innovation Solutions Division. “So it just became a question of how to extract additional insights from temperature-sensing devices, and then use that operational intelligence as needed. And the Connected Platform was literally already built for this.”

“We already had the infrastructure in place for IoT device health, remote reset, calibration, management so it just became a question of how to extract additional insights as needed.” —Jeff Dodge, Insight Enterprises

New Safety Protocols Depend on AI Tech

Insight’s edge-to-cloud strategy is an important element of the solution, especially for its customers in manufacturing, food processing, or pharmaceuticals, where employees need to be able to collaborate in person and feel safe doing it. “It’s not practical or reliable to have every device individually communicating to the cloud,” explains Dodge. “If the internet goes down, there are bandwidth and latency constraints and storage costs.”

It’s no surprise that Insight uses the solution itself. “Just like for our customers, these tools are necessary not only to ensure the security and physical well-being of our staff and loved ones, but also their mental well-being,” adds Dodge.

The key is the company’s comprehensive approach, which starts with a brief health assessment that employees take even before leaving home. When they arrive at work, thermal imaging cameras at entry points quickly and subtly check temperatures. And sensor technology helps enforce proper social distancing, mask wearing, and handwashing protocols. When there’s cause for concern, assigned parties receive automatic alerts so they can act quickly (Video 1).

Video 1. AI and CV technologies are key to safeguarding employee health. (Source: Insight Enterprises)

Collaborative Approach to Health Tech

To deliver a complete end-to-end system, Insight has strong partnerships with companies like Dell, HP, Intel®, Microsoft, and Bosch. “We’re able to deliver a higher-velocity solution by leveraging technology through our partners, as opposed to going to each new customer with a blank slate and asking what they wanted to build,” says Dodge.

The platform has a robust cloud infrastructure in Microsoft Azure, and a standard suite of Intel processor-based edge computing devices. These include point tools like thermal cameras, motion and people-counting cameras, Bluetooth-based wearables for contact tracing, and smart hand soap dispensers—all coordinated by edge gateways. For more advanced computer vision use cases, the system incorporates the Intel® OpenVINO Toolkit.

The solution is highly customizable, as well. “The Connected Platform wasn’t designed to be a ‘take it or leave it’ point solution,” says Dodge. If a customer wants thermal tracking or thermal sensing, for example, Insight has two options across a few different providers. One uses an access control panel in a kiosk, where a user walks up to it and follows screen prompts. Another uses an advanced camera that scans up to 40 people when they walk by (Figure 1). It can be positioned in an entryway or in a large room to scan groups passing through.

A thermal imaging camera can check individuals for fever even in a group of people while protecting personal privacy.
Figure 1. Thermal imaging cameras can screen one person or many for fever—without disrupting their activities. (Source: Insight Enterprises)

Delivering AI Tech at Scale

Customers also benefit from Insight’s role as a Super Solution Integrator (SSI). From strategy to deployment and follow-on services, Insight can procure all the hardware, cloud, and software components, as well as help design, modify, and scale them. What’s more, it has technicians around the globe to receive and install customized systems.

“Everything the customer needs comes pre-configured,” notes Dodge. “The operating system is already imaged on the device, and there are runbooks, manuals, and a mobile app with prerecorded troubleshooting and training videos. There’s also 24/7 support and a product team on call for escalations when the call center can’t help.”

Businesses that need their employees to collaborate at work can trust that they’re doing it safely with a flexible, end-to-end solution and round-the-clock support. And then they can start focusing on new opportunities: continuous business transformation, new revenue streams, and optimizing operations—today and into the future.