IoT Predictions: What to Expect in 2022 and Beyond

Martin Garner


What can we expect from the IoT world in 2022? If the past two years taught us anything, it is that we cannot prepare for everything. But some trends and technologies can help guide our way. Consider how the rise of AI has pointed to more intelligent IoT solutions—making the tools easier for everyone to use. This, in turn, could prompt stronger regulations and efforts to make AI trustworthy.

Or think about how the move to a remote workforce and the rise of virtual care services point to a broader use of 5G to support home broadband and ensure connectivity going forward.

And there’s still so much more to look forward to. In this podcast, we talk about lessons learned in 2021, IoT technology trends to pay attention to in 2022, and how the IoT landscape will continue to evolve beyond next year.

Our Guest: CCS Insight

Our guest this episode is Martin Garner, COO and Head of IoT Research at CCS Insight, where he focuses on the commercial and industrial side of IoT. Martin joined CCS Insight in 2009, drawn to a smaller, independent firm focused on both quality and clients. Every year CCS Insight publishes predictions on network technology, telecoms, and the enterprise; this is the 15th year the firm has published them.

Martin answers our questions about:

  • (3:01) CCS Insight predictions in 2021: What went wrong and what went right
  • (8:06) Technology trends and predictions for 2022
  • (14:57) How the role of cloud players will evolve moving forward
  • (17:16) Where cloud-like experiences in on-premises infrastructure will fit into the landscape
  • (21:08) Where AI, machine learning, and computer vision are going in the future
  • (26:16) Efforts and impacts of democratizing AI
  • (28:01) How to address AI concerns
  • (30:32) Ongoing transformation of the healthcare industry
  • (34:36) The future of IoT and the intelligence of things

Related Content

To learn more about the future of IoT, read CCS Insight’s IoT predictions for 2022. For the latest innovations from CCS Insight, follow them on Twitter at @ccsinsight and on LinkedIn at CCS-Insight.

 

This podcast was edited by Christina Cardoza, Senior Editor for insight.tech.


Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode, we talk to a leading expert about the latest developments in the Internet of Things. Today, our guest is Martin Garner, the COO and Head of IoT research at the analyst firm CCS Insight.

They’ve just put out their predictions for 2022, and it is a fantastic read. You can go check it out for yourself on insight.tech right now. I am really looking forward to getting into the details of these predictions. So, Martin, I would like to welcome you to the podcast.

Martin Garner: Thank you very much.

Kenton Williston: Tell me about your role at CCS Insight and what brought you to the firm?

Martin Garner: Sure, well I have two roles at CCS Insight. One is that I’m Head of IoT research, where I focus mostly on the commercial and industrial side of IoT. I’m also COO here. I joined CCS Insight in 2009 after Ovum was sold to Informa Group and later became Omdia. I was chief of research there, and the attraction of coming to CCS Insight was that it’s a smaller firm, but very quality- and client-focused and independent, and obviously, being smaller, it had very good growth opportunities. And I’m happy to say those are all still true 12 years later.

Kenton Williston: Excellent. So on that note, I’d like to know a little bit more about CCS Insight itself and its annual predictions. So what is this beast?

Martin Garner: So, CCS Insight is a medium-sized analyst firm covering quite a lot on the consumer side, very strong on mobile technologies and devices, quite a lot on the telecoms side itself, the networks and the network technologies, and also strong on the enterprise side, how they use a lot of the technologies, ranging from what happens in the workplace through to digital transformation of operations in the industrial world. And the predictions are something that we do each year. Last year, in 2021, that was our 14th run of predictions. Now, several analyst firms do these. What makes ours a little bit different is that we deliberately do it as a complete cross-company thing across all topic areas. And also, all staff contribute to the predictions. Some of our best ones historically have come from people who aren’t analysts at all. The other thing is that we carefully track what we get right and what we get wrong, and we publish some of that each year. And the aim is to be quite transparent about that and to improve what we’re doing.

Kenton Williston: One of the things I really like about what you just said is going back and revisiting your prior years’ predictions to see how things played out. That’s really great. The times we’re in have been very difficult to predict. I don’t think there’s any doubt about that. So I’m very curious how the predictions for 2021 played out: what went right, and what went wrong?

Martin Garner: Yeah, you’re right. That was a particularly interesting year because it was the first year we were in pandemic conditions. Lots to think about, lots to speculate about. We got a few that we were quite pleased we got right. One was that COVID would accelerate adoption of robots, automation, and IoT across sectors. Now it didn’t initially look like that. There was a pause in investment, but it did then accelerate as people realized they needed this stuff to keep their operations going. Another one was that 2021 would be the year of vertical clouds. And we have since then seen big launches from all of the major players here and that plays into what we’re doing in IoT. And another one was that security and privacy in AI and machine learning would become much stronger areas of concern. I think it’s now widely understood that machine learning is quite a big attack surface and it could be really hard to detect a hack, at least initially.

Now we did get a few wrong that year as well. So we did predict that somebody would buy Nokia and no one did. We also predicted that the regulation of the big tech players would slow down and countries would take more time. Actually in China it’s grown much stronger much more quickly. And that’s being echoed to some extent, both in the US and in Europe. So actually that’s moving faster than we expected. And then there’s a few that we’re waiting on, which were longer-term predictions. So for example, a big cloud player will offer the full range of mobile network solutions by 2025. Now we have seen some big moves in 5G from AWS, from Microsoft, and from Google, but nothing yet on quite that scale. Another one was that tiny AI would move up to 20% of all AI workloads. Now this is mostly an IoT thing where small edge devices really need small AI. There is a lot going on, especially in IoT and the role is growing, but we’re not at that level yet.

Kenton Williston: So one thing you mentioned there I’d love to get a little clarification on is what do you mean by vertical clouds?

Martin Garner: Sure. Many cloud services have been offered as purely horizontal infrastructure, like data storage, which everybody has a need for. But actually each sector stores different types of data, with different labels, different metadata, even different language, and sectors measure things in different ways, right down to things like the impact of carbon footprints within the sector and so on. What a number of the offerings from the cloud players are now doing is packaging those up in a way that’s suitable for manufacturing or automotive or retail or healthcare, those kinds of things, and deliberately casting them in the right language, the right constructs, the right metadata, and so on, so that they can be adopted more easily within specific verticals. It’s one thing to launch those services; it’s something else to get them all adopted across those sectors. That’s just a long road to getting a big share of that going around the world, and we’re in that stage now.

Kenton Williston: Fascinating. And the funny thing is, I think there’s been a lot of really interesting activity on both sides: on the cloud side, everything you’re describing about very industry-specific use cases, and also a tremendous amount of activity at the edge over the last year, which I think will be pretty important going forward. As we’re recording this, for example, just yesterday Intel® announced its latest Core processors, which offer a tremendous upgrade in performance for the edge, considerable advances in power efficiency, and significant additions in AI and graphics capabilities. And you mentioned that some of the things we’re waiting on, so to speak, are around AI at the edge, and there’s just so much of that happening. So I think it’s a really exciting transitional time right now.

And this is probably a good opportunity for me to mention, since I said something about Intel and its fabulous 12th Gen Core processors, that the insight.tech program and this podcast itself are Intel productions. So full disclosure there. But that leads me to looking forward into this next year, with all this tremendous change happening in the technology space. What is on your mind for 2022?

Martin Garner: I think overall we have 99 predictions for 2022 and beyond. And we obviously can’t go through all of those here. What we did for this podcast is we did a cut of those that are relevant in some way for the IoT community, and we’ve packaged that up in a report which is available as a download from insight.tech. And I’ll just highlight a few that caught my attention, if that’s okay. So there were a few around the follow on from COVID, and a couple were that by 2025 there’ll be somewhat less use of office space in the developed world. We reckon it’ll be down about 25% by then. Also as a sort of balancing factor, there’ll be much more use of 5G as an additional home broadband for home working. We think maybe 10% of households will have that. I think we’ve all had the experience where you’re trying to do a Zoom call or a Teams call or a podcast and your broadband goes off, and it’s really, really frustrating.

So more backup there. We also saw, coming out of last year, much higher attention on sustainability, and we really think that clean cloud is going to be something of a battlefield this year, particularly in cloud services. We also think that IoT can really benefit from using sustainability in its marketing. IoT is great news for sustainability, generally speaking, and mostly we’re not making enough use of that. We also think sustainability will be built into the specifications for 6G, when we get there. And then there’s quite a lot around IoT itself: a much greater focus on software and machine learning, a shift toward higher intelligence of things, and much greater linkage between the smart grid and wide area networking. We actually expect to see a pan-utility by 2025, where one company is both an energy provider and a network provider, because those two networks are becoming remarkably similar.

And then there’s also the arrival of antitrust cases in IoT, as a lot of IoT suppliers really like to lock down their maintenance contracts, and that’s attracting antitrust attention. We think suppliers will need to move to an as-a-service–type business model in order to avoid it. And then, as you mentioned, there’s lots and lots on edge computing and mobility. We think the two are going to cause quite a big change in terms of which suppliers do what across enterprise, telecoms, computing, and internet services. We expect to see all the boundaries changing over the next few years, new players taking different roles, and so on. So we think there’s a lot of change, a lot to look forward to, and of course some threats in there for traditional suppliers, but a super interesting few years.

Kenton Williston: Yeah, for sure. There’s a lot to chew on here, but some things stand out to me. I think you’re right about sustainability being a really big deal going forward, and I totally agree that we’ll see it everywhere. For example, on a recent evening stroll down a street here in Oakland, where I live, I noticed the streetlights brightening as I walked past them. Even these simple things can make a huge difference in energy consumption, and of course there are much more sophisticated use cases beyond that.

Martin Garner: What we find is that with IoT, you’re often monitoring things that have never really been monitored before, like streetlights. And so the savings you can make by doing more intelligent things with them are just enormous.

Kenton Williston: Yeah, absolutely. One of the things that stands out to me is this idea of linking the smart grid with networking. We actually did a podcast recently with ABB talking about this very idea. We need so many intelligent endpoints in the 5G network, and presumably going forward in 6G networks, to support all of these small cells and private networks.

And it’s really similar for the smart grid where you need to push intelligence out to the edge to achieve sustainability and resilience. And of course, both applications need a combination of power and communication. So why not put the two together?

Martin Garner: I think that’s right. It’s the decentralization which is the big commonality, plus the kind of cloud architecture that they’re building in. So in the energy grid, you’ve got now lots and lots of smaller energy generators through solar and wind farms and so on at the edge, and they’re pushing energy into what used to be a very centralized system. And it’s an exact parallel with IoT. We’re generating so much data at the edge thanks to IoT, and we’re pushing that into the network, where we used to depend mostly on things like YouTube being streamed from the middle outward. And so it’s a big shift in both cases, and they’re very similar architecturally and topologically and we expect much more convergence across those two.

Kenton Williston: And I think that speaks very much to the point you made about big changes happening now in who does what. Again, thinking about some of the recent conversations we’ve had in our podcast series, we had a conversation with Cisco, which I believe we’ll publish after this one, where they talked about their efforts with national rail transport in the UK, and how the complexity of what needs to be done, and the speed at which things need to be delivered, have led them to work very closely with companies that in the very recent past they would’ve considered their competition.

Martin Garner: Right, and we also think that as we get a cloud architecture in a 5G network, then where is the boundary between the cloud where the data lives, and the cloud where you’re now generating the data which is part of the 5G network? I think it’s going to become a really fuzzy boundary, and that creates opportunities for specialist players who might only do edge cloud things and feed that into a telecoms network, or the other way around. We just think the whole who does what, and where are the boundaries, is going to become a much more sophisticated picture than we’ve had before.

Kenton Williston: Yes, for sure, and that leads me to a question I’d like to dig into a little more deeply, about the role of the existing cloud players. We’ve got industry leaders like Amazon and Google and Microsoft, and they have undoubtedly benefited greatly from all the activity of the last couple of years. I’d love to know a little more about how you see their role evolving as we move forward.

Martin Garner: It’s a great question. We’ve already talked a little bit about the verticals, and one vertical where they’re all pushing very hard is telecoms networks. We’ve mentioned already that they’re doing more in the 5G world, especially as 5G moves from its current consumer phase into an industrial phase. But I think one example illustrates it very nicely: if you are, say, a global automotive manufacturer and you want a 5G private network in all of your manufacturing sites across the globe, who is best placed to provide that? I don’t think it’s the local telco, because they’re not global enough. So it’s more likely to be your big cloud provider, and we think the cloud providers are going to become a really key distribution channel for some of the telecom products, even if they don’t offer them on their own behalf. And I think this is a good example of where the domains between what the cloud providers do and what the telecom guys do are going to blur quite a lot over the coming years.

Kenton Williston: Yeah, that’s all very interesting, and I think your point about 5G is very well said. We just talked recently to your colleague Richard about a CCS Insight prediction in the 5G space, and I think the evolution of that space is going to be incredibly important, both for the role of the cloud provider and for this whole new concept of a private cellular network that has come along with 5G. Much in that same vein, as we touched on in that conversation, I’d love to hear your perspective on how companies like HPE and Dell are starting to offer cloud-like experiences in on-prem infrastructure, and where that will fit into the landscape going forward.

Martin Garner: Yeah, absolutely. The cloud guys really have had a good run at this as far as we can tell, and we’re not expecting that to change much, but we do expect a bit of a shift. Now, I know some people think the market swings, as a matter of fashion, between what’s centralized and what’s decentralized; what’s cloud, what’s on-prem. What we’re now seeing is that Dell, HPE, and other computing providers are offering cloud-like experiences and, this is really important, an as-a-service business model for on-premises computing, so you don’t have to take on big capital costs in order to get started with quite a major computing program. You can do it all on OpEx. All of that reinforces the trend. We’re also seeing the big cloud providers offering local cloud containers on on-premises devices, AWS IoT Greengrass, Azure Stack, and so on, and they’re offering as-a-service hardware.

So that whole area is being fueled, and our expectation is that on-premises will, if anything, make a bit of a comeback, and that will tend to slow the growth of public cloud, but definitely not stop it. And that’s a trend that’s not going away. Now, we also think that IoT is a really big part of this, because of the strength of edge computing, the fact that we’re generating such a lot of data in industrial IoT systems, and the fact that we often need to act on that data really quickly in, say, a process-control plant or something like that. We can’t do everything just in the cloud; we need the on-premises side. And as IoT grows and grows, we think that will reinforce the trend back toward a stronger on-premises presence.

Kenton Williston: Yeah, and I think one of the things that’s interesting there, too, is, like you said, there definitely does tend to be a pendulum, going back basically to the earliest days of computing, between whether things are centralized or distributed. But one thing that’s a little different about our current situation is that the concepts of cloud architecture are showing up everywhere. It’s in the public cloud, of course, but on-prem systems are starting to look very much like the cloud in terms of things like containers, and so are edge systems. In fact, I think one of the most important things happening right now from an architectural perspective is moving all of your software to the containerized, as-a-service cloud model. Then, as these things continue to evolve and workloads move from one place to another, you have the flexibility to deploy those workloads in the public cloud, in a private cloud, on-prem, or at the edge, wherever it makes the most sense for whatever you happen to be doing at the moment.

Martin Garner: And you can then manage them centrally. You can do things like optimization across computing stacks. And so it gives you a lot more flexibility.

Kenton Williston: Yes, yes, absolutely. And I think there are some really good examples of this happening in, for example, the machine learning and AI space, where people are developing the models in the cloud and then bringing down the inference engines, which actually execute the work, into a more local environment, perhaps even a very lightweight environment at the edge. And I think that’s a good place for me to ask where you see those technologies of AI, machine learning, and computer vision going in the future.
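That train-in-the-cloud, infer-at-the-edge pattern is straightforward to sketch. Below is a minimal, hypothetical Python example; the scikit-learn model, file name, and feature shape are stand-ins for illustration, not anyone’s production setup. The cloud side exports a portable ONNX file, and the edge side needs only the lightweight ONNX Runtime to execute it.

```python
# Minimal sketch of the cloud-to-edge pattern: train centrally, export a
# portable model, then run a lightweight inference engine locally.
# The model, file name, and feature shape are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# "Cloud" side: train on whatever data is available and export to ONNX.
X = np.random.rand(200, 4).astype(np.float32)   # stand-in training data
y = (X[:, 0] > 0.5).astype(int)                 # stand-in labels
model = LogisticRegression().fit(X, y)
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# "Edge" side: only the compact ONNX Runtime dependency is needed here,
# so this part can run in a very lightweight environment.
session = ort.InferenceSession("model.onnx")
sample = np.random.rand(1, 4).astype(np.float32)
print(session.run(None, {"input": sample}))
```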

Martin Garner: Yeah, another great question, and this links back to our idea that there’ll be a huge focus on the intelligence rather than the IoT itself. What we see at the moment is a very strong focus on the tools for machine learning and AI: making it easier for ordinary engineers in ordinary companies around the world to choose algorithms and set them up for use, to build them into your development and your DevOps, and to have a whole life cycle for your machine learning, just like you do with your other software. But still, one of the things we’re seeing is that the machine learning and AI world is full of component technologies. It’s very similar to the way the IoT world was a few years ago.

And so it’s actually really challenging for ordinary people to choose and use systems in that area. So we’re also expecting a lot more focus on providing finished systems for machine learning and AI, quite similar to the way Intel did Market Ready Solutions for IoT. We may even see some of the finished AI increasingly bundled into things like market-ready solutions. Now, Intel’s not the only one; others have made a start on this as well. For example, AWS has Panorama video analytics appliances, which you can just buy on Amazon and plug in; they come with the algorithms, and you can get going very quickly. AWS does something similar for predictive maintenance with its Monitron system. We’re also expecting the role of smaller and specialist systems integrators to grow a lot here, so that they can take on a lot of the training and configuration for you, because it’s still true that the widgets you make in your factory are not the same as the ones other people make.

And so you need to train the models on images of what you are doing. There’s just a little caveat here: it’s a large task to get thousands and thousands of specialist systems integrators, many of whom originally trained as installers of surveillance systems and may not be very skilled in machine learning, up to speed in this area. We have to get them comfortable and competent in training machine learning models, because it’s going to be a big part of their role going forward. And then just one thing that follows on: you talked about AI at the edge. One of our other predictions, left over from a couple of years ago, is that we will move over time toward much more distributed training rather than centralized training.

Kenton Williston: Yeah, these are all very interesting points. One of the things that really strikes me here goes back to who is doing what, and the fact that we’re seeing these technologies become so pervasive everywhere you look. People have been talking about this idea of digital transformation for some years now, to the point that I think it’s worn out its welcome to a certain extent. But it’s true that everything’s being digitized, and especially this year and going forward, people are looking at a world where increasingly everything’s connected, with distributed intelligence everywhere. This certainly does introduce a lot of complexity: who’s actually going to do the work of adding this intelligence everywhere, and how do these systems all talk to one another? You mentioned, I think quite rightly, the challenges when you start talking about AI; you’ve got a lot of different point solutions, and how do you get them all to work with each other?

And we had, for example, a very interesting conversation with a company called Plainsight in one of our most recent podcasts, talking about this very challenge: you’re not going to have data scientists sitting in every part of an organization, and in fact many organizations won’t have them at all. So how in the world do you go about actually deploying all these great AI capabilities that are out there right now? I agree that trusted partners that enterprises can rely on, like systems integrators, will be very important going forward. And, to your point, it will be very important for folks who have been doing a lot of the physical installation work, and specialize in those areas, to team up with partners who understand this technology deeply, so they can go to their enterprise customers and handle these very complex installations and integrations that bring a lot of different things together.

Martin Garner: Yeah, that’s right. And then having done that, you then need to trust it enough to run your operations off it. And that’s a different question, isn’t it?

Kenton Williston: Yes, it absolutely is. And on that point, there are a lot of efforts happening right now to make AI, in particular, trustworthy and democratized, so that it is more accessible and so that enterprises can put their trust in these systems. I know there are significant efforts from the likes of IBM, Microsoft, AWS, and Google in these areas. Can you speak a little bit to where you see these efforts going and what kind of impact they will have?

Martin Garner: I think this is one of the most fascinating areas in the whole tech sector at the moment. And for sure those players have been leading the technical development of AI and the tools around it, and things like TensorFlow and PyTorch and so on have had a huge impact in making all of this technology much more available and accessible to people who maybe aren’t fully schooled in the technology behind the scenes. And that really has helped the democratization. But I want to sound just a little bit of a warning here, because we think AI is a special category of technology where small assumptions or biases introduced by a designer or an engineer at the design stage can cause huge difficulties in society. We need more layers of support and regulation in place before we can all be comfortable that it’s being used appropriately and properly and we’re all technically competent and so on.

Kenton Williston: Yeah, for sure. And there are good examples of that even in our daily lives. There are lots and lots of firsthand experiences we’re starting to have of AI not behaving the way we expect it to. So I think you’re absolutely right that a lot of work needs to be done to ensure that these systems are being used appropriately by good actors and are doing the things we expect them to do. That’s a pretty tough challenge.

Martin Garner: Yeah, and I think we can start to see what those need to be. And there are already quite a few initiatives across some of these areas. So one key aspect is the formation of ethics groups that are not tied to specific companies. I think we need to take away the commercial-profit focus, and focus purely on the ethics before we can really trust totally in that. It’s also clear that to build strong user trust, we’re going to need a mix of other things like external regulation. When you think about cars and traffic, there’s an awful lot of government regulation that goes with that. But we also need then industry best practices and standards, and we need sector-level certification of AI systems. A bit like crash testing of cars. We’re going to need something like that for AI systems.

Then we need to certify the practitioners. There have got to be professional qualifications for people who develop AI algorithms. Maybe we need a Hippocratic oath and things like that. There are all these layers that we’re going to need. They’re being developed and they’re being introduced, but we’re just not there yet. So one prediction in this area that we have is that 80% of large enterprises will formalize human oversight of their AI systems by 2024. In other words, we’re not just going to leave the AI to get on with it. We’re going to need AI compliance officers, we’re going to need QA departments. It’s going to be a whole layer of quality control that we put in place with human oversight before we let it loose.

Kenton Williston: Yeah, for sure. And one industry that comes to mind in particular, when we’re thinking about needing to take extra care to make sure our technology is doing what we want it to do, is the healthcare industry. First of all, kudos to all the folks who’ve been working incredibly hard in the healthcare sector, not just the technologists but the care providers. This has been such a difficult time, and I really cannot express enough gratitude for all the folks who have put everything on the line there. Really commendable. A big part of that from the technology side is that things like telehealth, telemedicine, and virtual care in general have accelerated incredibly quickly, and I think it’s an amazing accomplishment by everyone working in that space. But I think there’s a lot left to do still, and there are definitely questions in my mind about how we keep pushing this forward in a way that’s truly beneficial to everyone, patients and care providers alike.

Martin Garner: Yeah, exactly. And I echo your thanks to the healthcare systems in various countries around the world. The effort they’ve put in, the changes they’ve made, and the support they’ve given are unbelievable, and we owe them a huge debt of gratitude. But coming back to the technology, there are a few things that stand out in terms of IoT and the adoption of machine learning and things like that. One is that it’s very easy to talk about healthcare as if it were one thing, but it’s really not. It’s enormous and diverse, with many, many different areas, each perhaps with its own compliance requirements. Also, as you mentioned, it’s historically been a bit slow to change, but COVID has really kick-started the adoption of a lot of new ways of doing things. So we have made a lot of progress over the last two years, but my sense is there’s still a long shopping list of opportunities enabled by IoT, machine learning, and AI that we haven’t really got going on in a big way yet.

And just one example I’ve come across is tracking machines in the hospital. Trying to find machines can waste a lot of valuable time for doctors and nurses, so hospitals often over-provision: they put one machine per ward when the usage doesn’t really support that, and it’s just wasteful. If the machines could be tagged and geolocated within the whole hospital, they would become easy to find. We’ve seen examples where that generates capital savings of 10% to 20% on that type of machine, and that can be a really significant amount of money. So we think there’s a lot more to come in this area, and the great news is that hospitals and the healthcare system have been through so much change that they’re now much more ready to adopt new systems.

Kenton Williston: Yeah, for sure. That point you made about the ability to even locate these devices is huge. And beyond that, we’re seeing some of the autonomous technologies we’ve written about on insight.tech. I think healthcare settings are an extraordinarily good application for autonomous vehicles, not in the sense of a car, of course, but self-guided nurse carts and drug-delivery systems and things like that, so that rather than having the providers go find these things, they come directly to the providers. It’s a really incredible opportunity.

Martin Garner: Absolutely, along with some interesting challenges: how do they use the lift or elevator to get up to the fourth floor? None of that comes easy, but it’s a great opportunity. You’re right.

Kenton Williston: Absolutely, and I should mention here too that I mentioned a couple of our earlier podcasts and forthcoming podcasts, and of course our listeners are very strongly encouraged to subscribe to this podcast series so they can keep up with all that. But I would also very strongly encourage our listeners to go check out insight.tech. There’s just a tremendous amount of very in-depth content on all these things we’ve been talking about, not least of which is the report that you yourself have created with these predictions for the coming year. Definitely worth taking a read of that for sure.

Martin Garner: Hope so.

Kenton Williston: I certainly think so. So on that point, I think a good place to wrap our conversation would be the bigger picture of where you see things trending. Something that caught my attention was the idea that the Internet of Things will become more of an Intelligence of Things. Can you explain what that means to you, and why you think this is happening?

Martin Garner: It’s interesting, isn’t it? I’ve always thought that the label Internet of Things, or IoT, is a bit of a rubbish label, because it really doesn’t describe the full complexity of what’s going on underneath. I think now, though, there’s quite a good understanding that IoT is part of digital transformation. You mentioned that’s maybe an overused phrase, but we know what it means, and it’s a big thing that’s going on. IoT is part of it, but actually very few people buy IoT. What they buy is a solution to a business issue, and somewhere inside that is IoT, used as a technology to make it work. The real value of IoT is not in the connection we’ve created with the things; it’s in how you use the data that you now have access to. And if you think about a smart city, for example, with intelligent traffic management or air quality monitoring, then it’s quite obvious that you are more worried about the data than the connection.

And that’s where the value is. And it’s equally true with smaller systems like computer vision on a production line. You don’t care much about the camera, you do care about what it’s telling you, and that’s the distinction. The trouble is we are now generating so much of this data that we increasingly need lots of machine learning and AI to analyze it, and we have to do it at the edge to do it really quickly and so on. So getting the maximum value out of those systems is going to become all about the intelligence you can apply to the data. Probably a lot of that will be at the edge. Now we think there are going to be three main areas for this: obviously monitoring something is useful, but we still need good analytics to help us focus on the right data and not get distracted.

Controlling something is more useful with suitable intelligence; as we said about streetlights, we can make huge savings by controlling these things better. But actually optimizing is even more useful. With suitable intelligence, we can now optimize a machine, a system, or a whole supply chain, maybe in ways we never could before. So with the Internet of Things, we now understand pretty much what it is and how you go about it, and there’s a lot of opportunity. We think the term is going to fade away, and there’ll be much more focus on the intelligence, the way you use it, and the value you get from exploiting the data you’ve got. Now, when we think about specific sectors, like manufacturing or retail or healthcare, a few things jump out. It’s quite easy to get caught up in the detail of getting all of these things connected. Should we use Wi-Fi, should we use 5G, wired connections?

Of course that’s important, but only up to the point where it’s working, and then you can move on. We will need suitable systems for aggregating and analyzing the data: data lakes, analytics, digital twins, machine learning, AI, and so on. Many companies are already well down this path, but there’s still a lot to learn. Each of those areas is quite big and complicated, and you’ll need new technologies and new skills to get really good at them. But the other bit is that, even assuming you get all of that done, a lot of the value comes from then applying it across the organization and having it adopted in the various systems that you use. And that’s a people issue more than a technology issue. We’re back to one of the truisms of digital transformation, which is that success depends on taking people with you more than on the technology you’re using to make it all work. For me, that’s a really interesting point. It’s ultimately a people issue.

Kenton Williston: Yeah, I couldn’t agree more. And I think, to return to an earlier point, you were talking about the who does what, and I think it’s going to be incredibly important as we move forward into this increasingly complex world to have an ecosystem of players who you can count on, who understand the kind of challenges that your organization is facing, where the technology is heading, how to deploy these things. And one of the things we’ve talked an awful lot about on the insight.tech site is how to work with folks who’ve traditionally been thought of as merely distributors of technology.

I’m thinking of the Arrows and SYNNEXes and Tech Datas of the world. Their role has changed a lot, to where they’re gaining an incredible amount of internal expertise on their customers’ needs. They’re able to provide the more complete, Intel Market Ready Solutions type of offering you mentioned, and they’re partnering very actively with the sort of systems integrators who do the physical installation and have those relationships with the enterprises. And I think it’s just going to be very important for all of these players to come together in a very collaborative way to really unleash all these possibilities we’ve been talking about today.

Martin Garner: I absolutely agree. And I think the ecosystem angle is a really important theme to bring out here. Very few companies can do this on their own, and most depend on working successfully with others. There’s also an interesting organizational point for a lot of IoT suppliers. From what I can tell, and I haven’t done a big survey on this yet, most IoT suppliers are 80% engineers working on the product and 20% everything else, which includes HR, marketing, sales, and so on. I kind of think it needs to be the other way around. They need a big customer-engagement group, where if you’re in healthcare, you employ ex-nurses and ex-doctors and what have you, people who really understand what’s going on within the customer organizations and who feed that back into the product. I think most IoT suppliers haven’t really got there yet, but it’s something we see coming before too long.

Kenton Williston: Absolutely. So with that, Martin, I really want to thank you for your time and your insights today. This has been a really fascinating conversation.

Martin Garner: Well, and thank you too. And thank you to Intel for hosting this and for having me along. It’s always a pleasure dealing with you guys, and I hope it’s been an interesting session.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from CCS Insight, follow them on Twitter at @CCSInsight, and on LinkedIn at CCS-Insight.

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design. 

Smart Digital Signage Powers Up EV Charging Stations

As consumers and governments push for a cleaner, greener environment, sales of electric vehicles (EVs) are soaring and automakers are retooling factories to ramp up supply. What’s missing from the picture is charging stations. There aren’t nearly enough to meet the coming demand, and concerns about setup costs and profitability have made many business owners reluctant to install them.

Adding smart digital signage to charging kiosks resets the business model, allowing companies to recoup their costs while learning more about customers and increasing sales of other products.

“With digital signs, the EV charger becomes the means to an end. As people are getting a charge, they watch streaming content that can make money for the business,” says Chris Northrup, Vice President of Digital Media and Networking Strategies at USSI Global, a broadcast, network, and digital signage solution provider.

EV charging kiosks with digital signage can be used by many types of businesses—not just service stations.

“A kiosk can be any place where people can park for 20 or 30 minutes,” Northrup says. “Quick-serve restaurants, supermarkets, shopping centers, movie theaters, hotels, theme parks—all are good candidates.”

The Key to Success: A Computer Vision System

The USSI Global EV charging kiosks are shaped like gas pumps, with 55-inch, attention-getting digital screens. The color display is designed to remain vivid even in bright sunlight.

But what really makes the screens effective is the computer vision (CV) technology behind them. A pinhole-sized, CV-enabled digital camera embedded in the screen collects footage of charging customers and passersby. AI algorithms running on Intel® processors analyze this information in real time, determining gender, relative age, and mood—and for charging customers, the type of vehicle they’re driving.

To maintain customer privacy, facial images are not stored on computers—only the digital information about them is collected and processed.

The algorithms then trigger sign content likely to appeal to individuals or groups watching the screen. For example, it might show Tesla accessories to a Tesla owner. Others may see demographic-based information about health or fashion products. The system measures how long people watch and whether they turn away, quickly changing content that isn’t deemed effective to something more suitable.

“The signs are smart enough to start playing more of the kind of content that has caught a user’s attention. So if someone is drawn to sports, it will start showing more Nike ads,” says Amanda Flynn, USSI Global Vice President of Customer Relations and Business Development.
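In rough terms, the triggering logic described above is a simple feedback loop: pick content from the detected attributes, measure dwell time, and rotate content that isn’t holding attention. The sketch below is hypothetical; the attribute names, ad catalog, and dwell threshold are invented for illustration, not USSI Global’s actual software.

```python
# Hypothetical sketch of the content-triggering loop described above.
# Attribute names, the ad catalog, and the dwell threshold are invented;
# a real system would drive these from computer vision model outputs.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewerProfile:
    age_band: str            # e.g., "25-34", inferred by the CV model
    vehicle: Optional[str]   # e.g., "tesla" for charging customers

AD_BY_VEHICLE = {"tesla": "tesla_accessories.mp4"}
AD_BY_AGE_BAND = {"25-34": "fashion_promo.mp4"}
FALLBACK_AD = "store_promo.mp4"
MIN_DWELL_SECONDS = 5.0      # invented threshold for "effective" content

def pick_ad(profile: ViewerProfile) -> str:
    # Vehicle-based targeting wins, then demographics, then the fallback.
    if profile.vehicle in AD_BY_VEHICLE:
        return AD_BY_VEHICLE[profile.vehicle]
    return AD_BY_AGE_BAND.get(profile.age_band, FALLBACK_AD)

def show_content(profile: ViewerProfile, still_watching) -> str:
    # Only derived attributes are used here; no facial images are kept.
    ad = pick_ad(profile)
    start = time.time()
    while still_watching():  # fed by the camera's gaze detection
        time.sleep(0.5)
    dwell = time.time() - start
    # Rotate to something more suitable if the viewer turned away quickly.
    return ad if dwell >= MIN_DWELL_SECONDS else FALLBACK_AD
```

In this toy version, a Tesla driver who keeps watching sees the accessories spot, while a viewer who turns away within five seconds triggers a rotation to the fallback content.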

Companies can also use the screens to entice viewers into their premises with on-the-spot promotions, such as offering free coffee with the purchase of a food item. “Customers come back out, sit in their car, and eat and drink what they just bought while they’re waiting for a charge,” Northrup says.

Digital promotions can be scheduled in advance. For example, a charging station operator can arrange to run a New Year’s special and have the content automatically return to normal the next day. Operators control content delivery remotely and can select content for multiple screens in different locations with the press of a button.
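Scheduling with automatic reversion boils down to a date-window lookup. The sketch below is a generic, hypothetical illustration of the idea, not USSI Global’s actual system: content activates for its window and falls back to the default playlist the next day without further intervention.

```python
# Generic, hypothetical sketch of date-windowed content scheduling with
# automatic reversion; not USSI Global's actual system.
from datetime import date

DEFAULT_PLAYLIST = ["store_promo.mp4"]
SCHEDULE = [
    # (first day, last day, playlist) -- e.g., a New Year's special
    (date(2021, 12, 31), date(2022, 1, 1), ["new_years_special.mp4"]),
]

def playlist_for(today: date) -> list:
    for first_day, last_day, playlist in SCHEDULE:
        if first_day <= today <= last_day:
            return playlist
    return DEFAULT_PLAYLIST  # content "returns to normal" automatically
```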


Smart Digital Signage Increases Profitability

Charging can take 20 to 30 minutes or more, giving businesses plenty of time to display money-making ads to a captive audience. But the content doesn’t have to be all advertisements. USSI Global is working with broadcast networks to incorporate television programming, which could range from cooking and home improvement shows to live news and local sports coverage.

“Maybe in Georgia you’re playing a Georgia Bulldogs game, and in Alabama you’re showing the Crimson Tide,” Flynn says. The large screens can also be divided to simultaneously show programs and related ads, such as for team merchandise.

Over time, analytics will reveal trends about people who frequent the charging station and the surrounding area. That will enable companies to create even more effective content for their signs and adjust the menus or products in their adjacent businesses to better suit customers, boosting profitability.

The combination of advertising and increased business volume will help charging station operators recoup setup costs quickly and cover the expense of providing a charge, Northrup says: “The charging can be free because the revenue generated from the content you display offsets the cost.”

Free service is a competitive advantage that will draw more charging customers, who may also spend money at the business. With additional eyeballs viewing the ads, advertisers may also pay operators more to display them.

Getting Started with Charging Stations and Digital Signage Displays

For businesses that would like to deploy charging stations, USSI Global provides a total service model from product to permitting and installation to infrastructure. It also provides post-installation service, fixing problems such as a disrupted internet connection, a failed screen, or a kiosk that gets bumped by a vehicle.

In addition, the company collects and processes data from the digital signs and sends the information to charging station owners, who can use it to create content and settings, including adjusting the parameters for ad changes. While some companies produce their own content, others rely on third-party providers or work with USSI Global, which has partnerships with content providers.

A Cleaner, Brighter Future

As the need for charging stations grows, enhancing them with digital signs could provide the incentive operators need to fill the demand. “I think you’ll see more and more businesses with two or three of them in front of their place,” Northrup says.

And as AI becomes more sophisticated, it will lead to deeper and more valuable customer insights.

“AI started out giving answers to yes-or-no questions and now it measures demographics and mood. Capabilities will become greater over time, enabling more complex decisions about content triggering,” Northrup says. For charging stations with AI-enabled digital signs, that means one thing: “There’s nowhere to go but up.”

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

AI-Powered Retail Digital Signage Transforms Superstores

If you shop for groceries in a superstore, you know it can be overwhelming. Endless aisles and options to choose from. Do you make a beeline to the items you need or wander around looking for the best deals?

The retailers who operate these stores want to know how you shop. Running on paper-thin margins, they need to optimize their marketing strategies and practices to improve the bottom line. But traditional in-store methods—from taste testing, to flyers, to static signage—aren’t doing the trick. And the benefits of online shopping data analytics aren’t available in street-side retail.

That’s why innovative businesses are transforming their digital signage displays into smart retail solutions with the latest artificial intelligence and computer vision technologies.

AI-powered retail digital signage offers high-value information that store managers have not had in the past: which advertisements are the most eye-catching, where shoppers dwell, and which areas have the highest traffic flow. Perhaps most important is real-time data about customer demographics, such as age range and gender. All these factors allow content to be tailored on the spot while trends are monitored over time.

“With the help of computer vision and edge AI computing technology, retailers can review their marketing effectiveness with a bigger scope,” says Kim Huang, Sales Manager at NEXCOM, a global leader in IoT solution development. “They can evaluate return on investment, do revenue comparisons before and after a certain marketing campaign, and quickly optimize the advertisement accordingly.”

Edge AI Power in Action

One of the largest supermarket chains in Asia needed an economical way to understand customer behavior and implement more targeted marketing efforts. The company worked with NEXCOM, deploying its AI Precision Marketing solution with 2,000 digital display screens across 200 stores.

The solution makes it possible to measure, anonymously, how long a shopper engages with an advertisement, the shopper’s demographics, and what kinds of products hold their interest or go into their shopping cart. Not only did the retailer increase sales 30% over one year, but it also gained a new revenue stream by selling brand advertising.


The client specifically required a stable, fanless system that could run video cameras 24/7. The heart of the platform is the AIEdge-X® 100, which drives the content for two back-to-back digital displays while simultaneously handling audience measurement via two independent cameras. In this case, the retailer has 10 screens in each store.

“The software integrated inside this hardware doesn’t just work as a digital signage player, there is also the audience measurement in the background,” says Huang. “All the data is processed at the edge, and then uploaded to the cloud server for generating different kinds of reports to help the business owner make better decisions.”

The AIEdge-X® 100 is powered by an Intel® Celeron® processor and the NEXCOM AIBooster-X2 deep-learning accelerator card, which includes two Intel® Movidius™ Myriad™ X VPUs. This processing power makes the simultaneous operation of two cameras possible. The edge gateway also includes the Intel® OpenVINO™ toolkit and third-party 3D software for anonymized facial recognition.

“To do the computer vision at the edge you need quite a high-power computer system,” says Huang. “In this case, the Celeron processor combined with our adapted Movidius VPU provided the performance required.”
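As a rough illustration of how that division of labor looks in software, here is a minimal, hypothetical OpenVINO sketch. The model file and input shape are stand-ins, and the exact Python API varies by OpenVINO version; the key detail is that compiling for the "MYRIAD" device is what sends inference to a Movidius VPU rather than the host CPU.

```python
# Hypothetical sketch: running a vision model on a Movidius VPU with
# OpenVINO. The model file and input shape are stand-ins, and API
# details vary with the OpenVINO version shipped on the appliance.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")       # stand-in IR model
# "MYRIAD" targets a Movidius VPU; "CPU" would use the Celeron instead.
compiled = core.compile_model(model, device_name="MYRIAD")
output_layer = compiled.output(0)

frame = np.zeros((1, 3, 320, 544), dtype=np.float32)  # stand-in camera frame
detections = compiled([frame])[output_layer]
# Only derived results (counts, demographics) go to the cloud for reports;
# the frames themselves stay on the edge device.
print(detections.shape)
```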

What’s next for the company? With more than 1,000 grocery stores, the success of this project has the retailer planning to roll out the AI Precision Marketing solution in another 200 locations over the next year.

Smart Digital Signage Provides Data You Can Count On

Big data analytics also provides a wide range of other business benefits. Marketing efforts can be tailored to the time of day and the type of customers coming into the store. For example, you might run a hot pizza and cold beer promotion after 6 p.m. targeted to office workers. Or a special on fresh fish and bread right out of the oven in the afternoon for stay-at-home parents.

Analyzing the purchasing habits of customer profiles informs messaging and content design. And all this dynamic content can be pushed out from a central management control system—to all stores or just one.

And there’s another significant advantage that you might not expect from a smart digital signage solution. Continued AI-enabled data collection can improve supermarket operations and cost savings. Ongoing information on purchasing patterns informs supply and demand forecasts so the right products are on the right shelves at the right time.

New Opportunities for Smart Retail Systems Integrators

For systems integrators (SIs), the AI Precision Marketing Solution is up and running almost like an off-the-shelf product. Primarily the SI only needs to make sure the client has an internet-ready network connection.

“Upon arrival of the equipment, the integrator only needs to pop in the camera, connect the screens to AIEdge-X® 100, and connect the AIEdge-X® 100 to the VPN router,” Huang says. “All data sent from the edge to the cloud goes through a VPN channel for security. After that, they adjust the camera angle, and the system is ready to go right to work.”

SIs that may have deep experience in serving retail customers but lack AI development skills now have a solution that offers new opportunities with existing and new customers, Huang explains.

Smart Retail Has a Bright Future

The future of AI and vision in supermarkets seems almost endless. The more retailers know about their customers, the better they can serve them with exceptional shopping experiences. And when digital displays are interactive, information becomes a two-way street.

People want to know more about the food they purchase: where their produce was grown, healthy food options, price comparisons, and much more. The latest innovations in AI, CV, and digital signage displays are making these use cases a reality today and into the future.

 

This article was edited by Christina Cardoza, Senior Editor of insight.tech.

Democratizing AI for All with Plainsight and Intel®

Elizabeth Spears & Bridget Martin


When you think about AI, you don’t typically think about agriculture. But imagine how much easier farmers’ lives would be if they could use computer vision to track livestock or detect pests in their fields.

Just one problem: How can an enterprise leverage AI if they don’t already have a team of data scientists? This is a pressing question not only in agriculture but also in a wide range of industrial businesses, such as manufacturing and logistics. After all, data scientists are in short supply!

In this podcast, we explore how companies can deploy computer vision with their existing staff—no expensive hiring or extensive training required. We explain how to democratize AI so non-experts can use it, the possibilities that come from making AI more accessible, and unexpected ways AI transforms a range of industries.

Our Guests: Plainsight and Intel®

Our guests this episode are Elizabeth Spears, Co-Founder and Chief Product Officer for Plainsight, a machine learning lifecycle management provider for AIoT platforms, and Bridget Martin, Director of Industrial AI & Analytics of the Internet of Things Group at Intel®.

In her current role, Elizabeth works on innovating Plainsight’s end-to-end, no-code computer vision platform. She spends most of her time focusing on products offered by Plainsight, particularly thinking of what new products to build, what order to build them in, and why they are needed.

Bridget focuses on building up the knowledge and understanding that occur during the process of adopting AI, especially in an industrial space. Whether it is manufacturing or critical infrastructure, Bridget and her team at Intel® spend their time working to develop solutions that address the challenges of incorporating AI into an industrial ecosystem.

Podcast Topics

Elizabeth and Bridget answer our questions about:

  • (2:19) Plainsight’s rebranding and evolution from Sixgill
  • (7:32) The rapid evolution of AI and computer vision
  • (10:08) The unexpected use cases coming from advancements of AI
  • (13:33) How companies can help make AI more accessible
  • (16:07) The biggest challenges industries face when adopting AI
  • (18:31) How to get organizations to start thinking differently about AI
  • (21:30) The benefits of democratizing AI and computer vision
  • (23:50) How organizations can best get started with AI

Related Content

To learn more about the future of democratizing AI, read Build ML Models with a No-Code Platform. For the latest innovations from Plainsight, follow them on Twitter at @PlainsightAI and on LinkedIn at Plainsight.

 

Transcript edited by Christina Cardoza, Senior Editor for insight.tech.

 

Apple Podcasts  Spotify  Google Podcasts  

Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode we talk to leading experts about the latest developments in the Internet of Things. Today I’m discussing the democratization of AI with Elizabeth Spears, Co-Founder and Chief Product Officer at Plainsight, and Bridget Martin, Director of Industrial AI and Analytics of the Internet of Things Group at Intel®.

AI already has a solid track record in manufacturing. But, as the technology constantly advances, it’s turning up in all kinds of rough-and-ready use cases. For example, AI is now being used to count cows! But AI is useless if no one understands how to use it, right? And it’s not very often you find data scientists on a farm.

So, in this podcast I want to explore the possibilities for AI in all kinds of rugged use cases—not just in agriculture, but across the industrial sector. We’ll discuss the importance of making AI more accessible, and the new and exciting use cases that come from its democratization. But before we get started, let me introduce our guests. Elizabeth, I’ll start with you. Welcome to the show.

Elizabeth Spears: Hi, thank you for having me. I’m excited to chat with you today.

Kenton Williston: Likewise. And can you tell me about Plainsight, and your role there?

Elizabeth Spears: Sure. So, here at Plainsight we have an end-to-end, no-code computer vision platform. So, it allows both large and small organizations to go from data organization, to data annotation, to training a machine learning model or a computer vision model, and deploying. So, deploying it on-prem, on the edge, or almost anywhere in between, and then being able to monitor all of your computer vision deployment in a single pane of glass. My role is the Co-Founder and Chief Product Officer. So, basically everything around what we build, in what order, and why, is really where I spend most of my time—along with my amazing team.

Kenton Williston: I am really looking forward to hearing about all the details there. That sounds very, very interesting. And one thing I’m curious about upfront, though, is I had known your company as Sixgill, and I’m wondering why it’s been rebranded to Plainsight, and what that has to do with the company’s evolution.

Elizabeth Spears: Yeah, good question. So, like a true product-focused company, we listened to what our customers wanted and needed. And we basically took a transformational turn from an IoT platform to a computer vision platform. So, what we kept hearing from our customers was that they wanted more and more AI, and then, specifically, more computer vision. So we took this foundation that we had of a platform—an IoT platform that was used for high-throughput enterprise situations—and we made it specialized for both large and small companies to be able to build and manage their computer vision solutions, really 10x faster than most of the other available solutions out there. So, we’re talking about kind of implementing the same use case with even higher accuracy in sort of hours instead of months. And that’s really been our focus.

So, the name—the rebrand for the name Plainsight—really came from this “aha” moment that we have with our customers, where they often have thousands of hours of video or image data that’s really this untapped resource in the enterprise. And when we start talking to them about how the platform works, and all the big and small ways that data can provide value to them, they all of a sudden kind of get it. It’s almost like everything that I can see—if I sat there and watched it without blinking—all of that could actually just be identified and analyzed automatically. So they have this “aha” moment that we talk about as sort of the elephant in the room, which is—the elephant is our icon—where you start to understand how computer vision works, and you just can’t unsee all the places that it can be applied. So we’re bringing all of that value into Plainsight for our customers, and that’s where the name came from. Our icon, like I said, is that elephant that we’ve all really bonded to named Seymour, and he’s named that because he can “see more.” He can help see more in all that visual data.

Kenton Williston: Oh boy. So, I have to say, I have a well-earned reputation for being the dad-jokes guy, and I think I would fit right in.

Elizabeth Spears: Yeah. We were very pleased with that one internally.

Kenton Williston: Yeah. So, the evolution—that’s a really great story, and I think is reflective of where so much technology is going right now, and how central AI and computer vision in particular have become just everywhere. And I’m really excited to hear more from your perspective, as well as from Bridget’s perspective. So, Bridget, I’d like to say welcome to you as well.

Bridget Martin: Yeah. Thank you for having me. Super excited to be here.

Kenton Williston: So, tell me a little bit more about your role at Intel.

Bridget Martin: Well, so at Intel, obviously everybody really knows us for manufacturing chips, right? That is absolutely Intel’s bread and butter, but what I loved hearing Elizabeth talk about just now is the real need to be connected to and understand the ultimate consumers of these solutions, and ultimately of this technology. And so the main function of my team is really to have and build up that knowledge and understanding of the pain points that are occurring in the process of adopting AI technology in the industrial space—whether it’s manufacturing or critical infrastructure—and really working with the ecosystem to develop solutions that help address those pain points. Ultimately, top of mind for me is the complexity of deploying these AI solutions. Like Elizabeth was saying, there’s such great opportunity for capabilities like computer vision in these spaces, but it’s still a really complex technology. And so, again, partnering with the ecosystem—whether it’s systems integrators or software vendors—to help deploy into the end-manufacturer space, so that they can ultimately take advantage of this exciting technology.

Kenton Williston: Yeah. And I want to come back to some of those pain points, because I think they’re really important. I think what both your organizations are doing is really valuable to solving those challenges. And I should also mention, before we get further into the conversation, that the insight.tech program as a whole and this IoT chat podcast are Intel publications. So that’s why we’ve gathered you here today. But, in any case, while those challenges are very much something I want to talk to you about, I think it’s worth doing some framing of where this is all going by talking about what’s happening with the applications. Because, like Elizabeth was just saying, we’re at a point where, if you can see it—just about anything you can see, especially in an industrial context—there’s something you can do with that data from an AI–computer vision point of view. And, Bridget, I’m interested in hearing what you are seeing in terms of new applications that you couldn’t do five years ago, a year ago, six months ago. Everything’s moving so fast. Where do things stand right now?

Bridget Martin: Yeah. Well, let’s kind of baseline in where we’re ultimately trying to go, right? Which is the concept of Industry 4.0, which is essentially this idea around being able to have flexible and autonomous manufacturing capabilities. And so, if we rewind five, ten years ago, you have some manufacturers that are what we would consider more mature manufacturing applications. And so those are scenarios where you already see some automated compute machines existing on the factory floor—which are going to be, again, automating processes but also, most critically when we’re talking about AI, outputting data—whether it’s the metadata of the sensors, or the processes that that automated tool is performing. But then you also have a significant portion of the world that is still doing a lot of manual manufacturing applications.

And so we really have to look at it from these two different perspectives. Where the more mature manufacturing applications that have some automation in pockets, or in individual processes within the manufacturing floor space—they’re really looking to take advantage of that data that’s already being generated. And this is where we’re seeing an increase in predictive maintenance-type applications and usages—where they’re wanting to be able to access that data and predict and avoid unplanned downtime for those automated tools. But then when we’re looking at those less mature markets, they’re wanting to skip some of these automation phases—going from an Industry 2.0 level and skipping right into Industry 3.0 and 4.0 through leveraging computer vision, and enabling now their factory to start to have some of the same capabilities that we humans do, and where they’re, again, deploying these cameras to identify opportunities to improve their overall factory production and the workflow of the widgets going through the supply chain within their factory.

Kenton Williston: Yeah. I think that’s very, very true, everything you’ve said. And I think one of the things that’s been interesting to me is just seeing that it’s not just the proliferation of this technology, but it’s going into completely new applications. The use cases are just so much more varied now, right? It’s not just inspecting parts for defects, but, like Elizabeth was saying, basically anything that you could point a camera at, there’s something you can do with that data now. And so, Elizabeth, I’d love to hear some more examples of what you were seeing there. And is it really just the manufacturing space? Or is it a wider sphere of applications in the rugged industrial space where you’re seeing all kinds of new things crop up?

Elizabeth Spears: It’s really horizontal across industries. We see a lot of cases in a lot of different verticals, so I’ll go through some of the fun examples and then some of my favorites. So, one of the ones that is really cool, that’s sort of just possible, is super resolution—a method called super resolution. And one of the places it’s being used, or they’re researching using it, is for less radiation in CT scans. So, basically what this method does is, if you think of all of those FBI investigation movies, where they’re looking for a suspect and there’s some grainy image of a license plate or a person’s face, and the investigator says, “Enhance that image.” Right? And so then all of a sudden it’s made into this sharp image and they know who did the crime, or whatever it is. That technology absolutely did not exist most of the time that those types of things were being shown. And so now it really does. So that’s one cool one.

Another one is simulated environments for training. So, there’s cases where the data itself is hard to get, right? So, things like rare events, like car crashes. Or if you think about gun detection, you want your models around these things to be really accurate, but it’s hard to get data to train your models with. So just like in a video game, where you have a simulated environment, you can do the same thing to create data. And people like Tesla are using this for crash detection, like I mentioned, and we’re using it as well for projects internally. My favorite cases are just the really practical cases that give an organization quick wins around computer vision, and they can be small cases that provide really high value. So, one that we’ve worked on is just counting cattle accurately, and that represents tens of millions of dollars in savings for a company that we’re working with. And then there’s more in agriculture—where you can monitor pests. And so you can see if you have a pest situation in your fields and what you can do about it. Or even looking at bruising in your fruit—things like that. So, it’s really across industries, and there’s so much, well, low-hanging fruit, as we were talking about agriculture, where you can really build on quick wins in an organization.

Kenton Williston: It’s just all over the place, right? Anything that you can think of that might fall into that category of an industrial, rugged kind of application, there’s all kinds of interesting new use cases cropping up. And one of the things that I think is really noteworthy here is a lot of these emerging applications, like in the agricultural sector, are in places where you don’t traditionally think of there being organizations with data science teams or anything like that. Now, I will say a little aside here, that sometimes people think of farming as being low tech, but really it’s not. People have been using a lot of technology in a lot of ways for a long time, but nonetheless, this is still an industry that’s not typically thought of as being a super high-tech industry, and certainly not one where you would expect to find data scientists. Which leads me to the question of how can organizations like this, first of all, realize that they have use cases for computer vision? And, second of all, actually do something to take advantage of those opportunities. So, Elizabeth, I’ll toss that over to you first.

Elizabeth Spears: Yeah. So this is kind of why we built the platform the way we did. First, hiring machine learning and data science talent is really difficult right now. And then, even if you do have those big teams, building out an end-to-end platform to be able to build these models, train them, monitor them, deploy them, and keep them up to date, and kind of the continuous training that many of these models require to stay accurate—it requires a lot of different types of engineers, right? You need the site-reliability guys. You need the big data guys. You need a big team there. So it’s a huge undertaking if you don’t have a tool for it. So that’s why we built this platform end-to-end, so that it would make it more accessible and simpler for organizations to just be able to adopt it. And, like I was saying, I feel like often we talk about AI as: the organization has to go through a huge AI transformation, and it has to be this gigantic investment, and time, and money. But what we find is that when you can implement solutions in weeks, you get these quick wins, and then that is really what starts to build value.

Kenton Williston: Yeah, that’s really interesting. And I think the general trend here is toward making the awareness of what computer vision can do for an organization so much more widespread, and getting people thinking about things differently. And then I think where a lot of folks are running into trouble is that, “Okay, we’ve got an idea. How do we actually do something with that?” And I think tools like Plainsight are a critical, critical part of that. But I know Intel’s also doing a lot of work to democratize AI. And, Bridget, I’d love to hear from your point of view what some of the biggest challenges are, and what Intel’s doing to address those challenges and make these capabilities more broadly available.

Bridget Martin: Yeah. I mean, like I was saying toward the beginning, complexity is absolutely the biggest barrier to adoption when we’re talking about AI in any sort of industrial application and scenario. And a lot of that is to some of the points that yourself and Elizabeth were making around the fact that data scientists are few and far between. They’re extremely expensive in most cases. And in order to really unleash the power of this technology, this concept of democratizing it and enabling those farmers themselves to be able to create these AI-training pipelines and models, and do that workflow that Elizabeth was describing as far as deploying them and retraining and keeping them up to date—that’s going to be the ultimate holy grail, I think, for this technology, and really puts it in that position where we’re going to start seeing some significant, world-changing capabilities here.

And so of course that’s, again, top of mind for me as we’re trying to enable this concept of Industry 4.0. And so Intel is doing a multitude of things in this space. Whether it’s through our efforts like Edge Insights for Industrial, where we’re trying to help stitch together this end-to-end pipeline and really give that blueprint to the ecosystem of how they can create these solutions. Or it’s even down to configuration-deployment tools, where we’re trying to aid systems integrators on how they can more easily install a camera, determine what resolution that needs to be on, help fine-tune the lighting conditions—because these are all factors that greatly impact the training pipeline and the models that ultimately get produced. And so being able to enable deployment into those unique scenarios and lowering the complexity that it takes to deploy them—that’s ultimately what we’re trying to achieve.

Kenton Williston: Yeah, absolutely. One thing that strikes me here is that there is a bit of a shift in mindset that I think is required, right? So, what I’m thinking about here is that I think in large part—because of the complexity that has traditionally been associated with AI and computer vision, and when organizations are thinking about what they can do with their data—I think oftentimes there’s kind of a top-down, “let’s look for some big thing that we can attack, because this is going to require a lot of effort and a lot of investment for us to do anything with this technology.” And I think there are certainly going to be cases where that approach makes sense. But I think there are a lot of other cases, like we’ve been talking about, and you’ve got all these very niched, specialized scenarios, where really the way that makes sense to do it is to just solve these small, low-hanging fruit problems one at a time, and build up toward more of an organization-wide adoption of computer vision. So, Elizabeth, I’d like to hear how you’re approaching that with your customers—what kind of story, how you’re bringing them this kind of “aha” moment, and what gets them to think a little bit differently about how they can deploy this technology.

Elizabeth Spears: Yeah. And I want to take a second just to really agree with Bridget there on how challenging and interesting some of the on-the-ground, real-world things that come up with these deployments are, right? So, it’s like putting up those cameras and the lighting, like Bridget was saying, but then things come up—like all of a sudden there’s snow, and no one trained for snow. Or there’s flies, or kind of all of these things that will come up in the real world. So, anyway, that was just an aside of what makes these deployments fun and keeps you on your toes. It’s really about expanding AI through accessibility, for us. AI isn’t for the top five largest companies in the world, right? We want to make it accessible not just through simplified tools, but also simplified best practices, right? So, when you can bake some of those best practices into the platform itself, companies and different departments within companies have a lot more confidence using the technology. So, like you’re saying, we do a lot of education in our conversations, and we talk to a lot of different departments. So we’re not just talking to data scientists. We like to really dig into what our customers need, and then be able to talk through how the technology can be applied.

Kenton Williston: To me, a lot of what I’m hearing here is you’ve actually got a very different set of tools today, and it requires a different way of thinking about your operations. Because you’ve got all these new tools and because they’re available to such a wide array of users, there are a lot of different ways you can go after the business challenges that you’ve got. And, Bridget, this brings me to a question—kind of a big-picture question: what do you see as the benefits of democratizing AI and computer vision in this way, and making these sorts of capabilities available to folks who are experts in their areas of work, but not necessarily experts in machine learning and computer vision and all the rest?

Bridget Martin: Oh my goodness, it’s going to be huge. When we’re talking about what I would call a subject-matter expert, and really putting these tools in their hands to get us out of this cycle where it used to have to be, again—taking that quality-inspection use case—something that we can all kind of baseline on: you have a factory operator who would typically be sitting there manually inspecting each of the parts going through. And when you’re in the process of automating that type of scenario, that factory operator needs to be in constant communication with the data scientist who is developing the model so that that data scientist can ensure that the data that they’re using to train their model is labeled correctly. So now think if you’re able to take out multiple steps in that process, and you’re able to enable that factory operator or that subject-matter expert with the ability to label that data themselves—the ability to create a training pipeline themselves. These all sound like crazy ideas—enabling non–data scientists to have that function—but that’s exactly the kind of tooling that we need in order to actually properly democratize AI.

And we’re going to start to see use cases that myself or Elizabeth or the plethora of data scientists that are out there have never thought about before. Because when you start to put these tools in the hands of people and they start to think of new creative ways to apply those tools to build new things—this is what I was talking about earlier—this is when we’re really going to see a significant increase, and really an explosion of AI technologies, and the power that we’re going to be able to see from it.

Kenton Williston: Yeah. I agree. And it’s really exciting even just to see how far things have come. Like I said, you don’t have to go back very far—six months, a year—and things are really, really different already. I can barely even picture where things might go next. Just, everything is happening so fast, and it’s very, very exciting. But this does lead me to, I think, a big question. Which is, well, where do organizations get started, right? This is so fast moving that it can seem, I’m sure, overwhelming to a lot of organizations to even know where to begin their journey. So, Elizabeth, where do you recommend the company start?

Elizabeth Spears: Yeah. So, I mean, there are so many great resources out there on the internet now, and courses, and a lot of companies doing webinars and things like that. Here at Plainsight we have a whole learning section on our website, that has an events page. And so we do a lot of intro-to-computer-vision-type events, and it’s both for beginners, but also we have events for experts, so they can see how to use the platform and how they can speed up their process and have more reliable deployments. We really like being partners with our customers, right? So we research what they’re working on. We find other products that might apply as well. And we like kind of going hand in hand and really taking them from idea, all the way to a solution that’s production ready and really works for their organization.

Kenton Williston: That makes a lot of sense. And I know, Bridget, that was a lot of what you were talking about in terms of how Intel is working through its ecosystem. Sounds like there’s a lot of work you’re doing to enable your partners and, I imagine, even some of your end users and customers. Can you tell me a little bit more about the way that that looks in practice?

Bridget Martin: Yeah, absolutely. So, one of my favorite ways of approaching this sounds very similar to Elizabeth really partnering with that end customer—understanding what they’re ultimately trying to achieve, and then working your way backward through that. So, this is where we pull in our ecosystem partners to help fill those individual gaps between where the company is today and where they’re wanting to go. And this is one of the great things about AI—is what I like to call a bolt-on workload—where you’re not having to take down your entire manufacturing process in order to start dabbling or playing with AI. And it’s starting to discover the potential benefit that it can have for your company and your ultimate operations. It’s relatively uninvasive to deploy a camera and some lighting and point it at a tool or a process—versus having to bring down an entire tool and replace it with a brand new, very large piece of equipment. And so that really is going to be one of the best ways to get started. And we of course have all kinds of ecosystem partners and players that we can recommend to those end customers, who really specialize in the different areas that they’re either wanting to get to or that they’re experiencing some pain points in.

Kenton Williston: So you’re raising a number of really interesting points here. One is, I love this idea of the additive workload, and very much agree with that, right? I think that’s one of the things that makes this whole field of AI—but particularly computer vision—so incredibly powerful. And the other thing that I think is really interesting about all of this is because there are so many point use cases where you can easily add value by just inserting a camera and some lighting somewhere into whatever process you’re doing, I think it makes this a sort of uniquely easy opportunity to do sort of proofs of concept—demonstrate the value, even on a fairly limited use case, and then scale up. But this does lead me to a question about that scaling, right? While it’s great to solve a bunch of little point use cases, at some point you’re going to want to tie things together, level things up. And so I’d be interested in hearing, Elizabeth, how Plainsight views this scaling problem. And I’m also going to be interested in hearing about how Intel technology impacts the scalability of these solutions.

Elizabeth Spears: We’re looking at scale from the start, because, really, the customers that we started with have big use cases with a lot of data. And then the other way that you can look at scale is spreading it through the organization. And I think that really comes back to educating more people in the organization that they can really do this, right? Especially in things like agriculture—someone who’s in charge of a specific field or site or something like that may or may not know all the places that they can use computer vision. And so what we’ve done a lot of is we’ll talk to specific departments within a company. And then they say, “Oh, I have a colleague in this other department that has another problem. Would it work for that?” And then it kind of spreads that way, and we can talk through how those things work. So I think there’s a lot of education in getting this to scale for organizations.

Kenton Williston: And how is Intel technology, and your relationship with Intel more broadly, helping you bring all these solutions to all these different applications?

Elizabeth Spears: They’re really amazing with their partners, and bringing their partners together to give enterprises really great solutions. And not only with their hardware—but definitely their hardware is one of the places that we utilize them, because we’re just a software solution, right? And so we really need those partners to be able to provide the rest of the full package, to be able to get a customer to their complete solution.

Kenton Williston: Makes sense. We’re getting close to the end of our time together, so I want to spend a little bit of time here just kind of looking forward and thinking about where things are going to go from here. Bridget, where do you see some of the most exciting opportunities emerging for computer vision?

Bridget Martin: Elizabeth was just touching on this at the end, and when we’re talking about this concept of scalability, it’s not just scaling to different use cases, but we also need to be enabling the ability to scale to different hardware. There’s no realistic scenario where there is just one type of compute device in a particular scenario. It’s always going to be heterogeneous. And so this concept—and one of the big initiatives that Intel is driving around oneAPI and “Write once. Deploy anywhere”—I think is going to be extremely influential and help really transform the different industries that are going to be leveraging AI. But then, also, I think what’s really exciting coming down the line is this move, again, more toward democratization of AI, and enabling that subject-matter expert with either low-code or no-code tooling—really enabling people who don’t necessarily have a PhD or specialized education in AI or machine learning to still take advantage of that technology.

Kenton Williston: Yeah, absolutely. So, Elizabeth, what kind of last thoughts would you like to leave with our audience about the present and future of machine vision, and how they should be thinking about it differently?

Elizabeth Spears: I think I’m going to agree with Bridget here, and then add a little bit. I think it’s really about getting accessible tools into the hands of subject-matter experts and the end users, making it really simple to implement solutions quickly, and then being able to expand on that. And so, again, I think it’s less about really big AI transformations, and more about identifying all of these smaller use cases or building blocks that you can start doing really quickly, and over time make a really big difference in a business.

Kenton Williston: Fabulous. Well, I look forward very much to seeing how this all evolves. And with that, I just want to say, thank you, Elizabeth, for joining us today.

Elizabeth Spears: Yeah. Thank you so much for having me.

Kenton Williston: And Bridget, you as well. Really appreciate your time.

Bridget Martin: Of course. Pleasure to be here.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from Plainsight, follow them on Twitter at @PlainsightAI, and on LinkedIn at Plainsight.

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Telemedicine Gets a Checkup from ViTel Net

As we learn to live in a COVID-inflected world, it’s clear that the way we experience healthcare has changed forever. Though in-person visits to the doctor, clinic, or hospital are once more a possibility, telemedicine isn’t going anywhere. But are patients, providers, and—perhaps most crucially—healthcare systems ready for this new reality?

Dr. Richard Bakalar, Chief Strategy Officer at ViTel Net, a provider of scalable virtual care solutions, has an impressive history with telemedicine. It was garnered during his experiences traveling internationally with the White House, caring for those affected by domestic natural disasters, and leading the Navy’s transition to telemedicine. He’ll talk with us about the lessons learned from pandemic telemedicine—and its challenges—and how the whole healthcare landscape can benefit going forward.

What is the value of telemedicine from both a patient and provider perspective?

Convenience is one big advantage, but a more important factor, I think, is getting the right data at the right time. One of the challenges with face-to-face care is that there is often a lag between when a patient requests care or is scheduled for care, and when a patient has the problem. In the telehealth sphere you can synchronize those times.

You can also provide context. The patient may be at home during the visit, and you may be able to see something in the background of the video, for example, that may show a compromised environment. That’s the sort of information that may not be available to a physician when the patient is seen in a clinic or hospital environment.

“The information generated by #VirtualVisits is going to be more and more critical to getting accurate analysis not only of #patients but of population #health.”–Dr. Richard Bakalar, CSO @Vitelnet via @insightdottech

So, more context, more timeliness, more convenience, even the ability to have more frequent evaluations—it all offers a lot of flexibility for optimizing the care schedule as well as the care environment. Sometimes face-to-face is superior when there’s a physical context required. But sometimes, like when timing is sensitive, then a virtual visit may be a better option.

What lessons have you learned throughout your long career with telemedicine?

When I migrated from the military into the private sector, I had the privilege of being the president of the American Telemedicine Association. And what we learned there is that a lot of organizations had telemedicine projects that were departmentally focused. And each of those projects created an independent proof of concept around how telemedicine could impact their care. What I learned early on is that you need more of a programmatic approach.

If you think about radiology, you don’t have a separate radiology division within each medical specialty: You have one radiology department that supports the entire continuum of care within the health system. Telemedicine could leverage that kind of a model, where we could take advantage of what’s available—from a protocol perspective, from a business perspective, even a technology-infrastructure perspective—and just change those minor things that need to be changed to adopt specialty modules on a single platform. And it doesn’t even have to be a telemedicine program—it could be an innovation program where telemedicine is one of the early use cases.

One of the lessons I learned early on was that governance needs to be centralized, technology needs to be centralized, and leadership needs to be top down to provide strategic support for the program—from a technical, administrative, and clinical perspective. But the innovation actually comes from the bottom up, from the end users in the field—in a hospital at the bedside, for instance. Innovation brought up from the bottom, and support coming from the top. And when you have that kind of multidisciplinary approach to governance, telemedicine can scale very nicely and can be very effective.

What is the challenge of implementing ad hoc telemedicine solutions?

It’s the challenge of using what I call an “app store approach” to telemedicine—where you have lots of different single applications that are not necessarily linked together. Data doesn’t flow between them, the workflow is not totally integrated, and the reporting is not necessarily normalized across those different applications.

But workflow and reporting need to be integrated. So having a platform with modules allows you to do that—with the reporting as well as the data capture. It also links back to the systems of record—such as the electronic health record, the PACS for images, and the business and related financial systems. That all needs to be in concert in order to provide the telemedicine service.

Why has the integration of telemedicine been so difficult in the healthcare industry?

There is a fragmented approach in the private sector. Each individual department has a separate project officer or a separate technology, and the data is all siloed. There’s also no business model yet for telemedicine in the healthcare industry, because reimbursement has traditionally been very limited.

So the challenge is to transform the governance, the technology infrastructure, the business-reimbursement models, the regulatory barriers that have been up for the past 10 or 15 years. Also to get adoption and acceptance by the patients, and—more important—by providers. Providers have been hesitant to adopt this capability because, before COVID, they were very busy with face-to-face care. With the arrival of COVID, they had to use the technology to be able to access their patients, and so they recognized the value of it.

Post-COVID, the issue is going to be that there are more patients than physicians have availability for. We still have problems with general access to healthcare, as well. So the question is how can limited resources—physician resources, ancillary health resources, other staff resources—be better utilized to provide better care to more people, more equitably, around the health system.

But I think there’s reason for optimism, because patients have seen the value. They use videoconferencing for work; they use video for entertainment. And so they say, “Why can’t I use it for my healthcare services?” So patients are going to demand better access to telehealth services going forward. And health systems are going to recognize that they’re understaffed in a lot of cases, and telehealth can be more efficient.

Then the payers have seen that telemedicine can actually save money for them in the long run—especially when it’s used for chronic conditions, or for high-cost services in the hospital health system. Episodes of care can be less expensive, even if individual encounters may be more expensive until the infrastructure has been scaled.

The key is that if you have multiple apps, it’s very expensive to maintain those interfaces. But if you have one platform that has multiple modules, maintaining that interface with the electronic health record and the data warehouses and the financial systems is much easier. That’s one of the things that organizations are going to have to make some investments in going forward.

How is ViTel Net helping to streamline and unify the electronic health record system?

There’s a lot of demand for organizations to modify electronic health record systems to support changing payment requirements and regulations. But, in the past, telemedicine has taken a backseat there. That’s been changing over the past year and a half, but it doesn’t change the fact that EHRs were primarily designed to be transactional systems; they were not designed to be customizable, configurable workflow engines—engines that can meet the demands of a remote visit.

What ViTel Net brings into play is agility. We can make very rapid changes in our platform, and then share the critical components with the transactional system. This happens both at the front and back ends—pulling in demographic and historical information, and then putting the summarized results of an encounter back into the electronic health record at the end of a transaction. This provides that continuity of care that’s needed in both face-to-face and virtual care. We help with the virtual visits, and provide videoconferencing and language processing—details that electronic health records are not suited to do, but that are required for virtual visits.
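ViTel Net has not published its integration details, but as a generic illustration, pulling demographics from an EHR that exposes a standard FHIR REST endpoint might look like the sketch below. The base URL and patient ID are hypothetical.

import requests

# Hypothetical FHIR endpoint; a real EHR integration would also handle
# authentication (e.g., SMART on FHIR OAuth tokens).
FHIR_BASE = "https://ehr.example.org/fhir"

def get_patient_demographics(patient_id: str) -> dict:
    # FHIR exposes patient records as JSON resources at /Patient/{id}.
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    patient = resp.json()
    name = patient.get("name", [{}])[0]
    return {
        "family": name.get("family"),
        "given": " ".join(name.get("given", [])),
        "birthDate": patient.get("birthDate"),
    }

print(get_patient_demographics("12345"))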

Is there a role for technologies like language processing in the telehealth domain?

AI technologies are important at even the most rudimentary level of language processing, particularly when you start having outreach to more diverse populations. Not everyone has English as a first language, and patients and their family members, as well as extended health networks, need to be able to communicate with the health system more effectively. So one of the things we’ve incorporated into the telehealth platform is language services, in video as well as audio, and in multiple languages.

One of the challenges is that all the information generated by the virtual visits of the past several years is missing from data warehouses, so you’re missing the opportunity to take advantage of it. Now, why isn’t that information in the data warehouse? Because most of those transactional systems—the electronic health records—don’t code for telehealth. In the past, it was a very small fraction of their business, kind of a rounding error, so to speak, of their business. And virtual visits were typically single events rather than continuity-care events, so it wasn’t a problem.

But now, as we move into delivering chronic care, the information generated by virtual visits is going to be more and more critical to getting accurate analysis not only of patients but of population health. And so the ability to code things properly, to be able to include them in the data warehouses, and to have a more comprehensive view of patients is going to be more critical going forward. And more accurate machine learning and artificial intelligence will be crucial to that.

There’s a great opportunity to use some of these new technologies, where the entertainment, retail, and financial industries have already done the heavy lifting, and we can leverage their experience with those capabilities in healthcare.

How can healthcare organizations set themselves up for success?

The good news is that telehealth is already on the third wave down this path of digital transformation. It started with PACS in the early 1990s, and then the electronic health record, and now telehealth platforms. One of the things that was learned with the first two waves is that you want to partner with an organization that’s going to co-invest. Are they going to share risks? Are they going to be reliable? Are they going to be innovative? And probably most important, are they going to provide the kind of support you need—not only for the initial implementation but also for the ongoing innovation, training, and support that’s going to be necessary to make that investment a value going forward.

I always like to ask the Why: “Why are you doing it?” Not so much the How. The How is actually very easy today, because technology is abundant and robust. Senior leadership needs to define the objectives, the goals—the Why of using telemedicine for their organization at that particular time. And then, how do they want to leverage it going forward? So that’s step number one, that governance piece.

The second step is to assemble a multidisciplinary team, so that you have the representation of not only the technologists but also the operational folks who have to fund and support the project from an investment and business-model perspective. And then the clinicians need to be on board, so that they can tell you what’s practical, and what’s needed, and where the pain points are.

And I always recognize that telemedicine is not a technology; it’s a service. That’s an important concept that organizations need to think about as they grow their programs. All the capabilities that you need for face-to-face care need to be available in the telemedicine sphere as well.

Related Content

To learn more about the future of telehealth, listen to our podcast Virtual and In-Person Care Come Together with ViTel Net and read Telehealth Is the Future of Care, and the Future Is Now. For the latest innovations from ViTel Net, follow them on Twitter at @ViTelNet and on LinkedIn at ViTel Net.

 

This article was edited by Christina Cardoza, Senior Editor for insight.tech.

Edge AI, Powerful Compute Cut Supply Chain Gridlock

From toilet paper shortages to skyrocketing lumber costs, COVID-19 exposed supply chain weaknesses almost immediately. These disruptions have caused cascading issues across industrial supply chains ranging from product delays and abrupt price increases to an inability to conduct business in certain sectors.

Logistics companies usually keep backlogs from expanding, but with global shutdowns preventing raw materials from being extracted and goods from being manufactured, there has been little they could do. However, one thing these organizations can control as we attempt to return to pre-COVID inventory levels is the efficiency with which goods are transported from loading dock to retail warehouses.

For example, rather than moving materials as soon as they are available, supply chain digital transformation could cut costs and balance inventory by only sending shipments once transport vehicles have reached 100% capacity.

This is easier said than done because it means someone must constantly monitor shipping containers and delivery trucks for available space, then communicate any vacancies to transport and operations managers. But now, by combining computer vision AI with supply chain management, those human resources can be bolstered by IoT technology.
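To make that concrete, here is a minimal sketch of such a dispatch rule in Python. Every name, and the 95% fill threshold, is an illustrative assumption rather than any vendor’s actual interface.

from dataclasses import dataclass

@dataclass
class Container:
    serial: str
    capacity_m3: float  # total usable volume
    used_m3: float      # volume occupied, as reported by monitoring

    @property
    def fill_ratio(self) -> float:
        return self.used_m3 / self.capacity_m3

def ready_to_ship(container: Container, threshold: float = 0.95) -> bool:
    # Dispatch only when the container is effectively full.
    return container.fill_ratio >= threshold

fleet = [
    Container("MSCU1234567", capacity_m3=33.0, used_m3=32.1),
    Container("MSCU7654321", capacity_m3=33.0, used_m3=18.4),
]

for c in fleet:
    status = "dispatch" if ready_to_ship(c) else "hold for more cargo"
    print(f"{c.serial}: {c.fill_ratio:.0%} full -> {status}")

In practice the fill figure would come from the vision systems described below rather than from manual entry.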

Digitized Supply Chain Management Yields Efficiencies

At ports around the world, transport trucks enter through gates where Port Authority personnel record the origin, destination, and serial number of shipping containers for tracking purposes.

To eliminate traffic jams and the potential for human error in this process, many ports are installing optical character recognition (OCR) systems at the gates that automate container check-in. But these computer vision-based systems are capable of much more.
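As a rough illustration of what automated check-in involves, a gate system might read container codes along the lines of the sketch below. The pytesseract and OpenCV libraries are stand-ins for whatever OCR stack a production system actually uses, and the image file name is hypothetical.

import cv2
import pytesseract

def read_container_id(image_path: str) -> str:
    # ISO 6346 container codes are four letters followed by seven digits,
    # so restrict the OCR engine to uppercase letters and digits.
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(
        gray,
        config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
    )
    return text.strip()

print(read_container_id("gate_cam_frame.jpg"))  # e.g., "MSCU1234567"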

In checkpoint-based operations like this, AI trained to detect free space is a game changer. The vision AI uses existing cameras to identify reference points in images, determine the space utilization of a given container, then report those findings to logistics managers. When combined with serial number tracking, operators can quickly pinpoint available capacity in their fleet.
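A simplified version of that free-space calculation, assuming a segmentation model has already produced a binary cargo mask, could look like this; the tiny mask below is fabricated for illustration.

import numpy as np

def utilization_from_mask(cargo_mask: np.ndarray) -> float:
    # cargo_mask: 2D array with 1 where the model sees cargo,
    # 0 where it sees free space; the mean is the occupied fraction.
    return float(cargo_mask.mean())

# Fake 4x8 mask standing in for real model output.
mask = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
])

print(f"Estimated utilization: {utilization_from_mask(mask):.0%}")  # 69%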

Going a step further, computer vision systems can also be used to identify the volumetric properties of goods and assist in pallet dimensioning. As their names imply, these solutions measure the physical properties of goods and packages as they progress through the manufacturing and distribution chain.

Whereas these systems once required specialized scanners, modern AI can detect item length, width, height, and other physical characteristics with standard cameras to lower implementation costs and simplify integration. But the real logistical power here lies in combining this type of data with transport capacity information so the maximum merchandise can be packed into shipping containers.
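One common low-cost approach, sketched below, is simple pixel-to-unit scaling against a reference object of known size lying in roughly the same plane as the package. The pixel measurements here are invented for illustration.

def px_to_cm(length_px: float, ref_px: float, ref_cm: float) -> float:
    # Scale pixels to centimeters using a reference of known size.
    return length_px * (ref_cm / ref_px)

# Suppose a standard 120 cm pallet edge spans 600 px in this frame.
REF_PX, REF_CM = 600.0, 120.0

box_w_px, box_h_px = 410.0, 255.0  # detector output for one package
print(f"~{px_to_cm(box_w_px, REF_PX, REF_CM):.0f} cm x "
      f"{px_to_cm(box_h_px, REF_PX, REF_CM):.0f} cm")  # ~82 cm x 51 cm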

Logistics Operators’ Eye at the Edge

The above provides just two examples of how AI-enabled computer vision can maximize the efficiency of logistics operations. But there are many more use cases that leverage the technology, including dock occupancy monitoring, wrong-place detection, and automated robots that handle, load, and unload freight.

None of these applications are possible without edge computing that can execute advanced AI algorithms in real time. This has been a real challenge due to performance, power, and cost limitations in existing solutions. Avnet Embedded—a leader in embedded compute and software solutions—is making advanced edge AI a reality with its MSC C6C-TLU, based on 11th generation Intel® Core processors.

The MSC C6C-TLU is a COM Express Type 6 module designed to withstand the environmental rigors of deployment in transportation and other environments while also supporting the performance demands of edge AI use cases. These abilities are rooted in the onboard 11th generation Intel® Core i3, i5, or i7 processors, which contain two or four cores and either Intel® Iris® Xe or Intel® UHD Graphics with up to 96 execution units.

When paired with optimizations from the Intel® OpenVINO toolkit, the COM Express module is extremely efficient at crunching numbers in AI vision applications. However, this level of performance can be a detriment to edge systems because it implies high power consumption and excess heat generation that could damage electronic components.
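For a sense of what that looks like in code, here is a minimal inference sketch using the Python API from recent OpenVINO releases; “model.xml” stands in for any vision model converted to OpenVINO’s IR format.

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # any IR-format vision model
compiled = core.compile_model(model, "CPU")  # "GPU" would target Iris Xe

# Dummy frame shaped like a typical NCHW image input.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print("Top class:", int(np.argmax(result)))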

#SupplyChain #DigitalTransformation could cut costs and balance inventory by only sending shipments once transport vehicles have reached 100% capacity. @Avnet via @insightdottech

Game-Changing Processor Platforms

Certain models of the host Intel® Core processors are designed to resolve these challenges.

“What is really a game changer in 11th gen Intel processors over previous generations is definitely the support for extended temperatures and 24/7 operating modes,” says Christian Engels, Product Marketing Manager at Avnet Embedded. “You can perform heavy duty applications on the CPUs for a long time, which lets you run these workloads in extreme environment conditions.”

Being part of the COM Express family of standards, the MSC C6C-TLU needs a companion carrier board that links the module to the larger computer vision system via application-specific I/O. Once built, this carrier board can support processor modules with the same interfaces for years to come.

Avnet Embedded is well-versed in designing and manufacturing carrier cards but can also integrate complete standards-based computer vision systems that give logistics managers their own intelligent eye at the edge.

AI Supply Chain Management Never Sleeps

The complexity of today’s global supply chains has made recovering from COVID-19 shutdowns an equally complex challenge that requires different solutions.

For instance, distributors are moving from just-in-time inventory models back to stockpiling merchandise as insurance against supply fluctuations. At the ports of Los Angeles and Long Beach, authorities are enlisting the expertise of logistics powerhouses like Walmart and Target to expand overnight operations until shipping container backlogs are cleared.

These more fluid, higher uptime logistics operations will require support from tools that are intelligent, reliable, and able to identify supply chain opportunities more quickly and efficiently than humans.

Lucky for us, AI-driven logistics never sleeps.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

When Real-Time Data Meets AI at the Edge

Manufacturers are finding AI is no longer the answer to automating operations and improving product quality. It’s only half the answer. While AI can increase the defect detection rate by up to 90% over human inspection, it’s useless if manufacturers cannot obtain the information they need when they need it. Without a faster process, they continue to run the risk of unplanned shutdowns and production errors.

“The challenge manufacturers have with AI is actually validating and verifying the return on investment,” says Shunichi Kagaya, Senior Engineer at Hitachi, a leader in digital and IoT solutions. “Manufacturers understand the value of data and they want to use it. But they don’t understand how.”

The Transformation of Industrial AI

AI is already transforming much of the manufacturing industry, but there are still plenty of missed opportunities.

While AI is used to ensure the availability and reliability of equipment, manufacturers are finding they don’t always get insight into the health or status of their machines fast enough.

The data collected from these machines is typically sent to the cloud for analysis, which can delay the results until it is too late. The cloud also does not always provide the security or high-speed and low-latency responses necessary to make actionable decisions.

While #AI can increase the defect detection rate by up to 90% over human inspection, it’s useless if #manufacturers cannot obtain the information they need when they need it. @HitachiGlobal via @insightdottech

Legacy machines can also make it more difficult to obtain data because of their incompatible protocols and siloed systems. Manufacturers traditionally must go through complex preprocessing or data cleansing to even make sense of the information. Again, this delays the ability to take immediate action. This results in production and shipment delays, and even defective products.

“While the customer may understand the need to get the data, they are constrained. Any changes they do make cannot impact the existing production schedule,” says Kagaya. “There is often a tradeoff to be made between the data that is collected, the frequency, and accuracy. You really have to balance all these activities.”

It doesn’t have to be this way.

The Evolution into Edge AI

In conjunction with Intel®, Hitachi has created the Hitachi Industrial Edge Computer CE series Embedded AI model with a built-in image analysis execution platform that leverages Intel’s AI and deep learning technology.

Taking advantage of the Intel® OpenVINO Toolkit, the platform can perform image analysis directly on shop floor equipment to quickly alert workers of any product defects or faults. It can monitor multiple production lines and devices simultaneously with remote monitoring capabilities.

In addition to edge processing, data that may not be critical for immediate analysis can be sent to the cloud for further insights (Figure 1).

Flow chart showing the impact of Hitachi’s embedded AI models
Figure 1. Hitachi embedded AI models optimize the entire production process. (Source: Intel®)

“Whether it is ERP or another system, the solution connects the devices, retrieves and formats the data, and then uploads it as valuable information,” says Kagaya. “If we need more processing power and data handling capacity, then we will connect this to the cloud where other AI models can take care of that combined data and execute it. The customer really needs to understand what type of challenge they want to solve and the architecture to achieve it.”
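A simple sketch of that edge-first split follows. The function names and the 0.8 defect threshold are hypothetical, not part of Hitachi’s product.

from collections import deque

cloud_batch = deque(maxlen=1000)  # non-critical records awaiting upload

def alert_operator(frame_id: str, score: float):
    # Immediate, local action on the shop floor.
    print(f"DEFECT on {frame_id} (score {score:.2f}) -- inspect the line")

def handle_inference(frame_id: str, defect_score: float, threshold: float = 0.8):
    if defect_score >= threshold:
        alert_operator(frame_id, defect_score)  # act at the edge
    else:
        # Defer to the cloud for batch analytics.
        cloud_batch.append({"frame": frame_id, "score": defect_score})

handle_inference("cam2-000113", 0.91)  # triggers a local alert
handle_inference("cam2-000114", 0.12)  # queued for cloud analytics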

System integrators (SIs) can develop and install containerized applications to extend and add new features or functionality. From a hardware and network point of view, the Industrial Edge Computer CE series can connect to a variety of different equipment to gather data. SIs can customize the solution depending on the use case they are looking to solve.

According to Kagaya, SIs need to have a basic understanding of network protocols as well as familiarity with Python or C++ programming languages to successfully develop their own AI models on top of the Hitachi solution.
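As a toy example of the kind of glue code an SI might write, the sketch below subscribes to machine telemetry over MQTT (paho-mqtt 1.x style) and hands each reading to a scoring stub. The broker address, topic scheme, and vibration threshold are all assumptions, not Hitachi APIs.

import json
import paho.mqtt.client as mqtt

def score_reading(payload: dict) -> float:
    # Placeholder for a real model call (e.g., an OpenVINO pipeline).
    return 1.0 if payload.get("vibration", 0) > 4.2 else 0.0

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    if score_reading(payload) > 0.5:
        print(f"Anomaly on {msg.topic}: {payload}")

client = mqtt.Client()
client.on_message = on_message
client.connect("edge-gateway.local", 1883)     # hypothetical broker
client.subscribe("factory/line1/+/telemetry")  # hypothetical topics
client.loop_forever()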

At any time, if a manufacturer decides to invest, upgrade, or renew equipment, Hitachi helps customers ensure that the AI model is not affected. “We will work with the customer to really ensure that the model is kept intact, and if there are necessary tweaks required to work with the new data set, we will help them with that as well,” says Kagaya.

Going forward, Hitachi is working on incorporating wireless technologies such as 5G or Wi-Fi 6 into the next series to perform more processing at the edge level. Kagaya says, “Right now, if you send the data to the cloud, have the AI model do the inference, and come back to take the action, that’s not really serving the needs of the equipment in the moment.”

The real challenge in getting this space to take off, according to Kagaya, will be measuring the accuracy of the AI models. He explains that manufacturers typically do not have the patience and expect to see the impact of AI models immediately. But if they can wait to see the results and benefits of this technology, it will be game-changing for the industry.

“The AI models actually provide more accuracy after you start using them more. As the models work on the data continuously, it automatically gets fine-tuned and provides even more accurate results,” he says.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

Getting the Big Picture on Video Technology

We’ve become used to seeing video surveillance technology in retail settings, attached to traffic lights, and even trained on urban sidewalks. But the opportunity space for video is going beyond just protecting property and issuing fines. Video can make people’s lives better in a tangible way—from easing traffic congestion to helping the elderly.

We talk to Thomas Jensen, CEO of Milestone Systems, a leader in video management software, about exploring the use cases beyond property surveillance. He’ll discuss the importance of working with the right kind of partner company, as well as the future for predictive video, and taking a human-first approach to video technology.

Where does Milestone land in the video-technology space?

Milestone produces video-management systems—which is a way of providing data-driven solutions based on video technology—within the security industry, but also beyond the security industry. What we provide to our customers is video data, and the ability to see video data that people may not be able to capture in the moment with their eyes.

Our product can deliver insights into the past by pulling from historical video data. It can provide real-time data by watching live video. And—with the utilization of all the new technological advancements we have—in the future we will be able to provide predictions based on historical video data.

When you look at it from a citizen’s perspective, nobody really likes video surveillance. We provide responsible video technologies, so that we as citizens and as users of video systems can feel comfortable with the technology. We take pride in acting responsibly—of course, in regard to corporate governance. But also, when it comes to the utilization of technology, and video technology in particular. Because as a technology company maneuvering in a field where there are a lot of new advancements, it’s important for us that we always put humankind ahead of what we do and take responsibility for what we develop.

We are a company with a very strong culture and foundation of focusing on people first—when we look at our colleagues, when we look at our partner communities, as well as when we look at our customers.

I think technology companies hold a great responsibility for the future, and we need to live up to the trust that our customers and our societies vest in us in how we produce and use technology. And, as an industry, we are not always perceived as being willing to do that.

How are the systems integrators you work with responding to these values?

We, as an industry, have a challenge. We very often fall in love with our own products and solutions. And we have that perception that our product is—if not God’s gift to mankind, then at least it is our gift to our customers. Whereas our customers are really looking at: “What does that product do for me? What value does it create in my business?”

I think it’s important that we keep putting our customers and our value creation in front of what we do. I’ve introduced something called Business Outcomes at Milestone—every time we develop a new product, a new feature, or bring something to market—we need to understand what outcome it will bring to our customers. We encourage our partners, including the systems integrators and our technology partners at large, to have that same approach.

What are some emerging business outcomes we can expect from this space?

Today we are selling video solutions primarily for safety and security. But we could start looking at what we can put on top of the security part. For example, you could monitor traffic patterns. Traditionally, you would only use video cameras on streets to either look at speed control or red-light violations. But tomorrow we would be combining safety and security with traffic-management systems and analytics. We could use video data intelligently to redirect the traffic onto alternate roads to avoid traffic jams, and to thereby also avoid the pollution that typically happens when you have a lot of vehicles idling on the road. That would also increase productivity for society.

Instead of just selling the various elements that provide safety and security, we should be educating ourselves on what it is that actually makes a difference for our customers. So you can start seeing use cases or business outcomes that are not just about managing traffic speed or issuing fines, but actually optimizing how you could use technology to improve the greater good for our customers and our citizens.

What do systems integrators need to do to get to these business outcomes?

At Milestone we offer our customers access to our technology, to our stack, and to our experts across the board. We have a number of technology partnerships that work closely with our systems integrators. We also have a close partnership with Intel to help bring more of these elements to market on an ongoing basis.

That doesn’t mean that the systems integrators shouldn’t understand the technology they’re selling, but they should really first and foremost understand the customers, and the value that they bring to the customers. For me, it’s almost a swapping around of the traditional view of selling products and implementing products, and instead looking at how we can demonstrate the capability of the solution.

Can you talk about how you view the partnerships you have?

We have decided that we will be experts in our field of technology—data-driven video technology. And we want to work with the right partners that want to bring that vision to life—both in terms of the technology side, but also in terms of bringing that value to our joint customers. It’s also about how we can ensure that, collectively, we provide the best end-to-end solutions, rather than believing it’s a one-man show.

So we have two types of partners. We have our technology partners, with whom we integrate our solutions through our open-platform technology—with APIs, with drivers, and so forth. And then we have the partners that are actually creating and bringing that value to life for our customers—our systems integrators, our resellers, and so forth.

“We honestly and genuinely believe that #technology—and #video technology in particular—should serve humanity, not the other way around.” —Thomas Jensen, CEO of @milestonesys via @insightdottech

With partner companies, we obviously look at their capabilities—in terms of technology, in terms of vision, and in terms of commercial capabilities. But more importantly, it’s becoming increasingly visible to us that the true partners for us are the ones that can visualize the business outcomes or the value creation, over mere products.

We increasingly require our partners to act responsibly—in how they produce, in how they sell and integrate, and in how they use the technology stacks that we offer to our customers. We believe that we have a responsibility to really create technology that benefits both our customers and the societies we are part of. So, those four areas—the capabilities, the ability to do business outcomes, win-win partnerships, and responsibility—are really the core of our partner selection.

How do you work with partners like Intel® to bring solutions to market?

When we select partners, and core strategic partners like Intel®, we look at what capabilities are at hand to support new product development in new technology areas. And these may be areas that are underutilized today, or areas where we can actually create that value in front of the customers.

So we have continuous briefings and exchanges between the Intel team and the Milestone team. Our teams discuss very closely how we can continue to develop our platform utilizing Intel technology, while also ensuring a smooth interlink between the technologies that makes it easier for the systems integrators to really accelerate our business together.

How have you been working with partners to expand the uses and capabilities of video systems?

We have partnered with an American city on their traffic situation. One of the things that they realized early on is that it’s very hard to intelligently predict traffic patterns. With our technology—with the cameras mounted on the streets—they started being able to address this and to assess the traffic patterns. They linked the footage to time stamps, and started analyzing all the elements that could be seen in a day in the life of the city—such as understanding that there are huge differences in traffic movements. For instance, how much traffic is going eastbound-westbound versus north-south during the morning hours and during the afternoon hours.

And what they were able to do was to reprogram all of the city traffic lights to follow the traffic patterns. If people are mostly approaching the city from the south and the west in the morning, then the city has the ability to keep the green lights open for longer for people driving in those directions during the morning hours, and reverse it in the afternoon hours.
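
As a rough illustration of that idea, the sketch below weights each direction’s green time by its observed traffic volume for a given time band. The counts, cycle length, and minimum green time are invented numbers, not the city’s actual parameters.

```python
# Weight green time by observed directional volume per time band.
# Counts would come from video analytics; these figures are invented.
MORNING_COUNTS = {"northbound": 1200, "southbound": 310,
                  "eastbound": 950, "westbound": 280}
CYCLE_SECONDS = 120  # one full signal cycle
MIN_GREEN = 15       # safety floor per direction

def green_splits(counts, cycle=CYCLE_SECONDS, floor=MIN_GREEN):
    budget = cycle - floor * len(counts)      # seconds left to allocate
    total = sum(counts.values())
    return {d: floor + round(budget * c / total) for d, c in counts.items()}

print(green_splits(MORNING_COUNTS))
# Heavier inbound directions (north and east here) get longer greens in
# the morning; rerunning with afternoon counts reverses the pattern.
```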

The outcome for the city is, of course, that rush hour peaks become shorter—we are minimizing the time that our citizens are spending on the road. We are also increasing productivity in the society for the same reason. And, on top of that, especially in these environmentally conscious days, idle traffic generates pollution. So when we can reduce the amount of traffic, we can also contribute to reducing the pollution from that traffic. So, all in all, those are several great outcomes that video has never previously been part of solving.

What are some other areas where you see video technology showing up in new ways?

Let’s take a look at healthcare—like providing doctors the opportunity to provide virtual consultations, which has been very important during the COVID-19 pandemic. There is also a lot of discussion about how video can be used in homes for the elderly. I’m sure we can all agree that none of us would like to have a camera pointed at our elderly citizens 24/7. However, by using our software together with heat-sensor technology, you can have full detection without necessarily having video enabled.

So if an elderly person tripped on the floor of their apartment, you will be able to see—with a heat signal—is it just somebody tying a shoelace, or is it actually somebody that had a heart attack? And in this case, two minutes can really matter. Nobody would sit and look at a video of our elderly citizens, but it would trigger an automatic alarm based on movement sensors and heat sensors that would save lives, basically.
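
A minimal sketch of that alerting logic might look like the following, where a heat signature near floor level raises an alarm only if it persists. The sensor interface, thresholds, and function names are invented for illustration.

```python
# Tying a shoelace passes quickly; a collapse does not. Only the
# prolonged case triggers the alarm, and no video is ever viewed.
import time

FLOOR_ZONE_M = 0.5   # heat-blob centroid below this height = "on floor"
ALARM_AFTER_S = 30   # prolonged stillness near the floor triggers alarm

def monitor(read_heat_blob, raise_alarm):
    low_since = None
    while True:
        blob = read_heat_blob()              # e.g., {"height_m": 0.3}
        if blob and blob["height_m"] < FLOOR_ZONE_M:
            low_since = low_since or time.time()
            if time.time() - low_since > ALARM_AFTER_S:
                raise_alarm()                # notify caregivers
                low_since = None
        else:
            low_since = None                 # person is upright: reset
        time.sleep(1)
```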

Take retail as another example. Not just theft and burglary and so forth—it’s also about optimizing customer flow. How do we improve the customer experience? We can actually guide customers through the shop, and also open more cashiers when we can see that people are gathering in the aisle and wanting to exit and pay. So there are a lot of elements there that are beyond just security. It’s really a matter of how we apply that technology, and thinking about what it is that really makes life easier or better or more prosperous for our customers.

What is your big-picture vision of where the future of video technology is heading?

Our aspiration at Milestone is really to support societies to make the world see. And by utilizing new technologies—like artificial intelligence, machine learning, sensor technology, and so forth—to actually use historic and real-time data to predict what will happen in the future.

Imagine that video technology—going back to my traffic example—can actually see, based on traffic patterns of what is happening in real time, that there is an accident that is bound to happen within the next five minutes. Imagine that it can thereby activate red-light signals so we can avoid this thing happening. That’s what future technology and video technology can really bring forward.

And, for us, we honestly and genuinely believe that technology—and video technology in particular—should serve humanity, not the other way around. Of course we like to turn a profit. That’s why we exist. But we actually believe that acting responsibly and putting people first in what we do—that’s really good for business. So, for us, the future of visual technology is data-driven application of our technology, of our platforms, in a responsible way.

We have to continue challenging the status quo. And one of the ways we’ve done it is by saying: “We develop a product, but we are selling a solution.” We know that we can’t know it all, but we know that the technology is moving so fast that if we look to the right partners, we will ride that wave.

Related Content

To learn more about the future of video surveillance technology, listen to the podcast Human-First Video Surveillance with Milestone, and read Safety and Security Trends: How SIs Succeed. For the latest innovations from Milestone Systems, follow them on Twitter at @milestonesys and on LinkedIn at Milestone Systems.

 

This transcript was edited by Christina Cardoza, Senior Editor for insight.tech.

Immersive Digital Signage Video Content Boosts Engagement

Say goodbye to boring signage. Attention-grabbing, knock-your-socks-off immersive digital signage video content is elbowing its more staid static cousins out of the way. It’s also increasing revenue while enhancing customer engagement.

Colin Farquhar, Global VP of Sales of IPTV solutions for Exterity, a VITEC company that offers IP video, guest experience, and digital signage technology, can attest to this development.

When he recently took in a Golden State Warriors basketball game in San Francisco, Farquhar was wowed by the immersive video signage at Chase Center: “I’ve not been in an environment like that where the use of video signage has been so impressive and well used to support the overall operation of a facility.”

Farquhar’s immersive experience is what venue operators are betting on to boost fan engagement and drive revenue. Chase Center opened in 2019, boasting 9,699 square feet of video displays and the largest center-hung board display in an indoor arena.

“Video signage is increasingly pervasive and has evolved with a marked improvement in image quality, resolution, and a dynamo of a platform that supports it all,” Farquhar says. He is especially struck by the seamlessness of the video delivery across multiple screens—additional boards synchronize with the center display—and the smooth transitions between types of content.

The Content Challenges

All that seamlessness takes a lot of elves in the workshop.

Making it look easy is hard. For one thing, security is a significant challenge. “Delivering content in a way that ensures its integrity from the point of origin to the point of display is key,” Farquhar says. The challenge is especially important as video signage drives eyeballs in a variety of industries like retail, hospitality, hospitals, and offices, each of which might have different security protocols. When stakeholders play pass the baton in the content delivery relay race, too often security gets overlooked, Farquhar says.

Variability of network infrastructure is another headache, as streaming must contend with other services that hog bandwidth. “You can’t always be guaranteed that you’ll have the very best network available. If I’m running high definition 4K video content with color integrity reproduction, we’re looking at very high levels of bandwidth requirements,” Farquhar says.

Management of content delivery is another challenge that keeps producers up at night. While creating and delivering content for one display might be a straightforward exercise, deploying it to 1,000 is much more difficult. It’s not just the mechanics of scale that is a problem. “Varying that content based on triggers that might be happening—there’s different content for different quarters or break periods—while worrying about the performance of the overall network is a huge management challenge,” Farquhar says. Measuring content performance and deployment increases complexity. “When third parties provide content, as in sponsorships, they want to know how that content is being used,” Farquhar adds.
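
To illustrate the kind of trigger-driven scheduling Farquhar describes, here is a generic sketch of pushing different playlists to a fleet of displays as the event state changes. It is a simplified illustration, not Exterity’s actual product API.

```python
# Push a playlist to every display when an event-state trigger fires,
# logging each play-out so sponsors can see how their content was used.
from dataclasses import dataclass

@dataclass
class Display:
    display_id: str
    zone: str                 # e.g., "concourse", "bowl", "suite"

PLAYLISTS = {                 # content chosen per trigger
    "pregame": "sponsor_loop_a",
    "quarter_break": "concession_promos",
    "in_play": "live_feed_with_stats",
}

def on_trigger(event_state: str, displays: list, play, log):
    playlist = PLAYLISTS.get(event_state)
    for d in displays:
        play(d.display_id, playlist)
        log(d.display_id, playlist, event_state)   # for sponsor reporting

# Usage: on_trigger("quarter_break", fleet, play=send_to_player, log=record)
```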

The future of #video #signage and streamed content will focus on seamless interactive personalization—whether on large screens or an individual #mobile phone. @Exterity via @insightdottech

Technology-Driven Solutions

Content managers use Exterity’s IP industry expertise and video digital signage solutions to route content via existing networks and measure its performance at scale.

“We provide a range of management tools, which enables efficient monitoring of all the devices. This allows us to zone devices, target video content, play out schedules based on a whole range of the decisions around location, and so on,” Farquhar says. “These can be very large, very complex systems, and there’s lots of different content that has to be managed, coordinated, and tracked. And our tools facilitate all that for our customers.”

Exterity uses Intel® technology to support video encoding solutions that create content streams for distribution through the networks their customers use. The Intel® Media SDK ensures that the video is in the right format for the right device and is delivered effectively.

“Intel® technology is fundamental to the complete end-to-end solution that we provide to support video distribution and delivery to video signage,” Farquhar says.

Digital Signage Video Content Drives Business Operations

That kind of support for video signage distribution has helped organizations in many sectors educate and delight their customer audience. Interactivity and personalization are key here. Exterity works with the cruise industry, for example, and integrates video across an entire ship, down to the guest rooms.

In-cabin screens present an additional opportunity to personalize the content and offer specialized promotions. Systems can leverage data from loyalty programs to tailor content. Video need not be for entertainment alone either. Exterity has also used video signage effectively to train bank employees.

The future of video signage and streamed content will focus on seamless interactive personalization—whether on large screens or an individual mobile phone. While some such personalization is already being delivered, glitches still abound. “The interactive experience is not quite joined up yet,” Farquhar says. “In the future, we will see how more of these technologies blend together to provide a much more seamless and interactive experience.”

Increasing 5G coverage will help solve some of the connectivity challenges—another exciting development to look forward to. Will we ever see a Minority Report-like situation where a Tom Cruise walking down the street will have an entire billboard tailored just for him? Sure, once we overcome privacy issues, Farquhar says, adding, “Proximity is an interesting concept that we’re only just beginning to take advantage of. It opens up a lot of interesting applications.”

Whether through the Chase Center jumbotron or on your mobile device, the seamless and immersive qualities of video signage will hold your attention and drive operator revenue, along with whatever other business outcomes enterprises are looking to deliver.

So the next time Farquhar attends a Warriors game, he might receive an alert encouraging him to buy a Steph Curry T-shirt whenever the superstar nets a game-winning basket. Or, if you’re a soccer fan, every time Cristiano Ronaldo nets a goal.

With video signage and content delivery, everybody scores—and everybody wins.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

Accounting for the Human Factor in Manufacturing Operations

If you are a manufacturing company building a product, you want to ensure the production and quality remain consistent. But how do you account for the human element and its inevitable variability? Advancements in computer vision technology, artificial intelligence, and machine learning now make it possible to pair human behavior analysis with traditional assembly line machine metrics.

Operations become faster, simpler, and more scalable when manufacturers can easily track specific key performance indicators such as total cycle time, throughput, scrap, availability, and changeover time.

“The human factor is the most difficult thing to control. In factories, everyone has their own manufacturing procedures even if they are in the same industry,” says Joseph Huang, Sales Director at Vecow, a developer of machine vision and imaging solutions. “When human behavior analysis is included in the assembly line, manufacturers understand the performance of each operator instead of just a group of operators. With that information, you can address inconsistencies and reward your best workers.”

AI-Powered Analytics Accelerate Worker Performance

The value of being able to detect human divergence from standard operating practices was made evident at an electric motor manufacturer. The company found that failure to comply with procedures was costing it in wasted materials and rework.

The manufacturer added human behavior analysis to its motor assembly line, and was able to accelerate its performance analysis, improve production quality, and identify production process improvements.

It did this with the Human Behavior Analysis Solution from Vecow, which uses AI inference models to detect abnormalities on the production line in real time and prevent costly problems before they occur.

In the past, assembly process operational improvements depended solely on optimizing assembly line machines. But this approach ignored the human impact on the process. By having real-time access to metrics on both machine and human behavior, manufacturers can identify areas of improvement, change or modify human activities and procedures, and enhance production schedules.

“We are helping manufacturers understand the real-time performance of their assembly line. When there is a new inquiry or work order, the production manager will know exactly how to arrange the work order and how to best manufacture it with the best performance,” says Huang.

By having real-time access to metrics on both #machine and human behavior, #manufacturers can identify areas of improvement, change or modify human activities and procedures, and enhance production schedules. @VecowCo via @insightdottech

The solution not only measures the impact of human operators but also looks at the performance of each individual operator to improve employee productivity. “If I’m an operator and my performance is only being recognized as a group, what is the incentive to work harder? This system allows managers to understand how much time and effort their operators are putting in and reward them accordingly,” Huang explains.

The Vecow Human Behavior Analysis solution can also be used to help ensure manufacturers are complying with regulatory requirements. For instance, in the semiconductor industry, operators must follow wafer production standard operating practices. As part of the ISO 9001 Quality Certification, they must also submit their manufacturing process data.

“Normally, that data is provided by filling out paper forms or manually submitting numbers online. But since this is not in real time, it becomes outdated and is prone to errors,” says Huang. “Using human behavior analysis, data can be submitted in real time. And in some cases, the system can even send an alert about potential issues.”

Real-Time Data Reflects Real-Life Performance

Creating models that accurately reflect real-life production line performance is complex, so Vecow’s Human Behavior Analysis solution uses the VHub AI Developer software platform to ease the development process.

The platform includes deep learning, model training, and labeling tool capabilities to enable developers to build AI applications with computer vision capabilities.

The solution connects to cameras at the edge to process data and spot inconsistencies. To protect personal privacy, managers can access and view only the flagged abnormal activities rather than the whole video feed, Huang explains.
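
A minimal sketch of that privacy pattern, with illustrative function names, might look like this:

```python
# Continuous video is analyzed on-device, but only short clips flagged
# as abnormal are surfaced for review; normal footage is never shown.
def review_queue(frames, detect_abnormal, clip_around):
    """Yield only the segments a manager is allowed to see."""
    for i, frame in enumerate(frames):
        event = detect_abnormal(frame)            # edge inference per frame
        if event:
            yield clip_around(i, seconds=10), event   # short excerpt only
```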

The use of pre-trained, industry-specific models also eliminates the need for software development skills during deployment. The solution uses a graphical user interface for configuration and customization.

“The biggest differentiator is that this is a no-code AI platform. It is an all-in-one solution for the end user to easily deploy and optimize the deployment process,” Huang explains.

Feature engineering and model training account for most of a developer’s time, according to Huang. With Vecow, nontechnical users can leverage cloud-based auto-labeling capabilities to simplify that process and save significant time. In addition, model testing is conducted in the cloud to ensure accuracy and effectiveness before the model is ingested by the embedded inference engine.

Powerful Intel® Core i5 and i7 CPUs deliver the computing power necessary to process computer vision data flows. The use of the Intel® OpenVINO Toolkit within the solution dramatically improves the model generation process by converting models to IR files and minimizing the size of the model. “We don’t have to put really powerful computing powers at the GPU, and that helps us reduce the cost of deployment,” Huang says.
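
For readers curious what that workflow looks like, here is a rough sketch of converting a trained model to IR files offline and then running it on a CPU with the OpenVINO runtime. The model and file names are placeholders, not Vecow’s actual assets.

```python
# Offline, once (legacy Model Optimizer CLI ships with OpenVINO):
#   mo --input_model behavior_model.onnx --output_dir ir/
# That produces compact .xml/.bin IR files for the inference engine.
from openvino.runtime import Core
import numpy as np

core = Core()
model = core.read_model("ir/behavior_model.xml")   # IR produced above
compiled = core.compile_model(model, "CPU")        # no discrete GPU needed

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # preprocessed frame
result = compiled([frame])[compiled.output(0)]        # CPU inference
print(result.shape)
```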

The Vecow Human Behavior Analysis solution solves manufacturers’ inability to efficiently apply numerical analysis to human-centered assembly line processes and inputs. As the manufacturing space continues to improve and streamline operations, access to real-time data will be revolutionary.

Related Content

To learn more about how technology is transforming the manufacturing industry, read Machine Vision Makes Industrial Robots See and Improve.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.