Machine Learning Simplified: With MindsDB

Machine learning is no longer just for the AI experts of the world. With ongoing initiatives to democratize the space, it’s for business users and domain experts now, too. Users no longer need programming knowledge to build and deploy machine learning models.

But democratizing machine learning does not mean data scientists and machine learning engineers are now obsolete. When machine learning becomes simplified, it means less time spent acquiring, transforming, cleaning, and preparing data to train and retrain models. Instead, they can focus on the core aspects of machine learning, like unlocking valuable data and enabling business results.

In this podcast, we talk to machine learning solution provider MindsDB about why machine learning is crucial to a business’ data strategy, how democratizing machine learning helps AI experts, and the meaning of in-database machine learning.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: MindsDB

Our guest this episode is Erik Bovee, Vice President of Business Development for MindsDB. Erik started out as an investor in MindsDB before taking a larger role in the company. He now helps enable sophisticated machine learning at the data layer. Prior to MindsDB, he was a Venture Partner and Founder of Speedinvest as well as Vice President and Advisory Board Member at the computer networking company Stateless.

Podcast Topics

Erik answers our questions about:

  • (2:34) The current state of machine learning
  • (7:07) Giving businesses the confidence to create machine learning models
  • (8:48) Machine learning challenges beyond skill set
  • (11:24) Benefits of democratizing machine learning for data scientists
  • (13:39) The importance of in-database machine learning
  • (17:22) How data scientists can leverage MindsDB’s platform
  • (19:37) Use cases for in-database machine learning
  • (23:35) The best places to get started on a machine learning journey

Related Content

For the latest innovations from MindsDB, follow them on Twitter at @MindsDB and on LinkedIn.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about machine learning as part of your data strategy with Erik Bovee from MindsDB. But before we jump into the conversation, let’s get to know our guest. Erik, welcome to the show.

Erik Bovee: Thank you, yeah, it’s great to be here.

Christina Cardoza: What can you tell us about MindsDB and your role there?

Erik Bovee: So MindsDB is a machine learning service, and I’ll get into the details. But the goal of MindsDB is to democratize machine learning, make it easier and simpler and more efficient for anybody to deploy sophisticated machine learning models and apply them to their business. I’m the Vice President of Business Development, which is a generic title with a really broad role. I do—I’m responsible for our sales, but that’s kind of a combo of project management, some product management, kind of do everything with our customers. And then a really important aspect that I handle are our partnerships. So one of the unique things about MindsDB is that we enable machine learning directly on data in the database. So we connect to a database and allow people to run machine learning, especially on their business data, to do things like forecasting, anomaly detection. So I work with a lot of database providers, MySQL, Cassandra, MariaDB, Mongo, everybody. And that’s one of the key ways that we take our product to market: working with data stores and streaming brokers, data lakes, databases, to offer machine learning functionality to their customers. So I’m in charge of that. And also work with Intel®. Intel’s provided a lot of support. They’re very close with MariaDB, who’s one of our big partners, and Intel also provides OpenVINO™, which is a framework which helps accelerate the performance of our machine learning model. So I’m in charge of that as well.

Christina Cardoza: Great. I love how you mentioned you’re working to democratize machine learning for all. I don’t think it’s any surprise to businesses out there that machine learning has become a crucial component of a data management strategy—especially, you know, with the influx of data coming from all of these IoT devices, it’s difficult to sift through all of that by yourself. But a challenge is that there are not a lot of machine learning skills to go around for everybody. So I’m wondering, to start off the conversation, if you can talk about what the state of machine learning adoption looks like today.

Erik Bovee: Yeah, I mean, you summed up a couple of the problems really well. The amount and the complexity of data is growing really quickly. And it’s outpacing human analytics, and even algorithmic-type analytics, traditional methods. And also, machine learning is hard. You know, finding the right people for the job is kind of difficult. These resources are scarce. But in terms of the state of the market, there are a couple of interesting angles. First, the state of the technology itself, the core machine learning models, is amazing. You know, just the progress made over the last five to ten years is really astonishing. And cutting-edge machine learning models can just solve crazy hard real-world problems. If you look at things like what OpenAI has done with their GPT-3 large language models, which can produce human-like text, or even consumer applications—you’ve probably heard of Midjourney, which you can access via Discord, and which, based on a few keywords, can produce really sophisticated, remarkable art. There was a competition—I think it was in Canada recently—that a Midjourney-produced piece won, much to the annoyance of actual artists. So the technology itself can do astonishing things.

From an implementation standpoint though, I think the market has yet to benefit broadly from this. You know, even autonomous driving is still more or less in the pilot phase. And the capabilities of machine learning are amazing in dealing with big problem spaces—dynamic, real-world problems, but adapting these to consumer tech is a process. And they’re just—there are all kinds of issues that we’re tackling along the way. One is trust. You know, not just, can this thing drive me safely? But then also, how do I trust that this model’s accurate? Can I put my—the fate of my business on this forecasting model? How does it make decisions? So those are, I think those are important aspects to getting people to implement it more broadly.

And then I think one of the things that’s really apparent in the market, as I’m dealing with customers, are some of the hurdles to implementation. So, cutting-edge machine learning resources are rare, which we said, but then also a lot of the simpler stuff, like machine learning operations, turns out to be more of a challenge, I think, than people anticipated. So the data collection, the data transformation, building all the infrastructure to do this ETL—extracting data, transforming it, loading it from your database into a machine learning application—and then maintaining all this piping and all these contingencies. Model serving is another one. Your web server is not going to cut it when you’re talking about large machine learning models, for all kinds of technical reasons. And these are all being solved piecemeal as we speak. But the market for that is in an early stage. Those are dependencies that are really important for broad adoption of machine learning.

But there are a few, I would say there are a few sectors where commercial rollout is moving pretty fast. And I think they’re good bellwethers for where the market is headed. Financial services is a good example and has been for a few years. Big banks, investment houses, hedge funds—they’ve got the budgets and the traditional approach to hiring around a good quant strategy. They’re moving ahead pretty quickly, and often with well-funded internal programs. Those give them a really big edge, and they’ve got the money to deploy this; even a narrow business advantage in things like forecasting and algorithmic trading is tremendously important to their margins. So I’ve seen a lot of progress there. But a lot of it is also throwing money at the problem and kind of solving these MLOps questions internally—not necessarily applicable to the broader market.

The next are, I would say, industrial use cases. You had mentioned IoT. That’s where I see a lot of progress as well, especially in things like manufacturing. For example, taking tons of high-velocity sensor data and doing things like predictive maintenance. You know, what’s going to happen down the line? When will this server overheat, or something? That’s where we’ve seen a lot of implementation as well. I think those sectors, those market actors are clearly maturing quickly.

Christina Cardoza: So, great. Yeah. I want to go back to something you said about trust, because I think trust goes a little bit both ways. You mentioned how businesses have to trust the solution or the AI to do this correctly and accurately. But I think there’s also a trust factor that the person deploying or training the machine learning models knows what they’re doing. And so, when you democratize AI, how can business stakeholders be confident and comfortable that a business or an enterprise user is training and creating these models and getting the results that they’re looking for?

Erik Bovee: Yeah. I think a lot of that starts with the data. Really understanding your data, making sure there aren’t biases. Explainable AI has become an interesting subject over the last few years as well—looking at visualizations, different techniques like Shapley values or counterfactuals, to see where and how this model is making decisions. We did a big study on this a few years back. Actually, one of the most powerful ways of getting business decision-makers on board and understanding exactly how the model operates—which is usually pretty complex even for machine learning engineers; once the model is trained, what the magic is that’s going on internally is not always really clear—is providing counterfactual explanations. So, changing the data in subtle ways so that you get a different decision. Maybe the machine learning forecast will change dramatically when one feature in the database, or a few data points, or just a very slight change is made, and understanding where that threshold is tells you what’s really triggering the decision-making or the forecasting of the model, and which columns or which features are really important. If you can visualize those, it gives people a much better sense of what’s going on and how the decisions are weighted. That’s very important.
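As a toy sketch of that idea—the decision rule, feature names, and thresholds below are invented for illustration, not MindsDB’s actual explainer—a counterfactual search simply perturbs one input until the model’s decision flips:

```python
# Toy counterfactual search against a hypothetical loan-approval model.
# The scoring rule and feature names are made up for illustration.

def approve(income, debt_points):
    """Hypothetical decision rule: an integer score must clear a threshold."""
    score = income // 1000 - debt_points
    return score > 25

def counterfactual_income(income, debt_points, step=1000, cap=1_000_000):
    """Raise income in steps until the decision flips.

    The gap between the original and flipped value is the counterfactual:
    'you would have been approved if income were X instead of Y'.
    """
    candidate = income
    while not approve(candidate, debt_points) and candidate < cap:
        candidate += step
    return candidate

original_income, debt = 20_000, 6
print(approve(original_income, debt))             # False: rejected as-is
print(counterfactual_income(original_income, debt))  # 32000: smallest flipping income
```

The output—“you would have been approved at an income of 32,000”—is the kind of concrete, human-readable explanation that tends to build stakeholder trust, because it names the feature and the threshold that actually drive the decision.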

Christina Cardoza: Absolutely. So, I’m also curious—you know, we mentioned some of the challenges, a big one being not enough skill set or data scientists available within an organization. But I think even if you do have the skills available, it’s still complex to train machine learning models or to deploy them to applications. So can you talk about some of the challenges businesses face beyond skill set?

Erik Bovee: Interestingly—so, skill set is one, but that, I think, will diminish over time. There are more and more frameworks that allow people—just data analysts or data scientists—to get access to more sophisticated machine learning features. AutoML has become a thing over the past few years, and you can go a long way with AutoML frameworks like DataRobot or H2O. What is often challenging are some of the simple things, some of the simple operational things in the short term, on the implementation side. You know, a lot of the rocket science is already done by these fairly sophisticated core machine learning models, but a huge amount of a data scientist’s or ML engineer’s time is spent on data acquisition, data transformation, cleaning the data and encoding it, building all the pipeline for preparing this data to train and retrain a model. Then maintaining that over time.

You know, the data scientist tool set is often based on Python, which is where a lot of these pipelines are written. Python is arguably not very well adapted to data transformations. And then what happens? You’ve often got this bespoke Python code written by a data scientist, maybe things that are being done in a Jupyter Notebook somewhere, and then it becomes a pain to update and maintain. What happens when your database tables change? Then what do you do? You’ve got to go back into this Python, and it’s all reliant on this one engineer to kind of update everything over time. And so that, I think—the MLOps side—is one of the biggest challenges. How do you do something that is efficient and repeatable and also predictable in terms of cost and overhead over time? That’s something that we’re trying to solve.

And one of the theories behind that, behind our approach, is just to bring machine learning closer to the data and to use existing tools like SQL to do a lot of this stuff. SQL is pretty well adapted to data transformation and manipulating data—that’s what it’s designed for. And so why not find a way where you can apply machine learning directly, via a connection to your database, and use your existing tools, and not have to build any new infrastructure? So I think that’s a big pain point—one of the bigger bottlenecks that we’re trying to solve actively.

Christina Cardoza: So, you touched on this a little bit, but I’m wondering if you can expand on the benefits that the data scientists will actually see if we democratize machine learning. How can they start working with some of those business users together on initiatives for machine learning?

Erik Bovee: Yeah. So one of our goals is to give data scientists a broader tool set, to save them a lot of time on the cleanup and the operational tasks that they have to perform on the data, and to allow them really to focus on core machine learning. So the philosophy of our approach—we take a data-centric approach to machine learning. You’ve got data sitting in the database, so why not bring the machine learning models to the database, and allow you to do your data prep and train a model—let’s say, for example, in an SQL-based database—using simple SQL, with some modifications to the SQL syntax on MindsDB’s side. We’re not consuming database resources; you just connect MindsDB to your database. We read from the database, and then we can pipe machine learning predictions—let’s say business forecasts, for example—back to the database as tables that can then just be read like your other tables.

The benefit there for data analysts, and for any developer who’s maybe building some application on the front end that wants to make decisions—algorithmic trading, or, you know, anomaly detection, where you want to send up an alert when something’s going wrong, or you just want to visualize it in a BI tool like Tableau—is that you can use the existing code that you’ve got. You simply query the database just like you would from another application. There’s no need to build a special Python application or connect to another service. It’s simply there. And you access it just like you would access your data normally. So that’s one of the business benefits: it cuts down considerably on bespoke development, is very easy to maintain in the long term, and you can use the tools you already have.

Christina Cardoza: So you mentioned you’re working to bring machine learning closer to the data, or bringing machine learning into the database. I’m wondering, is this how, traditionally, machine models have—machine learning models have been deployed, or is there another way of doing it? So, can you talk about how that compares to traditional methods—bringing it into the database versus the other ways that organizations have been doing this?

Erik Bovee: So, traditionally machine learning has been approached like people would approach a PhD project, or something. You would write a model using an existing framework like TensorFlow or PyTorch, usually writing a model in Python. You would host it somewhere—probably not with a web server; there are Ray and other frameworks that are well adapted to model serving. And then you have data you want to apply. It might be sitting all over the place—maybe it’s in a data lake, some in Snowflake, some in MongoDB, wherever. You write pipelines to extract that data and transform it. You often have to do some cleaning, and then data transformations and encoding. Sometimes you need to turn this data into a numerical representation, into a tensor, and then feed it into a model and train the model. The model will spit out some predictions, and then you have to pipe those back into another database, perhaps, or feed them to an application that’s making some decisions. So that would be the traditional way. So you can see there’s a bespoke model that’s been built. There’s a lot of bespoke infrastructure, pipelines, ETL that’s been done. That’s the way it’s been done in the past.

With MindsDB, what we did—well, MindsDB has two components. One is a core suite of machine learning models. There’s an AutoML framework that does a lot of the data prep and encoding for you. We’ve built some models of our own, and some are also built by the community—I should mention MindsDB is one of the largest machine learning open source projects; we have close to 10,000 GitHub stars. And there’s a suite of machine learning models adapted to different problem sets—regression models, gradient boosters, neural networks, all kinds of things. MindsDB can look at your data, make a decision about which model applies best, and choose that.

The other piece of this core, this ML core of MindsDB, is that you can bring your own model to it. So if there’s something you like particularly—say, an NLP model, a language-processing model, from Hugging Face—you can actually add that to the MindsDB ML core using a declarative framework. So, back in the day, if you wanted to make updates to a model or add a new model, you’d have to root around in someone else’s Python code. But we allow you to select models—select the model you want, bring your own model. You can tune some of the hyperparameters—things like learning rate—or change weights and biases using JSON, a human-readable format. So it makes it much easier for everybody to use.

And then the other piece of MindsDB is the database connector—a wrapper that sits around these ML models and provides a connection to whatever data source you have. It can be a streaming broker—Redis, Kafka. It can be a data lake like Snowflake. It can be an SQL-based database, where MindsDB will connect to that database, and then, using the native query language, you can tell MindsDB, “Read this data and train a predictor on this view or these tables or this selection of data.” MindsDB will do that, and then it will make the predictions available. Within your database you can query those predictions just like you would a table. So it’s a very, very different concept from your traditional kind of homegrown, Python-based machine learning applications.
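To make that flow concrete, it looks roughly like this in MindsDB’s SQL dialect—the data source, table, and column names below are invented for illustration, and the exact syntax varies between MindsDB releases:

```sql
-- Train a predictor on data already sitting in a connected database.
-- MindsDB reads the SELECT's result set and builds a model from it.
CREATE MODEL mindsdb.rental_price_predictor
FROM my_datasource (SELECT * FROM home_rentals)
PREDICT rental_price;

-- Query predictions the same way you would query an ordinary table.
SELECT rental_price
FROM mindsdb.rental_price_predictor
WHERE sqft = 900
  AND location = 'downtown';
```

Because the predictions surface as a queryable table, existing dashboards and applications can consume them with the same SQL they already use—no separate serving layer or bespoke pipeline.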

Christina Cardoza: And it sounds like, with a lot of the features that MindsDB is offering with its solution, data scientists can go in themselves and expand on their machine learning models and utilize this even more. So if you do have a data science team available within your organization, what would be the benefit of bringing MindsDB in?

Erik Bovee: This is the thing that I think is important to make really clear. We are not replacing anybody, and it’s not really just an AutoML framework—it allows for far more sophisticated applications of machine learning than a tool that just gives you a good approximation of what a hand-tuned model would do. Basically, for a machine learning engineer or a data scientist internally, MindsDB would save a tremendous amount of, you know, that 80% of their work that goes into data wrangling—cleaning, transforming, and encoding. They don’t have to worry about that. They can really focus on the core models: selecting the data they want to train from, and then building the best models, if that suits them, or choosing from a suite of models that work pretty well within MindsDB, and then also tuning those models. A lot of the work goes into adapting and changing the tuning, the hyperparameters, of a model to make sure you get the best results; we make that much simpler—you can do it in a declarative way rather than rooting around in someone’s Python code. So the whole thing is about time savings, I think, for data scientists.

And then, in the longer term, if you connect this directly to your database, what it means is you don’t have to maintain a lot of the ML infrastructure that up until now has been largely homegrown. If your database tables change, you just change a little bit of SQL—what you’re selecting and what you’re using to train a predictor. You can set up your own retraining schema. There are just lots and lots of operational time- and cost-saving measures that come with it. So it allows data scientists and machine learning engineers really to focus on their core job and produce results in a much faster way, I think.

Christina Cardoza: Great. Yeah. I love that point you made that it’s not meant to replace anybody or data science per se, but it’s really meant to boost your machine learning efforts and make things go a little bit smoother.

Erik Bovee: Yeah. In a nutshell, it just saves a data scientist tons of time and gives them a richer tool set. That’s—that was our goal.

Christina Cardoza: So, do you have any customer examples or use cases that you can talk about?

Erik Bovee: Yeah, tons. I mean, the use cases we concentrate on fall into two buckets. We really focus on business forecasting, often on time-series data. And time-series data can be a bit tricky even for seasoned machine learning engineers, because you’ve got a high degree of cardinality. You’ll have tons of data, let’s say, where there are many, many unique values in a column, for instance—by definition that’s what a time series is—and imagine you’ve got something like a retail chain that has maybe thousands of SKUs, thousands of product IDs, across hundreds of retail shops, right? That’s just a complex data structure, and you’re trying to predict what’s going to sell well—maybe a certain SKU sells well in Wichita, but it doesn’t sell well in Detroit. How do you predict that? That’s a sticky problem to solve because of the high degree of cardinality in these large, multivariate time series. But it also tends to be a very common type of data set for business forecasting. So we’ve really focused our cutting-edge large models on time-series forecasting. That’s what we do: we will tell you what your business is going to look like in the future, in weeks or months.

The other thing that we see in the use cases—so it’s forecasting on time series, and then also anomaly detection: fraudulent transactions, or, is this machine about to overheat? Getting down into the details, I can tell you, across the board, there are all kinds of different use cases. One very typical one is for a big cloud service provider: we do customer-conversion prediction. They have a generous free-trial tier, and we can tell them with a very high degree of accuracy—based on lots of columns in their customer data store, lots of different types of customer activity, and the structure of their customer accounts—who’s likely to convert to a paying tier and when. And precisely when, which is important for their business planning. We’re working with a large telco infrastructure company on network planning, capacity planning. So we can predict fairly well where network traffic is going to go, where it’s going to be heavy and not, and where they need to add infrastructure.

We’ve also worked on—this is a typical IoT case—manufacturing-process optimization in semiconductors. So we can look in real time at sensor data coming in from the semiconductor process, and we can say when to stop and go on to the next phase of the process, and where defects are also likely to arise, based on anomaly detection on the process. That’s one we’ve seen working on one project in particular, but we’ve seen a couple like that in pilot phases. We’ve also been doing credit scoring in real estate—payment-default prediction—as part of the business forecasting. So, those are all typical, and, across the board, we see forecasting problems on time series.

One of the most enjoyable projects—it’s unique and interesting, and it’s really close to my heart—is that we’re working with a big esports franchise building forecasting tools for coaching for video games, for professional video game teams. Like, what can you predict about what the other team’s going to do, for internal scrimmages and internal training for their teams? And what would be the best strategy given a certain situation in complex MOBA games, like League of Legends or Dota 2? So that’s something we’re working on right now. They’ve already built the front end of these forecasting tools, and we’re working with very large proprietary data sets of internal training data to help them optimize their coaching practices. It’s an exotic case, but I guarantee you that’s going to grow in the future. So that’s one of the most interesting ones.

Christina Cardoza: So, lots of different use cases in ways that you can bring these capabilities into your organization efforts. But I’m wondering, in your experience, what is the best place to start on democratizing machine learning for a business? Where can businesses start using this? And where do you recommend they start?

Erik Bovee: Super easy: cloud.mindsdb.com. We have a free-trial tier, and it’s super easy to set up and get signed up for an account. And then we have—God knows how many—50-plus data connectors. Wherever your data’s living, you can simply plug in MindsDB and start to run some forecasting and do some testing and see how it works. I mean, you can take it for a test drive immediately. That’s one of the first things that I would recommend you do. The other thing is you can join our community. If you go to MindsDB.com, we’ve got a link to our community Slack and to GitHub, which is extremely active. And there you can find support and tips. And if you’re trying to solve a problem, it’s almost guaranteed someone has solved it before and is available on the Slack community.

Christina Cardoza: Great. I love when projects and initiatives have a community behind it, because it’s really important to learn what other people have been doing and to get that outside support or outside thinking that you may not have been thinking about. And I know you mentioned, Erik, in the beginning, you guys are also working with Intel on this. I should mention the IoT chat and insight.tech as a whole are sponsored by Intel. But I’m curious how you are working with Intel and what the value of that partnership has been.

Erik Bovee: Yeah, so that’s actually been—Intel has been extremely supportive on a number of fronts. So, obviously, Intel has a great hardware platform, and we have implemented their OpenVINO framework, which optimizes machine learning for performance on Intel hardware. So we make great performance gains that way. And on top of that, Intel provides tons of technology and kind of go-to-market opportunities. We work with them on things like this. I’ll be presenting at the end of the month, if anybody wants to come check us out at Intel Innovation in San Jose—I think it’s on the 27th and 28th of this month at the San Jose Convention Center. And we’ll have a little booth in the AI/ML part of their innovation pavilion. And I’ll be demoing how we work, running some machine learning on data in MariaDB, which is an Intel partner. Actually, MariaDB introduced us to Intel, and that’s been really fruitful. Their cloud services are hosted on Intel. So if anybody wants to come and check it out—Intel has provided us this forum, and we’re extremely grateful.

Christina Cardoza: Perfect, and insight.tech will also be on the floor at Intel Innovation. So, looking forward to that demo that you guys have going on there at the end of the month. Unfortunately, we’re running towards the end of our time. I know this is a big, important topic and we could probably go on for a long time, Erik, but before we go, are there any final key thoughts or takeaways you want to leave our listeners with today?

Erik Bovee: I would just urge you to go test it out—we love the feedback. It’s actually fun; MindsDB is pretty fun to play with. That’s how I got involved. I discovered MindsDB by chance, installed it, started using it, and found it was just useful on all kinds of data sets, even just doing science experiments. We love it. If you take it for a test drive, provide feedback on the community Slack—we’re always looking for product improvements and people to join the community, and we’d really welcome that: cloud.mindsdb.com. And thanks very much for the opportunity, Christina.

Christina Cardoza: Yep, of course. Thank you so much for joining the podcast today. It’s been a pleasure talking to you. And thanks to our listeners for tuning in. If you like this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Edge AI Advances Sustainable Smart Cities

Cities and towns everywhere want to improve air quality, liveability, and traffic flow. But when it comes to creating solutions, they often operate in the dark.

“Many towns have antiquated methods of counting traffic volume and pedestrian activity, and rely on anecdotal information to make planning, policy, and investment decisions,” says Patrick Mitchell, Head of Smart Cities and Places for SSE Energy Solutions, a division of UK-based energy provider and distributor SSE.

The lack of evidential data makes it difficult to justify spending taxpayer money on green projects that may or may not work as intended or forecasted. And once a new walking trail, access road, or park-and-ride system is up and running, officials lack the means to measure its effects. As a result, many of those in local government are challenged to progress beyond baby steps toward achieving their sustainability goals.

New AI-based optical sensor systems are starting to change the picture. By collecting and analysing data linked to traffic and pedestrians, in combination with information from environmental sensors and other sources, officials can make evidence-based decisions supporting green projects. Once the new solutions are deployed, results can be monitored with outcomes influencing optimisation and change as towns grow and transform.

Smart Spaces Lower Energy Costs, Boost Safety

Though IoT sensor technology is complex, it can be easily deployed on streetlight poles. SSE has been doing just that since 2010, developing remotely controlled streetlights for local authorities, including Hampshire County Council and Southampton City Council. SSE solutions have since been installed on more than 400,000 light poles in the UK and Ireland.

Sensors attached to the poles enable town administrators to remotely turn on, turn off, or dim lights, helping reduce energy costs and lower their carbon footprint. And if a nighttime incident occurs, authorities can literally “shed light” on the problem for arriving emergency crews.

“Replacing anecdotal evidence with hard #data is a very powerful #tool that allows local authorities to substantiate policy changes” – Patrick Mitchell, via SSE Energy Solutions @insightdottech

Building Sustainable Smart Cities

The street lighting control deployment has been followed by the development of the SSE Sentinel optical sensor. Installed on light poles in Cornwall, Slough, Pembrokeshire, and other locales, optical sensors capture in-depth information about street activity—giving town administrators the tools they need to advance greener planning.

Running on a lightweight AI edge gateway, Sentinel collects and processes detailed images of vehicle and pedestrian traffic. It securely sends select data over a cellular network to the SSE Smart City and Places platform, where it can be visualised and analysed for planning and evidential insights (Figure 1).

Sensors mounted on streetlights collect pedestrian, vehicle, and traffic information as part of AI-assisted city planning.
Figure 1. The SSE Sentinel optical sensor collects vehicle, pedestrian, and other data that help design sustainable smart cities. (Source: SSE Energy Solutions)

“The data reveals and highlights classifications such as heavy-goods vehicle, light goods vehicle, taxis, buses, motorcycles, e-scooters, etc.,” Mitchell says. “It also provides pedestrian data, which can include flow and movement path.”

To process data-heavy images in near-real time, the solution uses high-performance Intel® processors. Algorithms deployed through the Intel® OpenVINO Toolkit can scrub private details such as facial features and license plate numbers, transmitting only information that towns and cities need.
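The scrubbing step described above can be illustrated with a minimal sketch. Note the assumptions: the real system uses OpenVINO detection models to find faces and license plates; here the detected bounding boxes are taken as given, and the `anonymize_regions` helper is invented for illustration only.

```python
import numpy as np

def anonymize_regions(frame, boxes, block=8):
    """Pixelate detected regions (e.g., faces, license plates) so only
    non-identifying information leaves the edge device.
    frame: HxWx3 uint8 image; boxes: list of (x, y, w, h) detections."""
    out = frame.copy()
    for x, y, w, h in boxes:
        roi = out[y:y + h, x:x + w]
        # Downsample, then expand each coarse pixel back to a block:
        # the fine detail is discarded, so the step is irreversible.
        small = roi[::block, ::block]
        out[y:y + h, x:x + w] = np.repeat(
            np.repeat(small, block, axis=0), block, axis=1)[:h, :w]
    return out
```

Pixels outside the detected boxes are left untouched, so counts and classifications remain usable downstream while identifying detail never leaves the gateway.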

Customers can combine the sensor data with information about season, time of day, weather, and critically, air quality.

“Local authorities might find a direct correlation between air quality degradation and the number and types of vehicles going through a particular area. Replacing anecdotal evidence with hard data is a very powerful tool that allows them to substantiate policy changes,” Mitchell explains.
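As a toy illustration of the kind of evidence Mitchell describes, correlating hourly vehicle counts with a pollutant reading takes only a few lines of statistics. The data here is hypothetical, and the platform's actual analytics are considerably more involved.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length series, e.g.
    hourly heavy-goods-vehicle counts vs. a pollutant reading.
    Returns a value in [-1, 1]; values near 1 suggest the series rise together."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A strong correlation alone does not prove causation, of course, which is why combining traffic classifications with season, weather, and time-of-day data matters before substantiating a policy change.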

For example, many planners focus on revitalising their town’s main commercial streets, where noise and fumes from through-traffic can discourage pedestrians and shoppers. Collecting and analysing data linked to air quality, vehicle traffic, and pedestrian movement can support change. Officials may decide to reroute traffic, lower speed limits, develop green pathways for cyclists and pedestrians, or build park-and-ride systems. “Getting people out of fuel-guzzling cars and into public transport is a major goal for local authorities,” Mitchell says.

Cities can also schedule public works projects to be less disruptive. “If you have evidence that a road closure or maintenance has a direct impact on commerce, you can ask, ‘Was that the right time for this activity, or could we have done it in a different way?’” Mitchell says.

Once a solution is in place, officials can monitor traffic, air quality, and pedestrian activity to see how well it is working. Positive results could increase public confidence and pave the way for more sustainable projects.

AI and City Planning

As environmental and sustainability concerns grow, cities and communities are likely to extend their use of smart spaces, Mitchell believes. For example, sensors in water drainage systems could detect pipe blockages and inform maintenance crews. By combining this data with historical information about floods and weather conditions, city officials could predict the effects of an upcoming storm and prioritise fixes to minimise flood damage.

Predictive analytics and smart technologies could also help authorities roll out smart and sustainable initiatives more efficiently.

“Electric vehicles will replace traditional petrol and diesel transport in the coming years. Planners need to know where to place EV charging hubs and enhance the infrastructure that carries that electricity, because it simply isn’t adequate in all locations at the moment,” Mitchell says.

As cities scrutinise their energy use more closely, they may decide to use artificial intelligence analytics as the backbone of carbon trading systems. “Data and analytics will be vitally important,” says Mitchell. “We believe software platforms will underpin the management and planning of infrastructure in the future.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI Avatars Report for Duty to Bring Public Spaces Online

Businesses of all types struggled through the pandemic, but public venues like museums were taken completely out of commission. This lasted for more than a year in some places, and even after restrictions lifted, these facilities had to remain clean and safe while patrons overcame their suspicion of public spaces.

As we’ve all come to learn, one of the many paradoxes of pandemic life is that human staffers who once assisted guests can now be a deterrent. This has left public spaces like the Ontario Regiment Museum with no choice but to turn to technology for answers. And the response so far is familiar, but not at all what you’d expect (Video 1).

Video 1. CloudConstable’s Master Corporal Lana is an AI-powered Animated Virtual Assistant (AVA) that serves as a docent and health screener at the Ontario Regiment Museum. (Source: Ontario Regiment Museum)

Meet Master Corporal (MCpl) Lana. She is a health screener who greets visitors, as well as a docent capable of providing information about the museum and its events. She also happens to be an AI.

Or, more accurately, an Animated Virtual Assistant (AVA).

The Anatomy of an Animated Virtual Assistant

Developed by Canadian high-tech startup CloudConstable, MCpl Lana is just one of several customizable AVA characters originally designed to improve user experiences in smart city, healthcare, entertainment, and other high foot traffic use cases. When the pandemic hit, those use cases expanded to include touchless health screening and contact tracing in all types of public spaces.

“We’ve got the largest collection of operational military vehicles in North America,” says Dan Acre, Operations Manager at the Ontario Regiment Museum. “And when I say operational, they drive. We’ve got tanks from World War II to the present day.

MCpl Lana is just one of several customizable AVA characters originally designed to improve user experiences in #SmartCity, #Healthcare, entertainment, and other high foot traffic use cases, @CloudConstable via @insightdottech

“One of the ways we get an important part of our revenue is doing shows on a monthly basis,” he continues. “We have a lot of people come, pay to see the show, go to the gift shop, buy rides on the vehicles. That was our normal way of operating, but during COVID that was extremely restricted by the number of people that could come.”

With a limited number of people allowed in the museum at any given time, MCpl Lana was redeployed from her original role as a museum guide to assist with the check-in process. This meant using the AVA’s sophisticated perception sensing suite comprising an LWIR thermal scanner for temperature screening, microphones that connect to cloud-based speech engines, and two Intel® RealSense cameras for anonymized people counting, facial, and gesture recognition (Figure 1).

Animated Virtual Assistant displayed on screen with Intel® RealSense™ cameras
Figure 1. CloudConstable’s Animated Virtual Assistants (AVAs) leverage two Intel® RealSense cameras as a cornerstone of their perception. (Source: Intel)

The RealSense cameras allow Lana to count guests, scan tickets or QR codes, and capture “air touch” inputs so users can make selections without contacting surfaces. Images and video taken by the cameras feed into a nearby Intel® NUC 9 Pro, where AI inferencing models developed and optimized using the OpenVINO Toolkit are executed in real time to enable even more-advanced functionality.

“If you’re a member of staff and we know who you are, she can actually recognize you by facial recognition, greet you, and run through the required screening questions. And once you’re done, welcome you and say, ‘Thanks for coming. Have a good day!’” explains Michael Pickering, President and CEO of CloudConstable.

“There’s a head pose model as well that we use to know where you’re looking. We’re trying to detect nodding to say ‘Yes’ or shaking your head to say ‘No,’” he continues. “MCpl Lana wants to know if you’re off to the right or left and looks towards you to try to make virtual eye contact. We want to figure out the main person she’s interacting with, and presumably that’s somebody looking at the Animated Virtual Assistant.”
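A simplified heuristic gives the flavor of the nod/shake detection Pickering describes: a nod oscillates the head's pitch angle, a shake oscillates its yaw. This is an invented sketch with made-up thresholds; CloudConstable's actual system uses a trained head-pose model.

```python
def classify_gesture(yaw, pitch, thresh=10.0):
    """Classify a head gesture from per-frame pose angles (degrees)
    over a short window. A nod swings pitch (up/down); a shake swings
    yaw (left/right). Returns 'nod', 'shake', or None if movement is small."""
    yaw_range = max(yaw) - min(yaw)
    pitch_range = max(pitch) - min(pitch)
    if max(yaw_range, pitch_range) < thresh:
        return None  # head essentially still; no answer given
    return 'nod' if pitch_range > yaw_range else 'shake'
```

In practice the pose angles would come from the head-pose model running on the NUC, sampled over a second or two of video.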

In addition to vision processing, the NUC 9 Pro’s Core processor features Intel® Active Management Technology that permits secure remote management of the AVA platform so CloudConstable engineers can maintain and upgrade the system without having to be physically on-site or train museum staff.

“When COVID came out, it was a perfect system to do screening, ask questions, eventually take the temperature of the people with infrared sensors, and the system was smart enough that when you gave a couple of negative answers, you couldn’t come in,” Acre says.

Animated Virtual Assistants Take the Field

Obviously, the adaptability of the AVA platform from use case to use case is one of its core strengths. Not only can the animated character’s appearance be modified to fit the setting, but what and how it communicates with users can be refined as well. At the museum, for example, MCpl Lana has been tailored to respond to frequently asked questions like whether the gift shop charges sales tax or where the bathrooms or specific exhibits are.

But the museum is also investigating other unique ways to customize the AVA platform for its purposes. These include the introduction of a virtual medal exhibit in which patrons can use the AVA platform to scan Canadian military medals, then have Master Corporal Lana identify and provide historical context about them.

The museum is also working on a massive interactive exhibit that simulates a tank battle run around the actual airfield the museum sits on. As part of this initiative, CloudConstable, in collaboration with architectural design firm Cord Design, is working to contextualize the physical space of the airfield by creating an interactive map you can “drive around on” that includes 3D models of the area’s buildings, surroundings, and topography.

Ultimately, they hope to use the Intel Game Dev AI Toolkit and Gaia ML to populate the map with virtual combatant tanks and create an environment that lets users experience what it’s like to be a tank commander or gunner operating a classic war vehicle. Again, AVA will be there to assist as a guide.

Use Case Flexibility Goes Virtual

In uncertain times, flexibility is paramount. Platforms like CloudConstable’s AVA bring that adaptability to public spaces in an integrated package of hands-free technology capable of performing a variety of functions that formerly required human staff, and even some tasks humans can’t perform.

Thanks to off-the-shelf and open-source technology building blocks, flexible AI is now reporting for duty in public spaces everywhere.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Fast Track Innovative Apps: With Arrow and Scalers.ai

Are you struggling to choose between Windows and Linux operating systems? Linux is great for leveraging cloud-native workloads and advanced capabilities such as AI or ML, but most existing workloads and applications are built around Windows. The result is that businesses have to compromise on which capabilities they can bring to their infrastructure and technology stack.

That’s why Azure IoT Edge for Linux on Windows (EFLOW) was developed: to help blend the old with the new and enable Linux-based workloads to run on Windows devices. The solution is already being used to address supply chain issues by modernizing ports with cross-platform capabilities and more.

In this podcast, we will explore the challenges with today’s industry transformation efforts, the role EFLOW plays, and how to successfully implement the platform, as well as necessary partners for IoT success.

Listen Here

[podcast player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Arrow and Scalers.ai

Our guests this episode are Scott Chmiel, Business Development Manager for Cloud Solutions at Arrow, a technology solutions provider; and Steen Graham, Co-Founder and CEO of Scalers.ai.

Scott has worked at Arrow for more than 17 years in various roles, including solutions architect, field sales, and Microsoft Business Development Manager. In his current role, Scott is focused on helping customers achieve their solutions at the edge and in the cloud.

Before founding Scalers.ai, Steen worked at Intel® for more than 11 years as a General Manager for various ecosystems, including IoT, edge, and AI. At Scalers.ai, Steen works to unlock industry transformations with the Scalers AI Solution Accelerator Platform.

Podcast Topics

Scott and Steen answer our questions about:

  • (2:06) The state of today’s industry transformations
  • (3:46) The impact of IoT challenges both on a business and societal level
  • (6:08) The different players involved in transformations, for example, smart ports
  • (8:50) How to make innovative and impactful business transformations
  • (10:06) The technology strategy behind industry transformations
  • (12:05) Deploying innovative apps with EFLOW
  • (16:38) How EFLOW was used in port modernization efforts
  • (19:55) The value of EFLOW across all verticals
  • (26:05) What types of partnerships go into these transformations

Related Content

For the latest innovations from Arrow, follow them on Twitter at @Arrow_dot_com and LinkedIn at Arrow-Electronics.

 

This podcast was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech, and today we’re talking about creating innovative solutions without compromise with Scott Chmiel from Arrow, and Steen Graham from Scalers AI. But before we jump into the conversation, let’s get to know our guests. Scott, I’ll start with you. Welcome to the show. Please tell us more about yourself and your role at Arrow.

Scott Chmiel: Thank you very much, Christina. With Arrow for more than 17 years, multiple roles starting at field sales and engineering, moving through my current roles: business development for cloud and edge development, helping customers realize their solutions on edge with Arrow orchestrating and helping where we can. Multiple roles, very focused on Microsoft and other software platforms.

Christina Cardoza: Great. Looking forward to digging in more on where Arrow fits into this IoT, innovative landscape. But let’s introduce Steen as well. Welcome to the show. What can you tell us about yourself and Scalers AI?

Steen Graham: Christina, thanks so much for having me, and I must admit you guys produce some great content and have had some great guests. So, just really thrilled to be here with Scott showcasing some of the great work we’ve done together. And I’m the CEO and co-founder of Scalers AI, and we’re an enterprise AI company that’s focused on deploying AI in the physical world to drive industry transformation and ultimately enrich our lives.

Christina Cardoza: Great. And now something you both sort of mentioned in your introduction is really helping customers navigate through this new digital transformation, intelligent IoT world, and it certainly is a lot easier said than done. There’s a lot of benefits that companies want to get to, but a lot of challenges that are standing in the way. So, Scott, I want to start with you to jump into this conversation. What can you tell us about the current industry transformation efforts? How well have they been applied, and what are the challenges that businesses are coming across?

Scott Chmiel: Well, the first thing that comes to mind is the challenges have changed quite a bit over time. The complexity of solutions has just increased. From days in the past when we were talking about simple appliances or devices, where everything was contained in a single piece of hardware or software, we’re adding cloud, we’re adding complexity, we’re adding new technologies, which not only require more from the technology standpoint but different skill sets from the development side. As you create these solutions, you have to integrate them and deploy them in existing environments or customer environments that differ from one to another. There’s just a complexity of those environments. And, for instance, connected devices now require that additional operational technology security be applied and looked at to make sure that not only is the solution working but it’s secure and it’s not creating vulnerabilities. And, obviously, we can do new things that weren’t possible before; with the advancement of machine learning and AI, it’s possible to solve new business problems that we couldn’t even address in the past.

Christina Cardoza: Definitely seeing a lot of complexity and challenges on our side too, as we talk to partners on insight.tech, and I think what I see from my standpoint is that this IoT ecosystem, it’s really a partner ecosystem and there’s a lot of players involved. So it’s not only if a business is failing or a business is struggling that can have ripple effects to other issues in the industry or in society. I’m thinking about the supply chain, for instance. So, Steen, I’m wondering if you can expand on the impact that you’ve seen those challenges being, both on a business level, but a society level too.

Steen Graham: Absolutely, Christina, and I think Scott said it well as he kind of outlined the challenges that we have in deploying artificial intelligence and the IoT in the physical world to drive industry transformation. It’s really challenges across development and developing these new unique technologies. Deployment and data is incredibly important, and when you look at something like a port, obviously these ports and the infrastructure for ports has been around for decades, and so what you get is a mix of existing applications that are working just fine, just fantastic, in a port environment, but then you want to implement some technologies that Scott’s really familiar with, like those cloud-native technologies. And how do we actually deploy these cloud-native methodologies, including artificial intelligence, on existing infrastructure so they can cohabitate with the existing infrastructure and provide these added enhancements, so we can do things like analyze the efficiency of the ports and monitor the CO2 emissions so we reduce the fossil fuel burn associated with the ports as well. And so this combination of existing infrastructure and new infrastructure, both from a hardware perspective and from a software perspective, and being able to acquire the right data so you can make these near real-time AI decisions, are all very critical in driving these industry transformations and solving some of the challenges we face, for example, in our supply chain crisis.

Christina Cardoza: Great, and I want to get into more about how businesses and companies can actually apply new and innovative capabilities and technologies to some of their existing and legacy technology. But one thing that stands out to me when we’re talking about the ports—this is the part in the supply chain where you’re delivering product and containers, and trucks have to come and offload those from ships, and it’s a huge bottleneck to do that right now, which is causing a lot of delays in getting product out for businesses and end users. And there’s not one single company or business that owns a port like that. So, I’m interested with all of these different players involved in something like a port, which is just one aspect of a supply chain, how can you make innovative and impactful changes when you have multiple people and businesses involved?

Steen Graham: So, I think contextually maybe we’ll start at the macro level. The US federal government administration has been fantastic in supporting port modernization. They recently passed the infrastructure bill, which has about $17 billion allocated to modernizing our ports. In addition to that, in the newly approved Inflation Reduction Act, there was another few billion dollars around monitoring CO2 emissions at ports. But one of the really unique things about ports is they’re actually managed by the local municipalities, which is quite interesting. You can look at the ports of Long Beach and Los Angeles and see that 30% of containerized traffic in the United States goes through these two ports managed by local municipalities. So, what those leaders do locally impacts all of us at a US scale, so that’s where not only those local municipalities and their leadership is critically important; unions are also critically important as well. And so, one of the job roles that is sustained in the United States is crane operations, and what we’ve automated is the front end, actually removing the containers from the ship. But where we really have heavy, invested, human, union-based roles is in loading and unloading those trucks at ports, and so that’s one of the key bottlenecks that you find as well. But those are the three critical parties: the federal government on igniting the opportunity financially and creating the frameworks, the local municipalities are really the leaders on this front, and then finally the unions are incredibly important. They do occasionally go on strike too, so you can imagine the impact it would have if the unions went on strike in our current challenge as well. And so, those are really the key players, and obviously I didn’t talk about technology at all, because I think those three angles are tough enough to navigate. Deploying the technology performantly is a broad challenge in itself as well, and we’ll talk a little bit more about that.

Christina Cardoza: So, Scott, I’m interested from your perspective when you take all the challenges that Steen just talked about, as well as the industry business challenges that you spoke about in the beginning, how do you see businesses making impactful transformations and changes?

Scott Chmiel: Well, it’s obviously the challenge to understand what business outcome they’re seeking. That’s often the first step, is what are they trying to accomplish and who are the stakeholders. As Steen pointed out the port of Los Angeles—there’s not just one company, there’s the municipality, there’s the people handling the containers, there are the truck drivers—there are dozens, if not hundreds of subcontractors who all run that and have to dance around to move the ports. When one of those things breaks down, the whole system breaks down. So our solution focuses a little bit on one of the challenges they have there, around safety and just tracking in and out.

Christina Cardoza: Great. And seeing you mentioned tools and technologies as being a part of this, we haven’t gotten too deep into it yet, but we were talking about the ports. We have crane operations, we have different loading and offloading technology already in place—has been in place for years—but at the same time, these governing bodies and businesses want to start adding new and innovative solutions to start tackling and addressing these problems, and that can be challenging when you’ve had these other things in place for years, if not decades. So how do you go about deciding what to replace, what to upgrade, and get over challenges where you just can’t add a new technology or capability because the tools in place aren’t allowing you to do so?

Steen Graham: I think what Scott and I have looked at is how to deploy in a no-compromise way, and from a simplistic operating system perspective, which is foundational to technology, we know that there’s two pervasive operating systems in the world: notably Windows and Linux. And those cloud-native workloads in the modern AI workloads are written in Linux, whereas a lot of existing workloads and applications have been written in Windows. For example, one of the common operating systems for cranes is Siemens SIMOCRANE, which is a Windows-based operating system. For example, there’s Windows 32 applications that are alarm managers to notify the crane operators when it’s safe to proceed based on sensor-based data proximity around them. And so, with that kind of foundational element of no compromise between Windows and Linux, existing applications and new applications, we’re able to retrofit these modern AI applications on existing infrastructure and make sure that they work better together. That avoids the long conversation, Christina, that you alluded to, of when to upgrade our infrastructure, what to keep, what to remove. All of the retraining associated with employees, as well, on that digital transformation effort ultimately becomes a multiyear effort if you start replacing existing applications that are working just fine today. So layering on the modern, cloud-native attributes and AI capabilities was really the approach we used in this solution.
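For readers curious what the setup Steen describes looks like in practice, EFLOW deployment on a Windows device is driven from PowerShell via Microsoft’s AzureEFLOW module. The following is a rough sketch, not a definitive recipe: exact cmdlet parameters vary by release, and the connection string is a placeholder.

```powershell
# Deploy the EFLOW Linux VM on a Windows host (run from an elevated PowerShell
# session, after installing the AzureEFLOW module / EFLOW MSI from Microsoft).
# NOTE: parameters vary by release; consult the current EFLOW documentation.
Deploy-Eflow -acceptEula Yes -acceptOptionalTelemetry No

# Provision the VM against an Azure IoT Hub device identity
# (the connection string below is a placeholder).
Provision-EflowVm -provisioningType ManualConnectionString `
    -devConnString "HostName=<hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

# Open a shell inside the EFLOW VM to verify the IoT Edge modules are running.
Connect-EflowVm
```

Once provisioned, Linux-based IoT Edge modules—including AI inference containers—are pushed to the VM through IoT Hub deployments, while existing Windows applications continue running untouched on the host.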

Christina Cardoza: So, when we’re looking at adding cross-platform capabilities to some of these technologies and looking at the port-modernization efforts, as an example, getting Linux on cranes instead of the traditional Windows, or being able to use both together—how do you actually go about that? What is driving that cross-platform interoperability? Scott, if you want to answer that one.

Scott Chmiel: Well, a lot of times it’s the existing hardware, and I’m going to let Steen answer a little bit after. The example we used for EFLOW was a specific issue we identified, or challenge that can be addressed, but the technology, the infrastructure—whether it’s the hardware or it’s the codebase—can be applied to many different solutions depending on where you are—whether it’s a retail application, within a smart port, or in a warehouse. There’s many different places. It’s all the same types of challenges, and the same technology can be used and customized or repackaged. And retraining AI, the different Windows applications, and the integration that’s being done—that’s the opportunity for SIs, or the people building the solution: bringing this off-the-shelf technology and building solutions they couldn’t do before for the specific vertical, for the specific problem they have or identify. It’s bringing additional value to the existing hardware they have, adding value with things they couldn’t do. And in the particular example we did with the smart port, it was adding safety. One of the most significant things is the amount of people and how fast things have to move. A mistake is critical; it’s devastating if a mistake is made, if somebody’s standing where they’re not supposed to be. And that’s an example we try to illustrate. Obviously, in retail it might not be as critical a fault. But then again, some people aren’t supposed to stand someplace, or before a crane moves through a warehouse you’re making sure the space is clear. Anything you can attribute to safety is value, and ultimately a cost benefit for the customers and their customers. Steen, would you like to add a little bit to that?

Steen Graham: I think just to expand upon that is the gift that we were given, notably by Microsoft and Intel, is that underlying technology, which we use the acronym EFLOW, which is Edge for Linux on Windows. It’s now accurately described as, I think, Azure IoT Edge for Linux on Windows. Quite a mouthful, but the reason the mouthful is important is it gives us that no-compromise capability across Windows and Linux. I mean, that’s something that’s really unique to running applications that are just running great—existing applications and these modern applications. And so that’s been fantastic, and maybe the hidden gem from Intel is that Intel invested in hardware-acceleration capabilities via its integrated-graphics capability that allows us to do these workloads on deployed Intel-based CPUs today without having to upgrade to expensive GPUs. Why that’s so unique is typically you’re abstracted from access to the integrated graphics if you’re running an AI model in Linux via Windows, but that was a really hidden gem that Intel produced, and now we can run multiple AI models, multiple camera feeds on affordable, off-the-shelf technologies like Intel’s net platform, and Windows and Linux as well. So, an incredible array of technology that allows us to deploy these modern workloads and make sure they’re interoperable with existing infrastructure.

Scott Chmiel: And I’ll add one more thing too, is the investment companies have done in their Windows architecture, their infrastructure, they have people who understand Windows. They have people who manage those Windows devices. When you bring something into an existing application OT, there usually is an IT element there managing those devices. The value is you’re still utilizing that skill set, that expertise you bring to the table, and now you’re adding that modern, cloud-based, machine learning AI, so they’re leveraging additional value on top of their skill set. So, once again, all the investments that would’ve been made can be reused and can be built on top of.

Christina Cardoza: Now, I should mention the IoT Chat, and insight.tech as a whole, is owned by Intel, so always love hearing about what Intel is doing and how they’re helping others in the industry. But I want to expand a little bit more on EFLOW—what the process was getting this into the port system, how hard, or where the initiative came from, and then the benefits that we’ve seen in the supply chain because of EFLOW. Steen, I’ll throw that one to you.

Steen Graham: So, in regards to the port system, it’s quite an extensive, multiyear process of RFIs and RFPs. So, this was technology—the EFLOW technology—was released just late last year, so we really built this solution within the past year and are still in the engagement phase on a number of opportunities with ports to demonstrate in RFP and RFI mode. So, even though the technology’s highly ready to solve some of these challenges, as you alluded to earlier, the multilayered decision making process is what you kind of run into on these sophisticated situations, and so that’s certainly something we’re working through. From a business-outcome perspective, the problem that we were trying to address is the bottleneck associated with the turn times: the operational-technology metric of how fast the containers can be loaded and unloaded. You may have seen at the height of the crisis you would find many times the truck drivers blaming the crane operators for being unfamiliar with their workstation because the union assigns them a workstation based on their seniority on a daily basis, and as COVID was in place as well, it’s even more dynamic union positioning. Meanwhile, the crane operators were not happy with the truck drivers because they were no-showing about 50% of their appointments, and because of the goods shift in US consumption from services to physical goods, we had peak traffic of containers as well. So you’ve got more people than ever on the port. And so the business problem that we’re trying to solve is how do we optimize those turn times of those cranes? How fast can they be unloaded and loaded? How do we make sure the truck’s in the right place at the right time, and efficiently does that while providing enhanced safety experiences for the workers on site as well. And we also are tracking CO2 emissions based on the fossil fuel consumption of those cranes. 
Although, I should say there is a mix between hybrid cranes as well as diesel cranes, so many ports have a nice mix—like, for example, the Port of Houston has about one-third hybrid cranes and two-thirds diesel cranes. And so that’s another metric that we’re tracking, is how efficient are those hybrid cranes? Are the operators trained and familiar with that as well?

Christina Cardoza: So, a lot of the things that we’ve been talking about—this problem between Linux versus Windows, and legacy technology versus new technology—we’ve been talking about these in regards to ports and smart ports. But these sound like challenges that every industry deals with in some shape or form. So, Scott, I’m wondering where else do you see EFLOW being used outside of ports, and what other challenges or benefits is EFLOW looking to solve and giving to businesses?

Scott Chmiel: Well, I think, depending on who the audience is, it’s going to be different verticals they’re focused on. I know there’s a strong focus on retail from both Microsoft and Intel: the opportunities there to do workload consolidation. I think the example we’ve shared or talked about a little bit is a consolidation of surveillance and point of sale, where one machine could do both or, once again, new services you couldn’t do before. Now that you have a visual element with a transaction, what kind of value can you generate out of that? What kind of benefit to the business, benefit to the customer? Obviously the transportation, when we talk about smart ports, we can expand on that and say transportation in general—whether it’s warehouse, workflow management, the smart port, we’re talking cranes—but there’s a lot going on in ports of entry and warehouses, and if you think about just a logistical hub where trucks are coming and going, and how much volume of material is exchanged between trucks or put on shelves. There’s the accelerator—we did both talk about efficiencies—but also safety using AI—visual AI to, for instance, detect where people shouldn’t be—giving warnings: things are moving fast; things are big that are moving; people and those machines don’t always interact very well if they’re in the wrong place.

So there’s lots of opportunities, and I think transportation, industrial, and retail are a lot of them. I’m sure somebody innovative in a different vertical, like medical, has applications as well. But the great thing is the people who understand those industries, the people who have the IP and the investment in those industries, they understand the problems they’re trying to solve, and a lot of that—the code, the underlying technology—can be repurposed for those verticals. And, once again, a lot of the work’s already done for them with the accelerators and the tools that Microsoft and Intel with OpenVINO have provided to be leveraged for their use. It’s pretty exciting, what they can do, and I think if you get Steen and me going, we can talk about it and get pretty excited, and start making up solutions, but I think the end customers, the people who live and breathe in those industries, know the kind of problems, and, like I said, being aware of the new technologies and what EFLOW can do, they’re going to start coming up with some pretty exciting ways to apply it—from very simple, solving simple problems, to more complex problems they might have on their sites or locations or their vertical.

Steen Graham: Just to maybe extend Scott’s thoughts on a couple of these industries, and maybe starting with healthcare where he left it, is if you look at medical-imaging equipment, such as ultrasound, a lot of ultrasound vendors are Windows-based applications, but they’re looking to add new, AI-based features. And so, for example, to make sure that mother and child are safe through pregnancy, you can look at the fetal position of the baby as it exits the womb. And that’s something that you can take an existing Windows-based ultrasound equipment with, and then overlay modern deep-learning capability as well. Another example is anesthesiologists occasionally have challenges finding the veins on folks, and it could be a material difference if you obviously don’t hit the vein correctly. And so you can use ultrasound equipment—again, Windows-based with modern deep-learning skills—to determine the accuracy of the vein, and then pinpoint the associated anesthesiology inputs with that one as well. So that’s a couple of use cases in medical where you have this demand to run—my existing ultrasound works great on Windows, and now I’d like to overlay some modern deep-learning capability as well.

And so that’s a good use case in factories—I think in manufacturing Windows is pervasive, but we’ve also seen just an incredible demand around things like computer vision to do defect detection in line in the manufacturing-process flow, and I think it’s an incredible use case. And I think perhaps kind of the hidden cool factor in that computer vision for quality detection use case, if you do in-line AI defect detection, you can actually find products that are essentially having quality issues earlier in the manufacturing flow, which is nice because of course you can address product-quality issues. But simultaneously, if you address that earlier in the flow, you use less fossil fuels to actually run through the rest of the process; you use less raw commodities as well, and so there’s a sustainability effort that’s quite impressive in doing some of those use cases as well. And again, huge install base of Windows and manufacturing facilities as well, so across transportation, healthcare, manufacturing, there’s really some incredible opportunities to pair modern Linux-based, cloud-native apps and AI on top of Windows via this technology. So, kudos again to the teams developing it and continuously updating it at Microsoft.

Christina Cardoza: Absolutely. I love those examples because it really shows these benefits go beyond businesses and really have a ripple effect on society. Now, you guys have mentioned Intel and Microsoft, and you’re from Scalers AI and Arrow, so I’m curious about the partnership that goes into this. Typically you don’t think that businesses synergize or work well together, but really in this IoT landscape—and when you look at supply chain as another example—it really takes a team effort. So how are you guys working together, and working with some of those other partners you mentioned: Microsoft and Intel?

Steen Graham: I think Arrow is kind of a natural fit for working across partners and solving these multipartner solutions, because they’re one of the leading solution providers in the industry. And Arrow is always, I think, looking to figure out how they can make one plus one equal three across their partnerships. And that’s really where Scott actually came to us with an incredible idea about showcasing the value of this underlying technology, which is quite primitive relative to deployment with a real business outcome, and we were able to take those technologies from Intel and Microsoft, and a number of open source projects as well. There are many others we would call modern microservices in many cases—full open source projects, so I have a hard time calling them microservices because they’re a macro set of code. But we were really building upon numerous open source projects, numerous technologies from Intel and Microsoft and others, and building that solution code. And where Scalers fits in is really understanding how to fit all those things together into a solution and providing that high-fidelity enterprise AI solution, as well as building the custom AI models for deployment. And so, Scott, would you like to add anything?

Scott Chmiel: Arrow’s one of those companies that, unless you’re in electronics, a lot of people don’t know who we are. We’re over $30 billion in 2021, and we cover quite a wide breadth of different things Arrow does, from the enterprise to components and everything in between. The thing that Arrow focuses on, and the intelligent solutions of Pacific group that I’m in, is we call ourselves an orchestrator and aggregator, bringing the different technologies—the trusted advisor role orchestrating different partners. Because, once again, we talked about the complexity—it’s hard for one company who has a vision or a challenge to be able to necessarily have the resources, the skill sets in house to do everything for an end-to-end solution. Obviously they might have a component—that might be their IP, their technology, a device, a sensor, something that does something really well—but, as I said, the solutions, especially in operational technology, are wide and deep within the end use.

So what Arrow looks to do is work with the end user, the prime, whoever’s coming up with the solution, and bring in appropriate partners. Those could be technology partners, they can be existing technology-solutions systems, the compute from Intel, different form factors, the IO—and if the customer has something which doesn’t exist in the market, help them build it, help them pick the right solutions, not only for their end use, but looking at the longevity, the overall life cycle of that solution. Smart ports—that’s not something that’s going to be deployed and done in a couple of years. And, once again, you don’t want something on each crane being different. That’s the important point, something that’s repeatable. And, obviously if you can do it in smart port—Port of Los Angeles—I’m sure you want to move it down to the Port of LA—move it to, excuse me, Port of Houston, other locations, if you can reuse that. There are even inland ports. We think of ports as next to water—no, there’s a lot of hubs for transporting and redoing containers throughout the US, and where they distribute from the rail systems, from the trucking systems—if you can reuse it, the company who’s developing that solution or who is bringing these pieces together can reuse it, and, once again, create more scale, create more value across the ecosystem.

So Arrow works on orchestrating, bringing the appropriate partners when a company deploys a solution. I’m going to go back to operational technology—security’s a huge thing. There are a lot of options in what you need to do to deploy into a system—making sure the sensors, the gateways, all the devices are secure in that operational technology. Because, keep in mind, it might be not just their network, it might be on a facilities network. How are you securing those devices? How are you making sure that data’s safe? That the devices aren’t vulnerable to people with devious intentions. And, of course, just making sure it works. That’s very important. So, once again, Arrow is orchestrating—whether it’s services, whether it’s components, whether it’s helping them with design—helping them make the right technology solutions. That’s really important. Talking about Intel—the long relationship we have with Intel, there’s a lot of products to look at. There’s a lot of ways to do things. We want to make sure that the customers are educated and selecting the right product for their solution, for their usage model.

Christina Cardoza: Great. Well, I can’t wait to see what else comes out of your partnerships with each other and Intel and Microsoft, as well as where else EFLOW is going to make some of these innovative transformations for the industry. But, unfortunately, we are running out of time on the podcast. Before we go, I know we’ve covered a lot, I just want to throw it back to each of you quickly if there’s anything that we didn’t go over that you think it’s important for our listeners to know, or any final thoughts or key takeaways you want to leave them with today. So, Steen, I’ll throw it to you first.

Steen Graham: Thank you, Christina. Well, it’s been a pleasure talking with you and Scott about this transformative technology. I think, to the listeners, I think as we talk about the cost of development and software engineering and really solutioning these, it’s incredibly important that we write the code to integrate these partnerships. And there’s so many incredible companies with great technologies, but what many times is missing is the single line of code that connects the APIs to really drive transformation as well. And so I think that’s really important that we take that step to fortify our partnerships so we can drive these industry transformations. And, as an industry, we’ve really got to come together on the deployment challenge, because building capabilities in the cloud is fantastic, and it’s really affordable and easy to do these days. And so applied AI is as affordable as it’s ever been, as democratized as it’s ever been. But where challenges occur is deploying it in the physical world, and the continuous learning, the transfer learning, the continuous annotation requirements to do that. And so, I think as an industry we’ve got to continuously focus on deployment and DevOps capabilities as well. And finally, I think, although we’re getting really good at synthetic data and creating AI models with small data sets, if we really want to move society forward, we have to be able to build models with high fidelity on good data sets and do it in a way with explainable AI so we know why the AI is making its determinations, which is very frequently required for many RFPs these days because we want to make sure we know why the artificial intelligence is coming to its conclusions to make sure it’s as inclusive as possible and accurate as well. So those are certainly the three areas that I would emphasize here. You’ve got fantastic listeners here, Christina, and it’s been a pleasure. Thanks for having me.

Christina Cardoza: Great points, yeah, absolutely. And, Scott, any final thoughts or takeaways you want to leave today?

Scott Chmiel: I’m always amazed when I talk to companies in specific verticals—whether it’s somebody running a warehouse, somebody in a port, somebody in surveillance or whatever, the medical industry—the amount of knowledge they have with what they do, their particular industry—their particular solutions are amazing. And as these solutions get more complex, I want to make sure people understand there’s no need to go alone on some of the more complex solutions. It’s no longer the days of building a device that does one thing. It’s not an MRI which just does visioning; it’s how it integrates with that hospital. It’s how you leverage additional technology—whether helping the visual inspectors with all that vision scanning, finding and being zeroed in on certain things. And let’s talk honestly, with machine learning AI you’re not only bringing the expertise of one person; it’s how you train that and the thousands of models and thousands of things you can leverage. Companies don’t need to do it alone. They really can’t do it alone when they do more complex solutions, and Arrow has a lot that they can leverage to help them kit that end-to-end solution. And, once again, learning about the technology, learning about them, them learning about what can be done, what they hadn’t thought about doing, or the thought was outside the scope of what they can actually accomplish in their solution. The bar is moving down for what can be done, and it’s amazing—business solutions that couldn’t be solved in the past can be. It’s just getting the right partners, technology, hardware, ready-made solutions to be available for you to look at and to leverage, get your time-to-market faster, and it can save you money on—once again, why reinvent the wheel if there’s a piece of your solution that you can reuse from somebody else?
It’s the final solution that’s brought together, that’s the important thing, and most likely the customer that knows that vertical is trying to solve the problem. They’re the experts on that, but let’s help them get that, and solve those problems with what’s out there.

Christina Cardoza: Yeah. I love that, that last thought: the bar is getting low for what we can do, but expectations are high for what others think we should do. So having the right partners is definitely going to help you be successful creating these new and innovative apps. So with that, I just want to thank you both for joining the podcast today. It’s been a pleasure talking to you. And thanks to our listeners for tuning in. If you liked this episode, please like, subscribe, rate, review, all of the above, on your favorite streaming platform. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Balanced Approach Gives Semiconductor Equipment New Life

Semiconductor manufacturing equipment is some of the most expensive in the world, with the average price of each piece of machinery easily exceeding $1 million. Certain instruments, like extreme ultraviolet (EUV) scanners, can potentially cost hundreds of millions.

As you can imagine, semiconductor manufacturers want to protect these investments. And if the initial price tags weren’t enough of a reason, secondary markets have formed around these commodities to extend their operational life for decades after initial deployment.

Obviously, these two factors incentivize semiconductor equipment owners and manufacturers to keep this machinery running optimally for as long as possible.

This desire has created an interesting dynamic in the electronics industry, where some of the most advanced technology in the world is produced using equipment that’s many generations old. Of course, these systems can be upgraded just like any other electronic system. But as they age, retrofitting semiconductor manufacturing equipment to keep pace with today’s performance standards becomes a larger and larger task.

For example, adding state-of-the-art electron beam or EUV instrumentation to a decades-old lithography machine enables fabrication of chips with miniature silicon transistors. But it also requires a massive increase in system throughput to support passing large amounts of data between the equipment’s control units on tight, deterministic schedules.

Moving Semiconductor Manufacturing from Last Gen to Next Gen

To offset the cost of perpetual upgrades, semiconductor equipment manufacturers have designed control systems around industry standards like VME for many years. First developed in the early 1980s and available from multiple vendors, the VME standard defines a consistent PCB, connector, and signaling system that allows existing boards to be substituted for new ones when damage occurs, or upgrades are required.

“You must balance 30-year-old VME #technology with the newest #CPU technology. This is the tricky thing our team is doing. Basically, they’re putting brains of super geniuses in the bodies of dinosaurs.” – Luca Varisco, Advanet via @insightdottech

In theory, this means a legacy VME control board can be substituted for one with more performance to handle the higher bandwidth required by modern semiconductor manufacturing apparatuses. But making the most of this performance requires more than just selecting a board with an advanced processor, slotting it into the system, and calling it a day.

Acquired in 2011, Advanet is a branch of the Eurotech Group that specializes in ODM services for cutting-edge systems like semiconductor manufacturing equipment. Headquartered in Japan, it works directly with many of the world’s leading semiconductor equipment manufacturers to help integrate new processor technology, legacy hardware, and communications protocols—and some of the most sophisticated instrumentation on Earth.

“In a way, you have a two-speed machine,” explains Luca Varisco, Head of Product Marketing at Advanet. “There are some parts of the machine that stay around for 30 years. At the same time, there are things that are very, very advanced and change frequently to follow the decreasing wavelengths of ultraviolet light. The shorter the wavelength, the more control you need of the machine that focuses the ultraviolet beam.

“Because of that, the CPU must also advance,” he continues. “So you must balance 30-year-old VME technology with the newest CPU technology. This is the tricky thing our team is doing. Basically, they’re putting brains of super geniuses in the bodies of dinosaurs.”

A big part of that balancing act involves the I/O subsystem, which must remain rigidly deterministic but evolve with the CPU to prevent data bottlenecks from forming between low-speed interfaces and high-performance processors. But at the same time, the entire control mechanism must stay within the power consumption and thermal dissipation envelopes of systems that predate the 2000s.

The Chips that Make the Chips

Along with operational stability and long-lifecycle support, these were key requirements in a project Advanet accepted to upgrade the control subsystem of an electron beam (e-beam) lithography machine for a leading semiconductor manufacturing equipment provider.

The primary responsibility of the particular control subsystem is to focus lenses that concentrate the lithography system’s e-beam onto a target substrate. Doing this with nm-scale precision requires extreme computational horsepower, as well as coordination of multiple control endpoints around the machine. Of course, there are countless legacy components around the machine that must interface with the control system as well.

Looking to blend the old and the new, Advanet started developing a solution around new Intel® Xeon® D-1700 processors (codenamed “Ice Lake-D”) that would eventually become the Advme8088. A 6U, 1-slot-wide VME board, the Advme8088 features Xeon D-1700 processors with up to 4 cores but TDP ratings no higher than 50 W.

Just as important, the Advanet card includes the trusted VME connector alongside modern interfaces like Serial RapidIO Gen 2, PCI Express Gen 3, and multiple Gigabit Ethernet ports.

The lithography control subsystem in question is backplane-based and consists of a VME chassis with eight to 12 Advme8088 cards, two or three custom optical boards also developed by Advanet, and a power supply. The Advme8088 and other cards plug into the backplane using VME connectors, which transmit signals between the boards and legacy components in the machine. An FPGA on the backplane itself serves as a flexible VMEbus controller that protects against obsolescence and provides some signal preprocessing.

Meanwhile, the Advanet card’s newer interfaces are what maximize the capabilities of its onboard Xeon D-1700 processor. In the lithography subsystem project, for example, Intel® Time Coordinated Computing (Intel® TCC) supported by the Xeon D-1700 and configured to work with the Advme8088’s GbE interfaces synchronizes Ethernet packets flowing between control endpoints with sub-200 µs latency.

This same control network manages the position and orientation of electron beam lenses as well as other step stages on the machine.

New Life for Legacy Lithography Equipment

Despite seemingly insatiable demand for performance, VME products’ position at the heart of many control systems will keep them in semiconductor manufacturing equipment for the foreseeable future.

Of course, no matter how creative your engineering team is, all technologies—even those based on standards—eventually fall victim to obsolescence. You can find evidence of this in the lithography control system’s backplane FPGA, which serves, in part, as an alternative to VMEbus controller chips that are being supplied by fewer and fewer vendors while becoming more and more expensive.

For this reason, the Xeon D-1700 processor is available in extended-temperature, long-lifecycle embedded variants supported for seven years or more. For its part, Advanet commits to supporting its solutions for decades.

“It looks strange because you see VME and say, ‘30-year-old technology? What’s that?’,” Varisco explains. “But actually that’s exactly what’s missing in the semiconductor market, because you can’t throw it away.”

And it turns out, you wouldn’t want to. After all, when you consider the primary and secondary markets, it’s part of a multimillion-dollar machine.   

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

SIs Deploy Interactive Digital Signage with Ease

AI, machine learning, and the IoT have created a retail technology marketplace in which virtually anything is possible. There’s just one problem: An abundance of technology makes connecting the dots a challenge. Today’s systems integrators need to assemble complex solutions for their clients with support services that enrich the relationship. Hardware and software providers, on the other hand, need help finding new uses and markets for their products.

Enter the matchmakers of innovation: modern technology distributors. Using their deep product knowledge and expertise, they unite interactive digital-signage SIs with manufacturers and ISVs, helping each grow their business by gaining access to an ecosystem of needs and solutions.

“Today’s reseller has to have a tech stack beyond what they had 5 or 10 years ago,” says Dean Reverman, Vice President of Marketing at Bluestar, Inc., a specialty electronics distributor. “They have to walk in the door with a broader number of solution sets. How do they do that? Smart systems integrators lean on value-added distributors.”

While traditional distributors pick, pack, and ship, Reverman says value-added distributors move beyond the basics by leveraging insider knowledge.

“Our unique disposition gives us optics,” Reverman explains. “We understand what value-added resellers are going through on a day-to-day basis, because we’re interfacing with them on a day-to-day basis. We understand what product vendors are trying to accomplish with their strategic goals and their product developments. And we understand what the software community is bringing to the table by creating unique solutions they want to get into the ecosystem.”

To facilitate matches, BlueStar launched its TEConnect program, bringing together hardware and software providers to generate new solutions, such as digital-signage kiosks and interactive digital signage.

“One of the things we try to do is tap into the software community and enable them in the channel,” says Reverman. “And a lot of what we distribute would be a doorstop if it didn’t have a piece of software on it.”

Intel® plays a major role in BlueStar’s matchmaking process. “We now have a partner when our customers come to us with early-stage solution development,” says Reverman. “This is so critical for IoT. Having the ability to lean on Intel’s engineering prowess is how some of those solutions get built. With our TEConnect program we’ve built a lot of camaraderie with Intel, which is how we bring software development companies into the channel and enable them to sell their products.”

“While traditional distributors pick, pack, and ship, value-added #distributors move beyond the basics by leveraging insider knowledge.” – Dean Reverman, @Think_BlueStar via @insightdottech

Creating Digital-Signage Kiosks

A good example of the ecosystem at work is BlueStar’s partnership with Elo Touch Solutions, Inc., a leading manufacturer of interactive touchscreens. The two companies work together to create and promote innovative industry-specific solutions. One such product is Appetize, a point-of-sale system made specifically for stadiums that processes guest transactions and tracks inventory at scale. The solution includes fixed and portable terminals, kiosks, and handheld devices.

BlueStar assembles the components that include Elo Touch screens, stands, and printers. Then the distributor takes it a step further, offering installation services that help SIs deploy the complete solution.

“With Appetize, for example, you could be deploying hundreds of thousands of units to locations,” says Karey Linkugel, Business Development Manager for BlueStar. “That takes time, and someone is going to have to put it all together. Before the product even leaves our warehouse, BlueStar takes the Elo screen, puts it on a stand, runs the cabling, and installs the software. Then we put it in a box and ship it to the location.”

And to generate demand, BlueStar and Elo Touch combine marketing dollars to run campaigns and grow their businesses.

“Manufacturers like Elo want to work with somebody who knows the business and the technology and who has an experienced sales team that can hold their hands and run down the street,” says Linkugel.

“We value the opportunity to consult one another as the environment changes, and determine what fits best for both companies,” adds Kim Davis, Senior Director, Channel Sales for Elo Touch Solutions. “Their value is tremendous for our business as they manage it very well and continue to grow our product line.”

After the Match

In addition to bringing together hardware and software providers, BlueStar supports its customers with custom configuration, financing, marketing, and service support.

“Resellers are cash strapped to a certain degree,” says Reverman. “They go out and win deals, but they don’t always have the ability to finance these opportunities. We offer unique financing options that enable resellers to go to market with solutions.”

Distributors also gain value from the relationships, including the ability to offer dynamic new ways to package solutions. And they can promote the innovative new uses to market and grow their own businesses.

As tools and use cases become more sophisticated, value-added distributors like BlueStar are the retail solution experts, offering the right technology and services for SIs, helping them gain access to new markets and opportunities.

“We’re helping our partners with the latest proven technologies—opening doors so that they expand upon the business that they’re doing today,” says Reverman. “Maybe it’s machine vision, maybe it’s back-of-house tagging and tracking assets. Whatever it is, our job is to make it happen through our ecosystem. It’s like a Swiss army knife, enabling solutions and taking them to market.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI and Machine Learning Transform Cancer Treatment

Screening for lung cancer—the second-most common type of cancer worldwide—is a complex process. Doctors use Low-Dose Computed Tomography (LDCT) to scan patients and produce hundreds of 2D images. Physicians review them to identify the location and volume of tumors, which they then evaluate in context of the patient’s medical history, lab work, biopsies, and other information, all of which help determine the stage of the illness and the best course of treatment.

LDCT is an essential tool in fighting the deadly disease, but it’s also a slow, painstaking process that leaves room for manual error. A new approach uses edge processing, AI, and secure data sharing to help doctors arrive at an accurate diagnosis much faster and start treatment sooner. Over time, it could improve understanding of lung cancer and other diseases, and spur development of more effective therapies.

As medical #AI #technology improves screening efficiency, suspected cancers can be identified earlier, prompting treatment #workflow to begin sooner. @IPC_aewin via @insightdottech

Elevating Rates of Detection

The LDCT AI assistant solution, developed by network hardware and edge server producer AEWIN Technologies Co., Ltd., uses three advanced technologies in concert to produce faster results. The most important is fast computing power. Based on 3rd Gen Intel® Xeon® Scalable Processors, the AEWIN SCB-1932C edge server can process hundreds of LDCT images on-site in near-real time.

“To process that many images, the CPU must be extremely powerful,” says Benjamin Wang, Director of Sales and Marketing at AEWIN. “And some Intel® chips also contain built-in security features to help protect medical data from hackers.”

The AI assistant platform uses the Intel® OpenVINO Toolkit to analyze patient LDCT images. By working through large volumes of scans, the system can quickly reduce the number of images a doctor needs to consider from as many as 600 to just a handful.

“AI applies inference to detect abnormal lung nodules and categorize them, so doctors only have to examine high-priority scans, which helps to improve the efficiency of the diagnosis,” says Tiana Shao, Product Marketing Manager at AEWIN. “Medical AI architects can convert models from hundreds of supported AI frameworks to run easily on the AEWIN edge computing platform, providing a significant improvement in performance.”

As medical AI technology improves screening efficiency, suspected cancers can be identified earlier, prompting treatment workflows to begin sooner. Increasing the availability of such AI systems can accelerate development of early-detection solutions and lower the total cost of care.
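The image-triage step described above can be sketched in a few lines of Python. The scan IDs, confidence scores, and review threshold here are invented for illustration; a real system would take its scores from the inference pipeline.

```python
# Illustrative triage: keep only scans whose nodule-detection score
# exceeds a review threshold, so a doctor sees a handful of images
# instead of hundreds. All numbers below are made up for demonstration.

def triage_scans(scan_scores, threshold=0.85):
    """Return (scan_id, score) pairs worth a doctor's attention,
    highest-confidence detections first."""
    flagged = [(sid, s) for sid, s in scan_scores.items() if s >= threshold]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

# e.g. 600 scans, most with low nodule scores and two worth review
scores = {f"scan_{i:03d}": 0.05 for i in range(600)}
scores["scan_042"] = 0.97
scores["scan_311"] = 0.91

print(triage_scans(scores))  # [('scan_042', 0.97), ('scan_311', 0.91)]
```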

Improving AI Models

Recently, AEWIN started using a new platform that could increase effectiveness of detection, better predict the course of disease, and lead to improved treatments.

For years, a web of disparate and evolving patient privacy laws has stymied medical professionals’ ability to pool and analyze their data. With the new Qisda Federated Learning Platform, hospitals around the world can securely share important AI model parameters without transmitting any sensitive personal patient information.

This enormous influx of data will greatly benefit AI models, which depend on analyzing vast data sets to improve their capabilities. “With open and secure federated learning running on AEWIN-based IT infrastructure, hospitals can build and scale better models together,” Shao says.
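The parameter-sharing idea can be sketched with a minimal federated-averaging step, in the spirit of the FedAvg algorithm: each site contributes only model weights and a sample count, never patient records. The weight vectors and sample counts below are hypothetical; Qisda's actual protocol is not described here.

```python
# Minimal sketch of federated averaging: each hospital trains locally and
# shares only model parameters (here, a flat weight vector). The central
# server averages the parameters, weighted by each site's sample count.

def federated_average(updates):
    """updates: list of (weights, n_samples) tuples, one per site."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Three hypothetical hospitals with different data volumes
site_updates = [
    ([0.2, 0.4], 100),
    ([0.4, 0.2], 300),
    ([0.3, 0.3], 100),
]
print(federated_average(site_updates))
```

Larger sites pull the global model toward their local estimate, which is why diverse participation (discussed next) matters so much.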

Machine learning also requires diversity to reduce bias. When hospital data is siloed, geographic and demographic range is limited. Even the type of medical instruments doctors use can affect results. Training AI models on a growing pool of diverse data will drive continuous improvements in accuracy. As AI systems correlate medical procedures with results on a large scale, they will help doctors better understand disease progression and decide which treatments are most effective in various scenarios.

AEWIN plans to deploy the Qisda platform and its smart-imaging solution at two major university hospitals in Taiwan, along with several smaller local facilities. Procurement of new AI infrastructure is often a challenging investment for medical institutions, but leveraging idle compute on existing IT infrastructure can alleviate the high CAPEX of medical AI development. Alternatively, Qisda offers cloud solutions that can be implemented on-premises at medical centers. For local hospitals that can’t afford high-performance equipment, the business model lets them lease the solution instead of buying the whole system.

“Medical AI paves the way for modern business models such as subscription or pay-per-use,” Wang says.

As AI algorithms ingest more data, small hospitals and their branches will benefit equally from the learning model’s improved accuracy and predictions.

So far, AEWIN uses its system for lung cancer, but use cases are likely to broaden. “Detection of lung nodules can achieve high accuracy with current technology. We take this as a start and look forward to further potential applications,” Shao says. “With the cooperation of various hospitals all over the world, smart healthcare can rapidly evolve. We expect to see many new applications in the next few years.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Accelerating the Developer Journey: AI at the Edge

Building AI applications to run at the edge can seem like a formidable undertaking. But with the right development tools and platforms like the Intel® OpenVINO Toolkit 2022.1, it is easy to get started, streamline your effort, and deploy real-life solutions.

For a deep dive into the operational and business value of edge AI, I spoke to Adam Burns, Vice President of OpenVINO Developer Tools in Intel’s Network and Edge Group. Burns talks about the strategy in bringing new capabilities to OpenVINO 2022.1 and making it easier for developers to focus on building their applications. Our conversation covered everything from where to get started to solving the biggest AI developer challenges.

Let’s start by discussing what developers should know about building edge AI solutions.

At the end of the day, the edge is where operational data is generated. It’s in a store or restaurant where you’re trying to optimize the shopper or the diner experience. In medical imaging, it’s where an X-ray is taken. Or take a factory that wants to increase yields and manufacturing efficiencies.

Then you need to look at how AI marries up with an existing application. For example, in a factory, you’ve got a machine that’s running some part of the operation on the assembly line. You can use the data coming from that application to do visual inspection and ensure the quality of goods. Or you can use audio and data-based machine learning to monitor machine health and prevent failures. It’s this combination of how you use the data for the application and then use it to augment what the system is doing.

And the edge is very diverse. You have different sizes of machines, costs, and reliability expectations. So when we think about edge AI, we’re thinking about how we address a diversity of applications, form factors, and customer needs.

What was the strategy and thinking behind the OpenVINO 2022.1 release?

When we first launched OpenVINO, many of the applications for edge AI were focused on computer vision.

Since then, we’ve been working with and listening to hundreds of thousands of developers. There are three main things that we’ve incorporated into this release.

First and foremost is the focus on developer ease of use. There are millions of developers that use standard AI frameworks like PyTorch, TensorFlow, or PaddlePaddle, and we wanted to make it easier. For example, somebody is taking a standard model out of these frameworks and wants to convert it for use on a diverse set of platforms. We’ve streamlined and updated our API to be very similar to those frameworks and very familiar for developers.


Second, we have a broad set of models and applications at the edge. It could be audio, it could be natural language processing (NLP), or it could be computer vision. In OpenVINO 2022.1, there is a lot of emphasis on enabling these use cases, and really enhancing the performance across these diverse systems.

The third is automation. We want developers to be able to focus on building their application on whatever device or environment they choose. Rather than requiring a lot of parameters to really tweak and get best performance, OpenVINO 2022.1 auto-detects what kind of platform you’re on, what type of model you’re using, and determines the best setup for that system. This makes it very easy for developers to deploy across a wide range of systems without having to have optimization expertise.

Can you tell me more about how audio and NLP AI are being used today?

Let’s start with a client example and then we’ll go to edge. A lot of people are using video conferencing platforms today. In the background, those platforms are processing what we say so we can do closed captioning for clarity and assistance where needed. That’s the natural language processing.

They also do noise suppression. If somebody comes to work on my house and has a blower on high speed behind me, the video conferencing platform is going to do the best it can to capture my voice and suppress those other sounds.

When we look at the edge, similar types of workloads are critical. Automating ordering in dining situations and retail stores has been a big focus. NLP can be used to process orders coming into a drive-through, make sure those orders are accurate, and then display them back to the customer.

Audio processing can be used in a factory to gauge machine health, especially in motors and drives and things like that. You can put an audio signal on many types of equipment and there’ll be certain audio signatures that can be detected, which are indicative of failure or anomalies.

So you start to get more defects noticed through computer vision while at the same time your audio signature is picking up an abnormality in a motor. That’s a sign to flag a potential repair or initiate some type of a corrective action.
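The audio-signature idea Burns describes can be sketched in a few lines. Here a window's RMS energy is compared to a healthy baseline; the signal, window size, and tolerance are invented for illustration, and real systems use richer spectral features.

```python
import math

# Toy sketch of audio-signature anomaly detection: compute the RMS energy
# of fixed-size windows and flag windows that stray from a healthy baseline.

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

def flag_anomalies(samples, window_size, baseline_rms, tolerance=0.5):
    """Return indices of windows whose RMS deviates from the baseline
    by more than tolerance * baseline."""
    flagged = []
    for i in range(0, len(samples) - window_size + 1, window_size):
        window = samples[i:i + window_size]
        if abs(rms(window) - baseline_rms) > tolerance * baseline_rms:
            flagged.append(i // window_size)
    return flagged

# A steady, healthy motor hum (RMS ~1.0) with a loud burst in window 2
signal = [1.0] * 8 + [1.0] * 8 + [3.0] * 8 + [1.0] * 8
print(flag_anomalies(signal, 8, baseline_rms=1.0))  # [2]
```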

What are the biggest challenges developers face when building AI apps today?

One of the chief problems is that a lot of the research around AI and the existing models are built in a cloud environment where you have almost unlimited compute. Now at the edge, a lot of developers are working in constrained environments.

How do you take applications and capabilities out of research and get them into deployment? One of the things we’re doing is making it efficient and economical enough to run on the edge, so the value you get out of deploying is greater than the cost of deploying. OpenVINO gives developers the ability to leverage some of the most advanced AI applications but in a way that’s efficient enough to really deploy on the edge.

For developers who want to learn more and do more, where can they get started?

The place to start is openvino.ai. You’ll find getting-started guides that walk through model optimization, access to Jupyter notebooks, different types of applications, and code samples. And, of course, you can download OpenVINO for free.

For those who want to do work in a hosted environment or want to prototype across different types of Intel systems, we have an IoT DevCloud. In minutes you can log in and have a session running with OpenVINO. There’s the same access to those notebooks and code samples that allow people to do something right away, whether it’s to optimize a network or run a specific type of application on their data sets. There’s access to a bunch of different model types and applications, and people can use their own sample data as well.

And finally, we have the Edge AI Certification Program. This is more about teaching the application of AI at the edge, while at the same time you’re using OpenVINO as a tool.

I think all three of those are great places to get started depending on where you are in your development journey.

Is there anything else you would like to add?

There are so many applications where data’s being generated at the edge. And that data can drive savings, customer experience, or operational efficiency when combined with AI. OpenVINO is all about taking what’s already working on the edge from an operational perspective and enhancing it with AI.

A lot of AI today, especially in the cloud, is deployed on expensive accelerators. In many cases, these solutions are too hot or too expensive. OpenVINO helps solve that problem by tuning these AI workloads and these AI networks to run efficiently on standard off-the-shelf Intel CPUs, which today have great AI performance and are ubiquitous in deployment around the world—meaning there is no need to buy something extra. That opens a whole range of new opportunities where you couldn’t deploy these applications a few years ago because they just weren’t efficient enough or they just weren’t cost-effective enough.

We’re trying to bring more developers to the edge with OpenVINO and really make sure there’s as much investment in these technologies that we think are incredibly valuable in terms of customer experience, saving money, improving manufacturing, and getting more goods out there.

From that standpoint, we’re trying to solve two things with OpenVINO. One is making it economical enough to deploy. And then really democratizing AI by making it more accessible from a developer perspective, bringing more developers into the fold who can create and deploy these applications.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Smart-Building Tech Enhances the Education Environment

One lesson learned from the pandemic is the importance of in-person learning. K-12 students get more than education at school. They get vital resources and relationships that help them thrive. Everyone—administrators, teachers, parents, and students—wants their campuses to provide a safe and healthy atmosphere.

With a broad set of needs—from improving indoor air quality to addressing physical security—school administrators need to make prudent investments to achieve their goals.

Through its expertise in smart-building technologies, Honeywell International Inc., a global provider of technology solutions, works with schools to tackle these challenges.

Schools often don’t have the budget for major capital expenditures. It’s essential for administrators to be aware of funding sources for upgrades and to deploy solutions that leverage existing infrastructure. Honeywell’s smart-building platforms can work with systems already in place, for both cost and sustainability.


AI and Computer Vision Expand School Health to School Safety

The company helps schools implement a broad range of solutions based on IoT technologies, from AI and computer vision at the edge to centralized management in the cloud. For example, its smart-building platforms can help school districts maintain physical security while regulating air quality and managing energy, heating, and cooling systems—each through AI-powered video, sound detectors, and real-time analytics.

The system tracks KPIs and issues alarms to a dashboard that includes a view of floorplans and equipment, allowing predetermined workflows to make instantaneous changes that align with emergency protocols. It can adjust temperatures, manage lighting, expel bad air, pinpoint security incidents, and turn HVAC systems on and off at preset times.

Smart-building sensors continuously collect data on environmental and situational conditions. Data is fed into an analytics engine that triggers automated adjustments to support various goals—reduce energy consumption, improve ventilation, and enable fast response to security threats.

“That is how school administrators are prioritizing their solutions right now,” says Bruce Montgomery, Sr. Strategic Accounts for SLED & FED Markets at Honeywell. “They’re using preventive technologies, such as software, that can take feed from existing camera infrastructure, or can be retrofitted on an access control system. This can then help them keep their staff, visitors, and students safe and healthy within the building.”
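The sensor-to-analytics-to-action loop described above can be sketched as a tiny rules engine: readings flow in, and predetermined rules trigger automated adjustments. The sensor names, thresholds, and actions below are illustrative, not Honeywell's actual configuration.

```python
# Toy rules engine: each rule pairs a condition on a sensor reading with
# an automated action, mirroring the predetermined workflows described.
# Thresholds and action names are hypothetical.

RULES = [
    (lambda r: r["co2_ppm"] > 1000, "increase_ventilation"),
    (lambda r: r["temp_c"] > 26.0, "lower_cooling_setpoint"),
    (lambda r: r["occupancy"] == 0, "dim_lights"),
]

def evaluate(reading):
    """Return the list of actions triggered by one sensor reading."""
    return [action for condition, action in RULES if condition(reading)]

reading = {"co2_ppm": 1250, "temp_c": 22.5, "occupancy": 0}
print(evaluate(reading))  # ['increase_ventilation', 'dim_lights']
```

Keeping rules as data rather than hard-coded branches makes it easy to add new sensors and emergency protocols without touching the evaluation loop.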

Diverse Systems Work Together

Because most schools use older systems, open solutions are essential. Proprietary hardware and software can lock them into a specific technology, which may become outdated or obsolete. That’s one reason why Honeywell uses the Mercury platform—an open protocol supported by more than 20 vendors.

“Our goal is to make sure we can continue to use and improve their overall systems without having to purchase new hardware,” says Montgomery. “As it turns out, a majority of schools use or are navigating to open Mercury Hardware.”

The platform enables Honeywell to integrate a variety of building control and air quality systems. And on the security side, it supports and integrates with Honeywell’s and other vendors’ access and video—integrating a variety of systems into a single interface.

Intel® is a key partner in making this possible with high-performance processing at the edge and pre-built AI algorithms. For instance, Intel processors power Honeywell’s NVR rendering and decompression for video systems in security use cases.

“I’ve been using security and video and access for many years,” says Montgomery. “Never have I seen a higher level of performance in our video and processing technology than we are seeing right now with Intel.”

The company is also putting a strong focus on its Forge connected platform, which applies AI-based analytics to smart-building and security management systems. Such developments will help smart buildings get smarter and optimize safety. And that allows schools better control in running buildings that affect health and security while driving toward sustainability and efficiency.

Secure Buildings Are Healthy Buildings

More than ever, IoT technologies make integration of campus security, constituent safety, and healthier environments possible. Secure buildings advance healthier buildings.

“Customers are asking, ‘How do I manage building controls and HVAC in relationship to my security?’ We’re starting to see them really get some synergy together, and we’re joining those discussions right now,” says Montgomery. “And we see a whole new set of efficiencies when you start combining building controls across the entire set of campus needs.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Building a Sustainable Supply Chain

Related Content

To learn more about sustainability, read IoT Paves the Way Toward Smart Sustainability and this report on IoT and Sustainability.

Transcript

Corporate Participants

Christina Cardoza
insight.tech – Associate Editorial Director

Chris Cutshaw
C.H. Robinson – Director of Commercial and Product Strategy for Visibility Products

Jan Hellgren
VINCI Energies, Sweden – Director of Innovation

Presentation

(On screen: insight.tech logo intro slide introducing the webinar topic and panelists)

Christina Cardoza: Hello and welcome to the webinar on Building a Sustainable Supply Chain. I’m your moderator, Christina Cardoza, Associate Editorial Director at insight.tech. And here to talk more about this topic, we have a panel of expert guests from Axians and C.H. Robinson.

So, before we jump into our conversation, let’s get to know our guests. Jan, I’ll start with you. Can you tell us a little bit more about yourself and your role at Axians?

Jan Hellgren: Yes, my name is Jan Hellgren. So, I’m Swedish, and sitting in Stockholm. And Axians is part of the VINCI Group, VINCI Energies, and I’m the Director of Innovation at VINCI Energies. So, I’m working out of our Innovation Center here in Stockholm, it’s called the Hive – Innovations for Good. And what we basically are doing is that we try to find solutions that are good for our planet, and using IT and OT.

Christina Cardoza: And Chris, welcome to the webinar. Can you tell us a little bit more about yourself and C.H. Robinson?

Chris Cutshaw: Yes, thanks, Christina, excited to talk about sustainable supply chains today. My name is Chris Cutshaw, I’m Director of Commercial and Product Strategy at C.H. Robinson.

C.H. Robinson is a $17 billion logistics and supply chain solutions company. We help companies all over the world build automated processes and sustainable solutions within their supply chains. My focus is around our managed service division, which helps roll out technology and global control tower solutions for large companies that move products all over the world. I’m based in Seattle, been with the company 11 years, look forward to the conversation today.

Christina Cardoza: Great. Yes, can’t wait to get into a little bit of how C.H. Robinson is helping customers build those sustainable supply chains.

Let’s take a quick look at our agenda before we get started.

(On screen: slide outlining the webinar’s agenda with image of green plant)

Today, our guests are going to explore why sustainability matters and how businesses can begin down a sustainable path. What that looks like. How to be successful in this area. Sort of the technology partners in this ecosystem. Who can help and what are the right tools and technologies to get you there? And then lastly, we’re going to look at what these efforts are working towards in the future.

So, let’s get started.

(On screen: The State of Sustainability with  illustration of solar panels, windmills, and sustainable solutions in a city)

Here at insight.tech, we’ve been seeing a lot of organizations really start setting aggressive sustainability goals over the last couple of years with so many trying to reach net zero in just a few years.

So, Jan, I wanted to throw this first question to you. What has been behind this move and this adoption to become more sustainable?

Jan Hellgren: Beyond the obvious, that the climate is changing and very few people question that these days, I will say that what happened in Paris a couple of years ago with the Paris Treaty means we know that legislation is coming. We are going to be forced to change. But many companies that understand this is coming also understand that if you are really early in this, it can become something that can help your business.

So, either you are in the back seat waiting for somebody to tell you what to do, or you are in the front seat and you can make money out of it.

Christina Cardoza: Great, and we’ll dive a little bit into some of those regulations that you just mentioned. But I’m curious, Jan, if you can expand a little bit on where we are with sustainable businesses today. Like I mentioned, a lot of them are setting aggressive sustainability goals, but how realistic do you think those goals have been and how strong have those efforts been towards those goals?

Jan Hellgren: The goals are realistic in the sense that if we don’t meet them, it’s going to go really, really bad for us. So, I would like to mention that many organizations now are adopting this three-pillar economy with the PPP—people, planet, profit. And like in VINCI, VINCI Energies, every month and every quarter we’re going through our profitability goals, our yearly results, and we are measuring that and we have the KPIs for that.

On the people side, we now have – since a couple of years – we have KPIs of following safety and the health of our staff. But if this is going to – if they are going to be healthy over time, and we are going to be a profitable company over time, then we need to address the planet part also.

And the planet “P”, addressing CO2e, the CO2 equivalence, if we address that in a smart way, the planet “P” could actually help the profit “P”. And I think that many companies are really understanding this. And I would say that it’s very common here in the Nordics that companies are addressing this heavily. And we are a French company, and VINCI has really aggressive goals in terms of reaching this.

I should mention, since many don’t know what VINCI is: VINCI is a 260,000-employee company, so it’s a really big organization, and it takes this really seriously.

(On screen: Going green slide with image of a business team holding a plant together)

Christina Cardoza: Great, and Chris, you mentioned in your intro you guys are really working to help businesses set those goals, reach those goals. So, I’m wondering how you’re seeing them get started, what this sustainable journey looks like for a business.

Chris Cutshaw: Yes, definitely building on what Jan was just talking about, this is becoming not only just an incredibly important topic for the globe, but companies are being really forced, both by public perception and by policy, to figure out how they can become sustainable.

So, from a carbon perspective, which is a really critical aspect of sustainability, transportation, or supply chain activity outside of manufacturing, is going to be the second-biggest carbon-producing aspect of companies’ businesses.

So, first and foremost, they need to establish a baseline of what’s happening within their supply chain to understand where they can actually focus on improving and eliminating the amount of carbon that they’re producing.

So, we help companies by baselining their modes of transportation. When they’re shifting maybe to airfreight to accelerate something, understanding the impact when they’re doing that. Obviously, it’s important to get product into the market when customers need it, but there are also tradeoffs that you need to make. Carrying more inventory, understanding where to, let’s say, be more sustainable within your business and make lower-impact decisions. So, we baseline their supply chain, and then from a transactional perspective, help them choose in the moment when you have various different metrics to consider, like time and cost. And now, we’re actually inputting carbon output into that equation, so they can make a balanced decision.

And then, ultimately, reviewing how you moved against your baseline and making sure that you’re accelerating towards that goal, whatever that may be to improve your sustainability. And that’s the carbon aspect of it.

But we’re also helping companies just have a sustainable process. If you look at right now the talent and labor shortage, especially within supply chain talent is growing exponentially. And so, there’s more jobs than people to fill them. And really, if we help companies build a process where people want to stay, work, and grow, that’s also a part of the sustainability concept that companies are thinking about. How can my supply chain run and accelerate on its own without having to, let’s say, fire hop, or jump from fire-to-fire like we’ve seen many companies do through the pandemic? And then with all these supply constraints and port issues, how can we build a sustainable process through technology, through automation to help them achieve their goals both from keeping talent and retaining talent, but also reducing the amount of impact that they have on the environment?
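The balanced, in-the-moment decision Cutshaw describes, weighing time, cost, and carbon together, can be sketched as a weighted scoring function. The shipping options, figures, and weights below are hypothetical.

```python
# Sketch of carbon-aware mode selection: normalize each metric against the
# worst option, then pick the option with the lowest weighted score.
# Lower is better on every metric. All values here are invented.

def best_option(options, weights):
    """options: {name: {"days": .., "cost": .., "kg_co2e": ..}};
    weights: importance per metric. Returns the best option name."""
    def normalized(metric):
        worst = max(o[metric] for o in options.values())
        return {name: o[metric] / worst for name, o in options.items()}
    norms = {m: normalized(m) for m in weights}
    scores = {
        name: sum(weights[m] * norms[m][name] for m in weights)
        for name in options
    }
    return min(scores, key=scores.get)

options = {
    "air":   {"days": 2,  "cost": 9000, "kg_co2e": 5000},
    "ocean": {"days": 30, "cost": 2000, "kg_co2e": 400},
    "rail":  {"days": 14, "cost": 3500, "kg_co2e": 600},
}
# Carbon weighted as heavily as speed and cost combined
print(best_option(options, {"days": 0.25, "cost": 0.25, "kg_co2e": 0.5}))
```

Shifting the weights shifts the answer: put all the weight on speed and airfreight wins, which is exactly the tradeoff an organization's priorities must resolve.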

Christina Cardoza: There is no surprise that there’s a lot that goes into being sustainable from a carbon emissions standpoint to just within your business how it runs and operates. On top of that, there’s all of these other challenges like you both have mentioned, governmental regulations, global green energy initiatives.

So, Jan, I’m wondering if you can expand a little bit on the challenges businesses face when they’re trying to become more sustainable. And how some of these government regulations and global initiatives are putting pressure on them even more.

Jan Hellgren: Thanks. And there are so many huge challenges. I wouldn’t say that I’m aware of all these challenges. But a few of them are that if you have – let’s say that you have a goal for 2030, you can’t wait until 2029 to do your changes. You need to break this down into chunkable pieces, so you can act on it right now.

You also – to be able to then make these changes, you need your organization to be aware of what is happening, and you need to be able to engage them. I just have an example from my driving yesterday.

I drove to another city here, it’s about 500 kilometers away from here. The last time I drove, I had gasoline usage that was about 40% higher than this time where I really tried to be careful with my right foot. So, you need your organization to be engaged in doing these changes. And how do you do that? Well, by creating awareness.

And then how will you keep track? Just as Chris said, if you have your baseline, how do you keep track of where you are against that baseline? I would say that what is happening now, as companies start to address these things, is that to keep track, they are doing a lot of manual work. So, they are reading invoices, converting them into Excel files, and then importing the Excel files into their ERP system. And it’s creating an enormous amount of administrative burden.

So, there are so many other challenges also, but that would be a few of them.

Christina Cardoza: Great, and I want to touch on something that Chris just mentioned earlier that it’s not only about reducing carbon emissions. There’s really a lot that goes on within the business, within your own workforce that can help it become more sustainable.

(On screen: The Role of the Supply Chain and illustration of  supply chain elements: vehicles, trees, planes, and boats)

So, Chris, can you start off by talking about what does it mean to have a sustainable supply chain, and the role the supply chain plays in these sustainability initiatives?

Chris Cutshaw: Yes, I think as I mentioned before, supply chains are really front and center if you think about manufacturing or whatever your business is to moving goods into market. When you talk about carbon output, that’s really a huge driver.

So, supply chains are in the bullseye, for good or for worse, on how to document, figure out, and identify, one, their output currently, and strategies that they can take to mitigate.

And so, the role of the supply chain in becoming sustainable is also being agile, flexible, connected, and visible. So, really, companies right now are on a journey. So, they’re trying to start by connect all their partners and all of their movements that are happening within their supply chain to gain visibility. The reason you want to gain visibility is to give your customers an understanding of what’s happening. But also for you and your internal systems to be able to understand where everything is at in a complex global environment, and make really critical decisions, and prioritize key metrics in your decision-making process transactionally.

So, you don’t review a quarter and see how you performed and try to change for next quarter. In the moment, when you’re making those critical decisions, are you taking every factor into consideration?

So, companies are trying to connect right now. I’d say the majority of companies right now are just trying to understand what’s happening across the entirety of their supply chain. Then they’re trying to move to more of a predictive phase.

So, can I avoid disruption? Can I see what’s around the corner? Can I identify mitigating strategies to be more resilient? So, if I don’t have to expedite a bunch of airfreight because I don’t have any other alternate sources of supply, if I build a resilient strategy where maybe one of my suppliers goes down, or becomes embargoed by a political – a new geopolitical event, I can already source and have strategies to back up that supply, then I don’t have to go and accelerate airfreight, which is going to produce carbon, and really allow my employees to jump from issue to issue. And I have a plan in place to mitigate that.

So, can I become predictive? Can I get connected? Understand what potentially is going to happen. Then what companies are trying to do is to move into an orchestration where every system, every division, every, let’s say, silo within the company are all reading from the same sheet of music. So, you don’t have logistics yelling at manufacturing, or planning accelerating product through logistics and making them expedite freight. But you’re all saying, “Here are our common metrics, here are our common data assets and common data model that we’re all reading from, we’re all acting from”.

And then, eventually, they want to move to a phase where they’re cognitive, where you don’t have humans making decisions, but you have your systems and you really are extending productivity, allowing one person to do 10x more than they’re doing right now.

And that’s really the role, and what supply chain leaders are thinking about to take from where we’re at now in a very transactional, manual environment to become cognitive, to become very connected. And then take into consideration priorities that you need to consider every time you’re making a decision. And that includes sustainability, that includes carbon. And you’re taking that decentralized approach away from all the decision-makers that are out there in your potential supply chain. And every transaction you’re making the right decision based on your organizational priorities.

That’s really what we see companies trying to do right now. And we’re helping them with visibility technology. We’re helping them with platforms that allow them to connect to that. And also, connecting with IoT and other types of sensors to understand where everything’s at in a complex global environment.

Christina Cardoza: So, it sounds like just by streamlining your operations, connecting your systems, understanding what’s going on, and looking at the right metrics, you’re helping toward this overall goal of being more green and reaching these net-zero goals.

So, how important or where does the supply chain come in some of these net-zero goals? Is this sort of the first thing businesses should be looking at when trying to reduce their carbon emissions?

Chris Cutshaw: If your business does manufacturing, that’s going to have probably the biggest output or carbon contribution that your company is making. Next up is how you move and facilitate movements of product throughout the world.

And when you’re facing those really critical, tight decisions: the reason I was talking about becoming connected is that a lot of times people are measuring last quarter, seeing how well they did, and saying, “Okay, let’s implement these strategies”. Well, when you get into a pinch and you need to make some really quick decisions, and maybe a line’s going to be down for production, or you’re not going to meet the date for a critical launch, you’re not always going to consider environmental impacts in those decisions.

So, if you can actually build those algorithms, or that type of decision-making, into your process, and maybe carry more inventory, or build in local regions with a build-to-order type supply chain instead of just stocking and moving, that’s going to help you be more agile in these moments. And it will also allow you to align the organization toward a north star.

You don’t have people trying to hit a certain P&L or achieve a certain in-stock ratio; instead, everyone is making decisions based on organizational priorities. And if carbon reduction is your net-zero goal, or whatever that goal is, you need to make sure you’re considering it every time you make a transportation decision.

Obviously, using ocean, using ground transportation that’s electrified, potentially, or not using a lot of airfreight or direct truckload, is going to allow you to reduce your burden and reduce the carbon output you’re producing. So, we help companies transform their supply chains, saying, “Here’s the cost-risk-benefit analysis of maybe carrying more inventory within your supply chain and shifting to more environmentally friendly modes of transport”. We help them make that analysis, find where it’s beneficial, and then work out how they can actually do that over time to achieve their goal.
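As a rough illustration of the mode-shift analysis described here, the comparison can be sketched in a few lines of Python. The emission factors and function names below are illustrative placeholders invented for this sketch, not real GLEC figures; actual factors vary by fuel, vessel, and route.

```python
# Approximate factors in grams CO2e per tonne-kilometre (illustrative only)
EMISSION_FACTORS_G_PER_TKM = {
    "air": 600.0,
    "truck": 100.0,
    "rail": 25.0,
    "ocean": 10.0,
}

def shipment_co2e_kg(mode: str, weight_tonnes: float, distance_km: float) -> float:
    """Estimate CO2e in kilograms for a single shipment leg."""
    factor = EMISSION_FACTORS_G_PER_TKM[mode]
    return factor * weight_tonnes * distance_km / 1000.0

# Example: shifting a 2-tonne, 8,000 km movement from air to ocean
air = shipment_co2e_kg("air", 2.0, 8_000)
ocean = shipment_co2e_kg("ocean", 2.0, 8_000)
print(f"Air: {air:,.0f} kg CO2e, Ocean: {ocean:,.0f} kg CO2e")
```

Even with rough factors like these, the gap between modes is large enough that a cost-risk-benefit analysis of carrying more inventory in exchange for slower, cleaner transport often pays off.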

Christina Cardoza: So, it sounds like there are a lot of moving pieces in all of this: making sure everything is streamlined, correct, and moving on the factory floor if you’re in the manufacturing industry; ensuring you have the right inventory available; and making sure the right amount is being deployed and delivered on the road.

(On screen: Why Technology Matters slide with image of trees and data points on top of it)

So, I want to talk a little bit about the technology that goes into this. Because we’re talking about a lot of systems connected to each other, and a lot of data and metrics that you need to be collecting. And a lot of this is too much for a human to understand on their own. And since we’re talking about all these systems being connected, Jan, I’m wondering if you can tell us a little bit about the role of the Internet of Things in these green efforts.

Jan Hellgren: So, the Internet of Things is, as you say, just a technology; it doesn’t bring any value in itself. But, just as Chris mentioned, you can use it to really solve a lot of your challenges.

So, as I mentioned before, one of the big challenges for organizations is this big burden of manual work, to be able to enter this data into your systems. If you’re going to keep track, if you’re going to know your CO2 footprint at any given time, then you need this data. And the interesting thing is that a lot of this data is already digitized. You have it in your existing systems. It’s just that these existing systems are not connected to the internet or to your endpoint.

So, that might be the case. And some of the data is not yet digitized, but then you have all kinds of sensors for measuring all kinds of gases, and all kinds of emissions. And then in both cases, IoT could really be the carrier of bringing this data to your endpoint.

So, you put your edge gateway where you have the data, and you fetch the data either from existing systems or from sensors. And if you combine that with API integration towards other sources of external parties, then you can really have the data that you need to keep track over time.
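The edge-gateway pattern described here, polling local sources, normalizing readings to one schema, and forwarding them upstream, could be sketched roughly as below. All class, source, and function names are hypothetical placeholders; a real deployment would use an actual IoT hub client (for example, the Azure IoT device SDK) rather than the stand-in shown.

```python
# Illustrative sketch of an edge gateway: poll local systems and sensors,
# normalize everything to one JSON message format, forward to an IoT hub.
import json
import time

def read_plc_register() -> float:
    """Stand-in for a read from an existing PLC or SCADA system."""
    return 42.7  # e.g., kWh consumed since the last poll

def read_gas_sensor() -> float:
    """Stand-in for a direct sensor read (e.g., ppm of a measured gas)."""
    return 1.3

def normalize(source: str, value: float, unit: str) -> str:
    """Convert any local reading into one common JSON message."""
    return json.dumps({
        "source": source,
        "value": value,
        "unit": unit,
        "ts": time.time(),
    })

def forward_to_hub(message: str) -> None:
    """Stand-in for the IoT-hub client send call."""
    print("sending:", message)

# One polling pass over the gateway's configured sources
for source, reader, unit in [
    ("plc-line-1", read_plc_register, "kWh"),
    ("gas-sensor-3", read_gas_sensor, "ppm"),
]:
    forward_to_hub(normalize(source, reader(), unit))
```

The key point Jan makes survives the simplification: much of the data already exists in local systems, and the gateway’s job is mainly to collect it, convert protocols, and carry it to the endpoint.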

Christina Cardoza: And I know capturing and tracking some of this data is a big part of what Axians and VINCI Energies does. So, can you expand a little bit on the ways that you guys help businesses make sense of all this data and how, in turn, that’s helping them become more sustainable?

Jan Hellgren: Yes, we don’t have time to explain that in detail, of course. But I can say that what we normally do is put in what you would call an edge gateway, normally an Intel NUC. We put that onsite and let the gateway communicate with existing systems. It could be an existing SCADA system, an existing PLC, or whatever sensor is out there already. Or we put other IoT devices there as well, bringing all this data into the edge device, where you then combine this data and convert it into whatever protocol you need in the back end.

And then we send this to an IoT hub, normally Microsoft Azure. And on the back end of that IoT hub, we are then using this GreenEdge Platform that we created for calculating, for aggregating, and presenting the CO2 footprint of your company.

We are addressing the full Scope 1, Scope 2 and, of course, not all of Scope 3 yet, but a really important part of Scope 3 is already addressed also.

Christina Cardoza: Perfect. And Chris, you mentioned the stages of a sustainable supply chain and touched upon some of the technologies or ways C.H. Robinson is helping businesses become successful on this journey. But what other components or technologies do you think are necessary to really be successful?

Chris Cutshaw: Yes, so similar to Jan’s, we also leverage IoT in our partnership with Intel and Microsoft to make that happen.

(On screen: Chris displays his edge gateway device)

I actually have a little gateway device here; it’s about the size of two iPhones. It connects via cellular and GPS technology, so you can put it on products on a multimodal movement. Often our goods today, especially in North America, are being imported from Asia or other locations. That means the product is going on a truck. That truck is going to go into a port. The container is going to get put onto a vessel. The vessel is going to go to the destination port. It’s going to be put on a rail. That rail is going to go to a destination rail location. It’s going to be pulled, delivered, and dropped off at a distribution center.

So, you need to understand all of the points in the journey, because each of those modes or movements is actually producing some sort of carbon along the way. You even want to understand what vessel it’s on. Is it a newer vessel that’s better on fuel emissions? Is it an older vessel that’s actually contributing worse to my carbon footprint? That’s really the true way you can measure your current output. You can definitely make assumptions and say, on average, this is generally what it takes. But to truly identify what’s happening, you need to understand every step of the journey, so that you can eliminate partners, or leverage partners, that are going to really help you achieve your carbon goals.

And then we have products like our visibility technology, Navisphere Vision. Navisphere is a platform we built proprietarily at C.H. Robinson, drawing on over 115 years of helping companies move supply all over the world. So, we take visibility inputs from a whole bunch of data elements, like sensors, vessels, ports, and terminals, combine that data in real time, show companies where their inventory is, show them where their freight-in-motion is, and allow them to do more reporting and analytics on on-time performance, carbon, inventory in full, and things of that nature.

And we have also rolled out a product across C.H. Robinson called Emissions IQ. This is really the first dashboard certified by GLEC, an internationally recognized industry body for reporting and understanding carbon output by mode of transport. With a few simple setup items, companies leveraging C.H. Robinson can quickly understand, from a transportation perspective, their current baseline as freight moves through our systems. It helps them plan and identify areas of opportunity: maybe they don’t need to be using airfreight, maybe they don’t need to be expediting parcels, and there are consolidation opportunities.

So, if I send 20 shipments from one place to another, can I consolidate those into one much heavier, denser movement, which is going to improve the utilization of my transportation?

So, we use IoT technology, we use visibility platforms, we connect that data through streaming architecture and API architecture. And then we baseline and help companies understand their analytics in real-time. One of those components is carbon output and baselining their carbon emissions.
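The consolidation point can be made concrete with a back-of-the-envelope sketch. The capacity and per-trip emissions figures below are invented assumptions for illustration, not real C.H. Robinson data; the intuition is simply that a partly loaded truck still burns most of the fuel a full one does.

```python
import math

TRUCK_CAPACITY_KG = 20_000
# Assumed per-trip CO2e for one truck on this lane (kg), regardless of load
CO2E_PER_TRIP_KG = 900.0

def trips_needed(shipments_kg: list, consolidate: bool) -> int:
    """Trucks required for a set of shipments on the same lane."""
    if consolidate:
        return math.ceil(sum(shipments_kg) / TRUCK_CAPACITY_KG)
    return len(shipments_kg)  # one under-utilized truck per shipment

shipments = [800.0] * 20  # twenty 800 kg shipments on the same lane
before = trips_needed(shipments, consolidate=False) * CO2E_PER_TRIP_KG
after = trips_needed(shipments, consolidate=True) * CO2E_PER_TRIP_KG
print(f"Unconsolidated: {before:,.0f} kg CO2e, Consolidated: {after:,.0f} kg CO2e")
```

Under these assumptions, the same 16 tonnes of freight fits in a single well-utilized truck instead of twenty light ones, which is exactly the utilization argument Chris is making.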

Christina Cardoza: Now, this all sounds great in theory, but I’m curious how it actually looks in practice.

(On screen: Sustainable Businesses in the Real World slide with illustration of various metrics and reports)

So, Chris, I’m wondering if you have any examples of customers that you’ve helped, the challenges they face, how you stepped in, and how they really utilized the technology from C.H. Robinson to meet their sustainable supply chain goals. And you don’t have to name names if you can’t, but any examples you can provide would be great.

Chris Cutshaw: Yes, well, one we can name, which I just mentioned in my last answer and whose partnership we’ve publicly announced, is Microsoft.

So, Microsoft has made an ambitious goal. I believe by 2030 they want to be carbon-neutral, and by 2050 they want to be carbon-negative, replacing all of the carbon they’ve ever produced as a company. Very ambitious goals. And when that comes down through the organization, Microsoft’s supply chain, again, is in the bullseye: “Okay, you guys are a big producer, help us figure out a mitigation strategy”.

So, first and foremost, we helped them identify all of the systems, different processes, and different decisions happening within their global complex network. They service, I think, over 170 different countries, shipping 10,000 to 20,000 SKUs every year. So, understanding, and rolling out a platform to be able to track and connect, all of those transactions is the foremost step companies need to take before they can influence change over time.

So, you need some sort of execution platform, whether it be through your manufacturing or supply chain process that helps you see in real-time, connect to your partners, and make really in-the-moment decisions that are based on your priorities as an organization.

So, we’ve rolled out our global TMS, a transportation management system, put it in Azure, which is their cloud solution, and we host our products in their systems to help build out sustainable processes and track their cargo, which is very prone to potential theft or damage. So, IoT capabilities give them an understanding of the light, temperature, humidity, tilt, and shock of every container and every pallet moving in their supply chain. Giving them that live-streaming global common data model is an example of how we’ve built this with a real-life customer.

And I would say that doesn’t come with a flip of the switch. If anybody tells you, “Hey, you just turn this thing on and all your problems are going to go away”, I would say that’s probably a bit of snake oil. This takes iteration. This takes a lot of focus and a lot of buy-in across many different parts of your organization to really influence change. And you need to find partners and technology platforms that are going to allow you to do it, that are future-proof, and that grow with you over time.

So, we have a managed service that goes along with our technology that brings people and process, that combines the global technology to help them evolve and transform over time, and stay consistent with what’s happening in the industry and make the right decisions.

So, that would be an example, and we have many others just like that here at C.H. Robinson. We support over 100,000 customers and have about 75,000 carriers that are connected to our platform from all different sizes. So, it’s a journey. You definitely have to make the investment. You have to jump in. You have to iterate. And that’s how we found success in helping companies really build sustainability from a practical sense.

Christina Cardoza: I love how you said it’s not a flip of a switch. I think a lot of times companies get frustrated when they get on these journeys because they don’t see results fast enough. But like you mentioned, it really is a journey and we keep talking about hitting these net-zero goals or these sustainability goals. But is there really an endpoint to this? Once you’ve reached that net-zero goal, is your sustainability effort over, or what happens after that?

Chris Cutshaw: I think we have some big targets to hit. I don’t know that anybody could project out. But I know one thing that we constantly evolve as a people and as humanity. And I would imagine that once we get there that we’re going to find some other targets that we’re going to go after, or we’re going to find some new innovative ways to move and build products.

Potentially, and this could really impact our industry, is there 3D printing or micro-fulfillment, the ability to build and manufacture in-region with very consistent and sustainable processes that source from the countries in which they’re manufacturing?

So, can we build processes that eliminate redundancy, eliminate complexity, and allow us to fulfill customer needs immediately without impacting the environment? I think a lot of companies will achieve carbon-neutrality by offsetting their carbon, but not eliminating it. So, I think we’ll actually want to go after full elimination, and thinking about how we can move in maybe an electric or nuclear way across many different modes is really exciting. But I know one thing: we will find some other things to chase after if we achieve this goal.

Christina Cardoza: Absolutely. And Jan, you mentioned the GreenEdge a couple of times. I’m wondering if you can expand on some of the businesses or industries the GreenEdge has been helping with sustainability efforts. And where it comes in on the sustainability journey.

Jan Hellgren: So, GreenEdge is actually a generic solution. What it does is take in data from many different sources and convert it into CO2e. And it also accounts for whatever actions you’re taking for the coming year in terms of emitting less CO2, or other gases for that matter.

So, what you will have is a baseline that you measure all your emissions against. And the interesting thing is that you measure it at the lowest level of your company. So, it could be your business unit, and after that you aggregate the data up to your regional unit, and up to what we call a Pole, at the top level. And depending on what role you have in your organization, you can see the emissions of whatever you are responsible for. We use it ourselves; we have a very, very diverse business. We do it for real estate. We do it in the utilities business. And for industry.

But as I said, since it’s using data from whatever sensor or whatever system, it could be used in almost any vertical.
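The roll-up Jan describes, emissions recorded per business unit, aggregated through region to the top level and compared against a baseline, might look something like this in miniature. The organization tree, names, and figures are all invented for illustration.

```python
# Hypothetical emissions (tonnes CO2e) recorded at the business-unit level
org = {
    "Pole North": {
        "Region A": {"BU Real Estate": 120.0, "BU Utilities": 340.0},
        "Region B": {"BU Industry": 510.0},
    },
}

baseline = {"Pole North": 1100.0}  # tonnes CO2e in the baseline year

def region_total(business_units: dict) -> float:
    """Aggregate business-unit emissions up to one region."""
    return sum(business_units.values())

def pole_total(regions: dict) -> float:
    """Aggregate regional emissions up to the Pole (top) level."""
    return sum(region_total(bus) for bus in regions.values())

for pole, regions in org.items():
    total = pole_total(regions)
    delta = total - baseline[pole]
    print(f"{pole}: {total:.0f} t CO2e ({delta:+.0f} vs. baseline)")
```

Because each level is just a sum over the level below, anyone in the organization can be shown exactly the slice they are responsible for, which is the role-based view Jan mentions.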

Christina Cardoza: Now, we mentioned some other big names in this conversation. Chris, you mentioned Intel and Microsoft, and I should mention that the insight.tech program is an Intel-owned publication. But I think it’s clear that these goals are aggressive, there’s a lot that goes into it and no one company can do it alone.

(On screen: The Power of Partnerships slide with image of a human hand about to shake hands with an illustrated hand made out of green plants.)

So, Chris, I’m wondering how else you’ve been working with Intel and Microsoft and other organizations, and what has been the value of the partnership ecosystem to meet sustainability goals.

Chris Cutshaw: Yes, starting with the comment you made there: you cannot do this on your own, neither us as a provider nor our shippers or customers that are moving freight. We all need to collaborate and collectively build solutions that are going to help us achieve these goals.

And how we partnered with Intel has been really on a technology front. So, they’ve helped us manufacture and build IoT devices, deploy those devices across a variety of different customers.

Now, one way we’re using them is to track what cargo ships, trucks, and rails the products are on. We’re also able to track if anybody opens a container when they shouldn’t, or if there’s potential damage. Or let’s say we’re shipping in the winter, bringing berries and raspberries from South America into the US; we want to make sure those maintain a certain temperature and don’t get spoiled and damaged. That’s sustainability too, not throwing away a bunch of food that we don’t have to. So, IoT and Intel have helped us build those technologies.

And there are other ideas on partnership and things we’re thinking about. In our industry, we have a term for a lot of the companies we compete with: we’re really frenemies. In some ways we are competing, but we’re also helping a customer build solutions. So, can we have collaborative solutions that go beyond our four walls as a company, keeping in mind our priority as a civilization to do the right thing by the planet, and to take advantage of the data and technology that’s out there, so we can create these really innovative solutions? And we’re not always focused, necessarily, on our bottom line, but on a collective output.

Now, that’s a huge statement, and I would say there’s a lot of other priorities that get in the way of that. But the more we can keep that into consideration as governments are thinking about how they foster and almost enforce that collaboration, I think that’s a big push on partnerships and finding the right folks within our industry that can help us achieve these goals. And collectively, can we help shippers that are really moving products, that are manufacturing products achieve these goals?

And just to close on that, Intel and Microsoft have been huge partners from a technology side to help us deliver that.

Christina Cardoza: I love that point you made about frenemies. You’re absolutely right. Everybody is sort of competing, but at the same time, there’s a lot of different pieces that go into all of this. And different businesses or organizations might have more knowledge or expertise or software in one area than another. So, it really takes a team to put it all together.

Jan, can you expand on some of the technology partnerships that Axians and VINCI is working with to make this happen?

Jan Hellgren: Well, on the OT side, there are a lot of them. On the IT side, I would say the main partners are Intel and Microsoft. We’re using Intel hardware, just as Chris said, for gathering data through an IoT gateway. We have a microservices platform that makes it possible to do any kind of security management and any kind of updates on the Intel NUC, giving us an automatic way of managing a big quantity of gateways at a time. And we send the data to the Microsoft Azure IoT hub.

And the interesting thing there is (I know all of you know about Microsoft Azure, but the people listening to this might not) that you have an extremely big toolbox of solutions for creating any kind of value with this sustainability data. We have been using these toolboxes to create our solutions, but we are also helping organizations make use of them. We have Microsoft Azure MVPs and architects who help our customers meet their goals, whether using our solution or creating their own.

Christina Cardoza: Great points. And we already mentioned how this is a journey; once we hit our goals, which are aggressive and way out in the future, there will be even more goals to reach, new technologies to apply, and different ways of doing things.

(On screen: Securing a Sustainable Future slide with image of solar panels and windmills)

But I’m wondering if we can stop and look into our crystal balls a little bit at what we can expect in the near-term future. What is this all working towards, and how far off do you think our reality of becoming more sustainable and meeting these goals is? Chris, I’ll start with you.

Chris Cutshaw: Yes, in our industry, in transportation, just look at North America. The average truck fleet, or company that owns trucks, is about one to five trucks. So, you have a crazy large industry being run by a bunch of micro companies making decisions in the moment. So, I think about the ability to electrify and take different solutions to those smaller companies, so we can apply them on every shipment across North America, start there, and then move into other areas of the world as the technology becomes available. Can we replace those fleets with more sustainable options?

I think another interesting thing is a lot of our imports into the US, from an ocean perspective, are moving on vessels. And there’s a lot of cool initiatives, and I’ve seen some really interesting concepts of huge, large sailboats that are drafting and not using actual engine power to go and move forwards.

So, if you really can have an electric vessel or a sailboat, a large sailboat that’s moving containers into the US, that’s being picked up by an electric truck and delivering it, you can eliminate carbon on an international movement, in our current supply chains.

And then I think another thing that we’re thinking about as an industry and governments are thinking about, especially coming out of this pandemic, is how can we be sustainable within our own region and not rely on international trade as much?

Now, that won’t necessarily be the best thing for our company. But it will be the best thing for the world if, as a country and as an international body, we can find a way to manufacture, procure, and deliver sustainably within a region, and not have to rely on moving things across the world over an elongated amount of time. That’s going to speed up transit. That’s going to reduce carbon.

So, a lot of those things are coming to fruition, and there’s going to be a lot of automation. So, autonomous vehicles are not too far down the road. I’d say in the next five years, we’re going to see the middle mile taken out of supply chains from a driver perspective. So, can we focus that talent elsewhere? And really, that’s going to allow us to be more sustainable, to run more consistent networks and not have to see a spike in expedites or a spike in cost that we normally see in the different cycles that we operate in within supply chain.

And I also think the amount of data we’re able to intake, process, consume, and predict from is growing exponentially day over day. So, as a company, you’re going to be able to receive a lot more information, make more real-time decisions, and allow your people to be more productive and feel more empowered within their roles.

As I mentioned earlier, I feel that with some of the algorithms, with some of the capabilities that are coming to bear within our industry, people may become 10x more productive, and so we don’t have to grow and build these large teams to get things done. But we can sustain with our current size as we become more automated and more capable with some of the solutions that we’re rolling out.

So, all of those things are – I like to see the glass as half full. We have a lot of challenges on the horizon but if we can come together as different organizations and think about the best way to solve some of these challenges, I feel very confident that the future is bright for us as an industry.

Christina Cardoza: Great. I want to repeat one of the points you made, which was a lot of these things – some of these things that businesses are doing may not benefit the company the most, but it is going to benefit the world and making it better. And that’s really what’s behind all of these goals and initiatives. So, I love that you said that.

Jan, is there anything you want to expand on what a sustainable future looks like for Axians and VINCI?

Jan Hellgren: Yes, I just want to thank Chris for the glass-half-full view, because looking into the crystal ball right now, it’s not very pleasant. Emissions are increasing, not decreasing, so it’s kind of scary.

I think it’s not about technology. A lot of the technology we already have would help us a lot in achieving the goals. But this is a question of do or die. Businesses have to start changing right now, and we need to keep on track in doing this. And when I say do or die: for our company, if we’re not addressing the sustainability issue, we will probably not be competitive in the very near future, and we would, as a company, die. And if we are not, as a society, addressing this really, really fast, then it’s not looking good at all.

So, thank you, Chris, I really liked what you were saying. And I have actually the same opinion as you have.

Christina Cardoza: Great. Well, unfortunately, we are running out of time, and I know we covered a lot and there’s still so much more that we could cover.

So, before we go, I just want to throw it back to each of you for any final key thoughts or takeaways you want to leave attendees with today.

Jan, I’ll throw this one to you first.

Jan Hellgren: Yes, just what I’ve already said: if we address this fast, we have a really, really good competitive advantage over whomever we are competing with.

I just want to mention an example from Sweden that I heard just last week. We have a company called Green Steel that is using hydrogen to create steel. They haven’t produced one kilo of steel yet, I think, but they already have a 10 billion order book.

So, if you’re addressing this really fast, you have a competitive advantage and we should use that.

Christina Cardoza: And Chris, any final key thoughts or takeaway?

Chris Cutshaw: Yes, I appreciate the time today, nice talking with you, Jan, and Christina, thanks for moderating.

I would say, really, that if those listening want to learn more about what we can offer, and how we help identify, baseline, and produce solutions that can make your business, supply chain, or transportation needs more sustainable, please reach out at chrobinson.com. You can ask to talk to an expert. We’re happy to walk through what we’re doing now, and to be even more consultative in how we think about the future and about sustainable practices within our industry.

So, thanks for the time and I look forward to what we can achieve together.

Jan Hellgren: And I would like to add the same as what you were saying, Chris; that goes for Axians and VINCI Energies. Whoever would like to contact me can contact me personally, and I will point you to the right person to talk to.

Christina Cardoza: Great. Well, with that, I just want to thank you both again for joining the webinar today and for your insightful conversation.

(On screen: Thank you slide)

I also want to thank our audience for listening today. If you’d like to learn more about building a sustainable supply chain, I invite you all to visit insight.tech where we have a wealth of articles and podcasts on the subject.

Until next time, I’m Christina Cardoza with insight.tech.

(On screen: insight.tech logo and thank you animation)

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.