Innovative AI Solutions Unlock Retail Transformations

The past couple of years have brought many challenges to online retailers, delivery services, restaurants, and brick-and-mortar stores—from labor shortages to higher customer expectations and more. But a wide range of innovative AI solutions now available across the industry eases some of these pain points for business owners.

With these new and advanced capabilities in hand, businesses are increasing in-store and online sales, enhancing the customer experience, and improving overall operations.

Retailers and restaurants alike are embracing technologies to change the way they operate their businesses. For example, you can now walk into a store and not only use self-checkout but have the option of ordering everything from a digital screen and getting it delivered to you at the front of the store. And that’s just the start of it.

CV and AI Speed Up Food Delivery

With the influx of takeout and delivery orders that restaurants face today, traditional methods of placing orders are no longer enough. That’s why companies like UdyogYantra Technologies work hard to transform the food delivery space with AI and IoT technologies. For instance, by offering solutions like cloud kitchens, business owners can create takeout-only restaurants that automatically do everything from processing the order to preparing each meal with high-quality ingredients. The solution uses multiple cameras, thermal imaging, label scanners, and other sensors, and operates with deep-learning computer vision algorithms developed using the Intel® Distribution of OpenVINO Toolkit.

Ghost and cloud kitchens help ease delivery demand and decrease wait times for customers. And with the new technologies, ghost kitchen operations are streamlined from sanitation to food preparation.

With these automated processes in place, restaurant owners can benefit from reduced costs by minimizing overfilled dishes, food waste in preparation, and customer order rejections.

#Retailers and restaurants alike are embracing #technologies to change the way they operate their businesses. @intel via @insightdottech

In-Store Shopping Made Easier with Omnichannel Experiences

To enhance the customer experience and sales in stores, companies like Screenvend apply digital touchscreens and stockroom robotics solutions designed to modernize traditional brick-and-mortar stores. The need for in-store shopping to parallel the speed and convenience of online shopping has become increasingly evident, but what is the best way to incorporate the digital experience in brick-and-mortar locations? One answer is to replace shelves with interactive displays in conjunction with instant robotic delivery in-store. 

Shoppers have the option of filling their virtual shopping carts in stores and then tapping a prompt to complete the transaction. Once payment is complete, the transaction is fulfilled by retail robots and dispensed through a conveyance at the POS. This process facilitates the in-store shopping experience for the customer, while boosting sales and reducing shrinkage for stores.

Skipping the Checkout Line at AI Smart Stores

And with customer convenience top of mind when implementing new technologies, companies like Cloudpick are creating a seamless checkout process with no lines. With labor shortages becoming increasingly evident, some companies are looking to extend business hours without relying solely on staffing.

Enter the AI-powered smart store. With a simple download of a mobile application, customers can reap the many benefits of an AI smart store, including fully stocked shelves, flexible hours, and—perhaps best of all—no lines.

AI technology is helping retail fix many long-standing operational kinks. As stores move toward partial or full automation, AI smart stores will help streamline in-store inventory systems, allowing goods to flow through the supply chain more smoothly and eliminating delays and stock shortages. Additionally, these types of stores will help move toward a future retail ecosystem that is fully automated and sustainable.

AI and ML Boost Fashion Fulfillment Speed

AI and computer vision (CV) are even playing a role in the fashion industry, thanks to companies like Aotu.ai, which automate quality inspection to streamline clothing production. The pressures of the fashion industry include not only precise color matching, flawless fabric, and consistent sizing but also faster order-to-ship times.

In such a quick-moving industry, advanced AI is helping speed up the production process with fewer manufacturing errors and faster shipping speeds. And with automated quality inspection enabled by computer vision (CV) and machine learning (ML) technologies, apparel suppliers can offset delays in the fabrication process and increase transparency for all stakeholders.

The use of AI across the retail sector is only beginning. One day, a store that is out of stock of an item we want might order it from another location and have it sent to our home the same day.

To be part of the change, and start creating innovative AI solutions for the retail industry, check out the Intel® Edge AI Certification Program or take the 30-Day Dev Challenge.


This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Connecting the Unconnected with Smart-Building Solutions

Bit by bit, one commute at a time, desk workers are shaking the moths from their blazers and A-line skirts and coming back to the office. But not necessarily every day, and not necessarily at the same time as all their coworkers. This new routine—the hybrid work pattern—has big implications for office buildings and the built environment. Having the lights and AC humming 9 to 5, if only a smattering of employees is on-site, is a worrisome waste of resources, from both a financial and an environmental point of view.

But can technology—and AI in particular—convert conventional, disconnected buildings into smart, sustainable buildings? Graeme Jarvis, Vice President of Digital Solutions at Johnson Controls, a smart-building solutions provider; and Sunita Shenoy, Senior Director of Technology Products within the Network and Edge Computing Group at Intel®, think it can. They talk about buildings that help businesses meet aggressive environmental goals; buildings that can leverage legacy infrastructure systems while getting connected; buildings that might actually compete with the ease and convenience of a home office.

How are businesses having to rethink their physical spaces today?

Graeme Jarvis: What does “hybrid work environment” actually mean? I think there are two key components. One is people—be they employees or guests or building owners. The other is the built environment itself, and how it needs to adapt to the new normal around sustainability, workplace experience, and safety and security. The pandemic proved that we can be productive from a home office or on the road, so now the challenge is on the employer’s side to create an appealing hybrid workplace. This gets into key enabling technologies, such as touchless technologies, and having a sense of control over them within the office environment.

We have a solution called OpenBlue Companion, which is an app that allows employees and guests to do hot desking, to book conference rooms, to pretreat those conference rooms based on the number of people expected. There’s also cafeteria integration and parking and transportation integration, so that when a person goes to the office, it’s actually a pleasant experience.

On the building side, the hybrid work environment is really a financial consideration: How do you optimize the footprint you already have? And what are you going to need moving forward to support your employees? That’s where we are right now—companies trying to rationalize what they have and what they will need. There are interdependencies around heating, ventilation, the air-conditioning system, the number of people that happen to be in a building—all of this is interconnected now. Companies are taking lessons learned and starting to apply them to realize their “building of the future”—be it a stadium, be it a port or airport, be it traditional office space.

At Johnson Controls we give clients an assessment around what they have and the efficiency of those solutions, based on the outcomes they’re trying to realize. Then they have an objective: They would like to be more productive. They would like to reduce expenses. They would like to have a safe and sustainable workplace.

“Buildings account for about 40% of the planet’s #CarbonFootprint. If we want to start talking about how to solve #sustainability challenges, the building—the built environment—is top of mind.” – Graeme Jarvis, @johnsoncontrols via @insightdottech

What are the implications of a hybrid work environment?

Sunita Shenoy: As companies ease their employees into hybrid work, they have to make it comfortable for people who are coming into an office by having things like frictionless access. They can do this by using data, AI, and wireless technology to make it easy to improve the quality of the workspace.

I hear stories of people saying, “My employees feel that their offices at home are more comfortable than their offices at work. So how do I make the environment at work as comfortable and safe for them as it is at home?” Technology can play a big role in implementing these solutions. But deployment is another key area that we need to focus on—how do we make the technology easily deployable using solutions, like the ones from Johnson Controls, with our technologies?

How can buildings become more energy efficient?

Graeme Jarvis: Most businesses have an ESG—environmental, social, and governance—plan or set of objectives. And this is used to communicate value-based management practices and social engagement to key stakeholders—employees, investors, customers, and even potential employees. Having a sustainability-footprint objective is the right thing to do—buildings account for about 40% of the planet’s carbon footprint. If we want to start talking about how to solve sustainability challenges, the building—the built environment—is top of mind. But the economics are also motivating businesses to act, because if you can be more efficient, you can save money. So how would one do that?

You’ve got certain equipment, such as heating, ventilation, and air-conditioning systems. You have multiple tenants within a building, and they all typically pay a fee for their energy consumption in the spaces they use. What if you could give those tenants insight into what their real usage is based on seasonality factors, based on how many people are in the building, based on when they’re in the building?

Some of our solutions through OpenBlue help clients understand what is actually going on in their environments and where they can improve. As soon as they recognize that there’s a financial consequence or a financial reward, then behavior starts to change. And then you get into the hardware, the software, the compute and AI that Johnson Controls and Intel can help with. But it really starts with that ESG charge, and the fact that buildings are a large opportunity from a sustainability-improvement standpoint.

What types of technologies go into improving sustainability?

Sunita Shenoy: It’s not just now—because of the pandemic and the advent of hybrid work—that we are realizing this; it’s a known fact that commercial and industrial buildings contribute a vast amount of carbon emissions, as Graeme mentioned. So it is our corporate social responsibility to reduce that carbon footprint. AI is becoming more capable as sensors advance. So how do you collect the data? How do you bring that data into a compute environment where AI can be applied to analyze and learn from it so the whole process can be automated?

In the past, a building would use manual processes, where from 8 in the morning to 5 in the evening the HVAC would be running and the lights would be running, regardless of how that building was being utilized. But since IoT has become a reality over the past seven or eight years, we’ve started to put in sensors—to utilize daylight, for example. We’ve automated the process of using AI to see the utilization of the building, and based on the utilization, the lights turn on or off as needed. And that reduces the amount of energy used in the building.
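The occupancy-and-daylight rule Sunita describes can be sketched as a simple control function. This is purely illustrative—the function name, sensor inputs, and threshold are assumptions for the sketch, not part of any real building-management API:

```python
# Toy sketch of occupancy-driven lighting. Real building-management systems
# expose far richer sensor data and controls; the 300-lux threshold here is
# an assumed value for illustration.

def lights_should_be_on(occupancy_count: int, daylight_lux: float,
                        min_lux: float = 300.0) -> bool:
    """Lights come on only when the space is occupied and daylight is insufficient."""
    return occupancy_count > 0 and daylight_lux < min_lux

# An empty office stays dark; an occupied one lights up only when daylight is low.
print(lights_should_be_on(0, 50.0))    # False: nobody in the room
print(lights_should_be_on(4, 50.0))    # True: occupied and dark
print(lights_should_be_on(4, 800.0))   # False: daylight is sufficient
```

The energy saving comes from the first case: instead of running lights on a fixed 8-to-5 schedule, the building spends nothing on unoccupied or daylit spaces.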

So, small steps first. First, connect the unconnected. Then assess the data in the building, and analyze where you can drive the energy-consumption optimization. And it’s not just about today and the pandemic and hybrid work; this has always been the process ever since IoT became a reality, and it is very feasible.

How do you connect systems that may not have touched each other before?

Graeme Jarvis: That’s a great word, “system.” I like to use a swimming pool analogy in which, historically, the security manager was in a lane, the facility manager was in a lane, and the building manager was in another lane. And products were sold to each manager to address each responsibility. But the way to look at this problem is really as an integrated system—we talk about smart, connected, sustainable buildings.

And now you’ve got all of this data from the edge—from security, heating, ventilation, air conditioning, the building-management system, smart parking, smart elevators, etc. When you pull all of this together, the benefit is that you can start to figure out patterns, and you can optimize around the heartbeat of what that building should be, given what it’s capable of doing with the equipment that’s in place and the systems that are in place. The first step is to assess what you have.

The next step is to look at where you would like to be three or four years from now, from an ESG perspective. And then you have to build a plan to get there. That’s the journey that most of our customers are on today. Then you can use AI and modeling to build twins. We have OpenBlue Twin, for example, to do “what if” scenarios: If I change this parameter, what might that do to the overall efficiency of the building?

Sunita Shenoy: From a technology standpoint, in any given building there are a number of disparate systems—it could be an elevator system, a water system, an HVAC system, a lighting system—and they all come from different solutions, different companies. Our advocacy is focused on using open standards. If everyone is building on open-standard protocols, then you are working off the same standards. So when you plug and play these different systems, they are able to collaborate, however disparate they are.

Graeme Jarvis: Right in the name “OpenBlue” is the word “open.” We are open because no single company can do this alone. Hence our great partnership with Intel. With open standards we can push information to third-party systems, and we can ingest information from third-party systems—all to the advantage of the customer.

Talk to us about the value of your partnership with Intel.

Graeme Jarvis: First of all, I’d be remiss if I didn’t say a little more—before I get into the technology—about the value Intel brings to our relationship. It’s all about the people. Intel has a great employee base and a great culture. They’re a pleasure to work with, from their executive leaders to their field teams.

There’s also the depth of expertise that they bring to a client’s environment, especially on the IT side. This complements our world at Johnson Controls, because we’re more on the OT side, and the IT and OT worlds are converging because of this connected, sustainable model that is a business reality.

Between the two of us we can solve a lot of customer challenges and address a lot of outcomes customers are looking to realize that neither of us could do independently. Intel silicon hardware, their compute, their edge and AI capabilities really help us bring relevant solutions—either from a product standpoint, because they are embedded with Intel compute and capability; or by enabling some of the edge capability that we bring to our clients’ environments through OpenBlue. Clients are looking for an end-to-end solution, and so that’s another area where we’re better together, and we’re better for our clients together.

Are there any final thoughts you’d like to leave us with?

Sunita Shenoy: The barrier to adoption for deploying a smart building is generally not the technology, because the technology exists, right? The solutions exist. The barrier is the people, those who need to make the decision to employ the smart-building solutions. I think the mindset of people needs to shift, and the IT and OT worlds need to collaborate by bringing the best practices of both together to solve these deployment challenges. Look at these challenges as opportunities.

Graeme Jarvis: There’s a tremendous opportunity before us as we address sustainability challenges. Those challenges are global in nature, and it will require global leadership at all levels to solve them. It can be hard to find work that is meaningful—work that provides a good economic benefit while also doing good for our planet. This call to action around the built environment is, I think, exactly that kind of work.

Related Content

To learn more about smart buildings, listen to the podcast Smart and Sustainable Buildings: With Johnson Controls. For the latest innovations from Johnson Controls, follow it on Twitter at @johnsoncontrols and LinkedIn.


This article was edited by Erin Noble, copy editor.

Uniting Industrial Communications with Open Standards

Over the years, few things have changed as much as manufacturing. Today, factory machines and industrial devices are constantly communicating, connecting to the internet, and exchanging massive amounts of data.

But while increased machine interactions are a transformative element of Industry 4.0, they’ve also opened the door to multiple challenges—including disparate devices hindering data transfer and a rising number of security threats.

In this podcast, we discuss attempts to reduce the complexities associated with smart factories, specifically the blending of IT and OT technology into a single network spanning wired and wireless technologies. We also take a close-up look at open industrial interoperability standards from the OPC Foundation, and examine how these efforts are establishing themselves as the future of the industry by enabling manufacturers to simply and securely “connect anything to anything.”

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: ABB, B&R, Intel®, and OPC Foundation

Our guests in this episode are:

  • Bernhard Eschermann, CTO of Process Automation at ABB
  • Stefan Schönegger, Head of the PLC and Industrial Communication Business at B&R
  • David McCall, Senior Director of Industrial Standards at Intel
  • Peter Lutz, who leads the Field Level Communications initiative at the OPC Foundation

Podcast Topics

Bernhard, Stefan, David, and Peter answer our questions about:

  • (4:20) Biggest challenges on the factory floor
  • (6:44) Overcoming industrial device communication complexities
  • (9:20) The pros and cons of today’s digital technologies
  • (12:34) The OPC Foundation’s role in industrial communication
  • (14:16) An update on the OPC UA FX standard
  • (15:44) How OPC Foundation’s open standards benefit manufacturers
  • (18:13) The importance of working with partners on open standards
  • (23:20) What’s next for the OPC Foundation

Related Content

For the latest innovations from ABB, B&R, Intel®, and OPC Foundation, follow them on Twitter at @ABBgroupnews, @BR_Automation, @Inteliot, and @OPCFoundation; and on LinkedIn at ABB, B&R Industrial Automation, Intel Internet of Things, and OPC Foundation.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’re talking about industrial device communications with a panel of experts from B&R Automation, ABB, the OPC Foundation, and Intel®. But before we jump into the conversation, let’s get to know our guests. Peter Lutz from the OPC Foundation, I’ll start with you. Please tell us more about yourself and the company.

Peter Lutz: Yeah, so, I’m Peter Lutz. I’m responsible for the so-called field-level communications initiative of the OPC Foundation. We take care of extensions for OPC UA, to bring OPC UA to the field level. A few words about the OPC Foundation. This is a nonprofit organization developing specifications for the industry. We have currently more than 880 members, including all the big names in IT and OT. And we are then submitting our specs to international-standardization bodies, such as IEC.

Christina Cardoza: Great. Looking forward to hearing more about the foundation and your efforts in the industrial space. Stefan Schönegger, from B&R Automation, please tell us more about yourself and B&R.

Stefan Schönegger: Yes, hello everyone. And thanks, Christina, for having me on this panel today. My name is Stefan Schönegger, and I’m heading the PLC and Industrial Communication Business at B&R. And B&R is a global company, part of the ABB group, and primarily serving the OEM machinery market with leading-edge automation solutions.

Christina Cardoza: Great. And we’ll move to ABB next. We have Bernhard Eschermann. Bernhard, welcome to the show.

Bernhard Eschermann: Yeah. Thanks a lot, Christina. I’m Bernhard Eschermann, and I’m with ABB, as you already explained. ABB, as a global engineering and technology company, probably doesn’t need a big introduction. I’m the CTO of Process Automation, which is one of the four global businesses of ABB, dealing with the automation, electrification, and digitalization of industries that produce stuff you don’t measure in number of pieces—that’s discrete automation, more what Stefan would deal with—but in kilograms, kilowatt-hours, cubic meters, and so on. I’ve been the CTO of ABB Process Automation for a number of years, also responsible for the process-automation products and platforms we have. And, on the group level in ABB, I lead the overall technology team as the so-called primus inter pares. The link to the OPC Foundation is that it obviously has a lot of member companies, and I represent ABB on its board.

Christina Cardoza: Great. And, last but not least, David McCall from Intel. David, thanks for joining us today.

David McCall: Thanks for having me. Lovely to be here. Yeah, I’m Senior Director of Industrial Standards at Intel. I think Intel needs very little introduction, although most people, or some people, may not be familiar with our involvement in the industrial space. We have a thriving industrial PC business. We’re also seeing a lot of transformation potential there as more compute gets applied to the industrial processes and we shift from being a more hardware-focused business to a more software-focused ecosystem. So I’m involved in part of that, and trying to make sure that the right standards are in place to enable that transformation, and also apply some of our expertise from other fields into the industrial space.

Christina Cardoza: Well, great to have you all joining today. The reason why we wanted to put together such a panel of experts is over the last couple of years the manufacturing and industrial space has transformed tremendously. We have more devices connected to the internet and to each other than ever before. More sensors and data coming. And so that has caused some complications for manufacturers. So, Stefan, I want to start with you and really set the stage for this conversation. What are the biggest challenges you and your company have been seeing on the factory floor today?

Stefan Schönegger: I think, first of all, it’s actually not us who face the challenges. It’s our customers. If we look at our customers’ sites—and we mentioned in the intro that we talk about a world that claims to be in the middle of an era of IoT—then basically our customers would argue that’s not yet the case. Just to take a couple of examples: we have devices in a manufacturing plant talking to each other, if at all, without considering security at all. And we know about the number of security threats we are facing in our world, and manufacturing specifically is heavily exposed to that. Not considering security, I would say, is really the first pain our customers are facing today.

Second, I mentioned that not every device is talking to each other. When we talk about IoT, we still have a tremendous amount of equipment that’s not yet exchanging data. That might simply be because the equipment doesn’t yet have the capability to exchange data. But, even more so, we have the issue of a very heterogeneous world of equipment coming from different vendors, and different vendors using different standards—primarily proprietary standards. And that’s, again, something that really hinders the introduction of advanced analytics, of data really being transferred and converted into value. OPC UA and TSN would really be the answer to those questions.

And last but not least, we also need to make sure that data can be interpreted without reading a 100- or 1,000-page handbook to know what’s actually behind the bits and bytes transferred over the wires. Here again, OPC UA and the semantics associated with it are really the answer our customers are looking for.

Christina Cardoza: Absolutely. And it sounds like there are a wealth of challenges and complexities in today’s smart factory and Industry 4.0. So, Bernhard, I’m wondering if you can expand on some of the challenges Stefan just mentioned, and explain why it has been so challenging or hard to get these devices to communicate to each other, to collect all the data, and to really add the security aspect into everything.

Bernhard Eschermann: Yeah, I guess a lot of that problem goes back to history, because in the past many of the communication standards for these so-called fieldbuses were developed by different companies, and none of the companies that developed a particular standard wanted to give it up. Even after moving to Ethernet as the predominant lower-layer protocol, there are still multiple standards for the layers above—for example, to provide deterministic periodic communication of real-time data.

And I always compare this with trying to build railway lines between two cities. Instead of building one big railway line, which would be the most efficient way to get fast trains from A to B, we’ve had multiple lines that are all slower. And with this new standard that the whole industry seems to embrace, now finally we should get to something that actually gets us this one, very fast railway line.

Another challenge is that we don’t have consistent information models for data when it moves from the instruments where something is measured, through the automation and to the edge, to central service and the cloud. And OPC UA actually provides a way to have consistent information models between all of these different layers so that we don’t have the translation effort and loss of information in between the different layers. So all of that is very important to making the world of communication change dramatically in the future.

Christina Cardoza: Absolutely. And you and Stefan have teased a little bit of what we’re going to get into today: the OPC Foundation, OPC UA, and other standards out there. But before we do, we’re talking about all these challenges and complexities, and some may wonder whether adding all of these devices carries more risk than the effort is worth. Yet these new technologies like the Internet of Things and AI are really benefiting the manufacturing space and the factory floor.

So, David, since Intel has been such a leader in some of these digital technologies that the manufacturing industry is adopting, I’m wondering if you could talk a little bit about the benefits they bring, but also the challenges of adding them to industrial communications.

David McCall: Sure. So, as we just heard, right now the industrial comms tend to mean wired. It’s deterministic because there’s some level of mission criticality involved, whether that’s tight timing requirements or a need for high reliability or both. And the networking layer is tightly tied into an industrial automation protocol that runs over it. So if you’ve got one automation protocol that means one network, and you can have trouble getting the data out from that little confined ecosystem.

You’ve got those two drivers you mentioned: the IoT in general, and then specifically AI and machine vision, which I would sort of put together. IoT generally means more devices, which means more connectivity. Not all of that is going to be mission critical—some of it will be, but not all of it. And not all of it’s going to be wired in the long run either. We expect to see wireless technologies, whether Wi-Fi or 5G, coming in initially in those non–mission critical areas, where you’re adding some safety requirements or some monitoring and just getting those deployed quickly and cheaply. Because running cable is expensive, that’s where wireless can bring a real benefit. But those wireless technologies are also adding their own deterministic capabilities, so in the longer term they will become part of the production line and those mission-critical control loops as well.

Then you’ve got AI and machine vision. Those are workloads that ingest just huge quantities of data. Most of them currently run in the data center—which, because of the deterministic challenges, means they’re maybe not timing critical—but we can see huge opportunities for applying those technologies to mission-critical, timing-critical loads. So you’re blending and blurring the lines a bit between what is traditionally thought of as IT technology and OT technology.

So, long term we’re looking at having a single deterministic network which spans both wired and wireless technologies, and the workloads can just take the appropriate path. You’ll deploy the right technologies in the right places, and this will all be a homogenous network that any protocol can take advantage of.

But that’s a huge amount of work. You’re absolutely right that it is a huge effort, and doing it for every single automation protocol out there just doesn’t seem feasible—which is one of the reasons we are trying to make sure this will all work over one network. At that lower layer we put in that huge amount of effort once, and then everybody can use it. So, big transitions are coming. You’re just starting to see the beginnings of it right now. But we are working diligently—the companies on this call and others—to make sure we’ve got the standards in place, and obviously that’s what I’m mostly working on, to support that transition across the whole ecosystem.

Christina Cardoza: Great. And the good news is there are efforts being made to address these challenges and complexities. So, Peter from the OPC Foundation, what can you tell us about the work that you’ve been doing with the organization and the standards playing a role to address these challenges?

Peter Lutz: Yeah, so Stefan and Bernhard already mentioned some of the key features and benefits of OPC UA, so maybe I can give a quick summary of what is so special about it. It’s what we call an industrial framework to support interoperability, and this includes built-in security mechanisms. It includes mechanisms to do information modeling, which then also drives the common semantics—semantics that are absolutely vendor neutral and vendor independent.

And we are actually working on extensions: on the one hand for enhancing cloud connectivity, but also for bringing OPC UA to the field for the different requirements we have there—for example, deterministic communication, motion control, instrumentation, and, not to forget, functional safety. And with these extensions we are able to establish OPC UA as a fully scalable industrial-communication solution, scaling all the way from the field to the cloud. And also, so to say, vice versa, or horizontally: between controllers, between field devices, between edge devices, or even between cloud systems.

Christina Cardoza: Great. And as part of the field-device challenges, I believe the relevant standard from the OPC Foundation is OPC UA FX. So, what can you tell us about the work going on in that standard and where it is today?

Peter Lutz: Right, so OPC UA FX is the term we use for the extensions to the OPC UA framework to cover the various use cases at the field level. This includes, for example, controller-to-controller communication, but also controller-to-field-device and field-device-to-field-device communication. And it is very important to understand that we are not creating a new generation of technology; we really are basing the solution on the existing OPC UA framework, so all the companies supporting OPC UA today can easily migrate or upgrade their products or applications to also support the extensions for the field level. And how we are doing this is we use different mappings to underlying transport protocols and physical layers. This is then very use-case specific. So, if we talk about communication to the cloud, we use MQTT. If we communicate in the field, we use, for example, UDP/IP.

Christina Cardoza: Great. And some of these standards manufacturers have already started leveraging. So, when we talk about all these challenges and complexities, Bernhard, I’m wondering if you can expand on the benefits your customers see once they start utilizing the OPC Foundation’s various standards, like OPC UA and the coming UA FX.

Bernhard Eschermann: Yeah. If we start on the lower level and we take, for example, OPC FX, using TSN as the lower-layer communication protocol, obviously you’ve got the benefit that you can mix various types of traffic—nondeterministic event-based traffic and deterministic real-time traffic—on a single communication medium. In our case that means, for example, that connecting the control room to cameras, sensors, and actuators in the field can all be done over the same communication medium, without actually requiring separate wiring.

If you look at connecting devices that need to be powered over the network, we’ve got a standard called APL, the advanced physical layer, coming that allows providing both communication and power to instruments over a special variant of Ethernet. And, again, we can have OPC UA on top of it.

And obviously if we have this OPC UA layer throughout the system, on different physical layers and different transport protocols, that means that the interpretation of data stays the same throughout the system, and you don’t need to have any translations. It also stays the same no matter which supplier the equipment involved comes from. And that obviously is a large benefit in terms of the engineering that would be needed otherwise. So it helps the customers because they can connect anything to anything. It helps us because it reduces the efforts on our side for developing all kinds of different mappings and adapters.

Christina Cardoza: That sounds great for customers, taking advantage of OPC UA; it solves a lot of the headache and challenges that they’re facing today. But I know, like David mentioned, this is a challenging effort, so I know it takes a lot of partners in the ecosystem to really make it possible, to really make standards like this make a dent in some of the issues we’re seeing today. So, Stefan, I’m wondering if you can talk about why you joined the OPC Foundation to work on these standards, and how you’re working with other partners in the ecosystem to address industrial communications.

Stefan Schönegger: I think, referring to what Bernhard has said, I can only echo that, and if you summarize all the benefits that have been mentioned, in my opinion there is only one way forward, and that is all about adopting open standards, enabling open ecosystems, and moving towards security. So, from that point of view, we could make the answer very simple: there was no other choice. And I would even say that for whoever does not take that path, whether a supplier of automation equipment like us, of sensors, of edge equipment, or of back-end and cloud systems that play in the field of manufacturing, not adopting open standards is a dead-end road. So, taking that, I think that to stay competitive, OPC UA FX, TSN, and OPC UA in general are really the only way to go.

If we look maybe a step further into the future, towards more autonomous systems, again, you can’t manually interpret data, and you can’t push data over gateways and still hope that the semantics haven’t been changed. Autonomous systems will require autonomously working analytical paradigms. And, again, you will end up with capabilities that are only provided by OPC UA and FX. So from this point of view it’s the only way to go.

Christina Cardoza: And it’s great to see a technology giant like Intel involved in the OPC Foundation and these standards, because I think it really helps others in the industry see OPC Foundation, OPC UA, and the OPC FX standards as legitimate standards that they should be also applying in their factory. So, David, I’m wondering if you can tell us more about Intel’s involvement in the foundation and the standards, and why you decided to help support this initiative.

David McCall: Sure. Well, I already talked about how we have this vision of more software-defined architecture coming into the industrial ecosystem, more of these advanced workloads. And we’ve just been hearing about some of the problems that we can see that could act as barriers to the adoption of those sorts of approaches. We saw OPC, and particularly the UA FX extension, turning what was existing UA technology into a true fieldbus as being a key way to overcome some of those barriers.

So we wanted to get involved and to make sure that that is a really strong, viable standard, not just at the technology level, but at a business level—certification, all the other things that go together to make a truly interoperable ecosystem. And we can then take that and show these use cases working in the real world. So we can put together demonstrators and be right there at the cutting edge, because that’s where we do see OPC UA FX as leading the way, and showing how you can put together the technologies that are going to be absolutely critical in the next five to ten years.

Christina Cardoza: And I would love to hear from Peter’s side what the importance has been for OPC Foundation to work with Intel and B&R and ABB, as well as how you get other companies to work together on promoting the standard.

Peter Lutz: Yeah, so it’s, I think, important to understand that the OPC Foundation elaborates the specifications together with member companies of the OPC Foundation. For this, we have different working groups set up, and here we heavily rely on the expertise and the know-how that comes in from automation players like B&R and ABB, but also technology providers such as Intel. The good thing is that we already have broad support from OT as well as IT companies as a given.

So the good thing is we don’t need to convince anyone, because there is broad support. All the big names in automation have committed to support OPC UA FX as extensions to OPC UA. That means that as soon as the specifications are available, the small and medium enterprises that build their products upon OPC UA will follow. But this is a given, because the whole industry is relying on OPC UA. So this is easy for us.

Christina Cardoza: So, what can we expect next from the OPC Foundation? What standards are you going to be working to bring out next? Or how are you going to work in the future to improve these standards even further?

Peter Lutz: So, certainly we are continuously improving our specs, and, as I said before, it’s a framework. So we are working on different elements, on different levels or layers, you could say. And I think Bernhard mentioned some of the key technologies that are really elementary for the further success of OPC UA in covering all the requirements.

So, one key technology is certainly TSN, because it provides us the deterministic transport, and it is also key for IT/OT convergence. But in addition, especially for the process industry, the combination with advanced physical layer Ethernet, APL, is highly relevant, because by bringing Ethernet to the field in these more critical applications in hazardous areas, we also open the door to bringing OPC UA and OPC UA FX to all these field devices.

But it’s difficult to highlight a specific extension or development. As I said, we are also working on cloud connectivity. So, overall we have a big framework and multiple working groups, and we are continuously improving and updating them to meet the needs of the industry.

Bernhard Eschermann: Don’t forget 5G, Peter, because obviously once we have a deterministic wireless-connectivity protocol, that’s of course also a good place for OPC.

Peter Lutz: Thanks for the hint, Bernhard. Absolutely, yeah. This is why we already demonstrated that OPC UA and OPC UA FX also work over wireless connectivity. So, I picked out the two most important ones for the moment. Wireless connectivity is for sure very important, not only 5G but also Wi-Fi 6, Wi-Fi 7 support. So this is the good thing with OPC, that it’s transport agnostic, and we can so easily adapt to all the different standards that are relevant for industrial communication.

Christina Cardoza: Well, I’m excited to see how else the OPC Foundation, B&R Automation, ABB, and Intel work to improve the manufacturing industry and really make manufacturing smarter—truly bring us into that Industry 4.0. And it seems like we’ve only just scratched the surface of this conversation—we could go in even deeper. But unfortunately we are running out of time. So, before we go, I just want to throw it back to each and every one of you for any final key thoughts or takeaways you want to leave our listeners with today, as well as where you expect the future of industrial communications to go and how your organization will be a part of it. So, David, I will start with you.

David McCall: I think the—well, Peter made the point that OPC has really succeeded in making itself the de facto future for the industry. The only question now is how quickly that transition is going to happen. I think if you go back a few years there were maybe some questions about whether some of the bigger players would really embrace the OPC UA FX direction. I think that’s very clearly changed, and we see that all of the major vendors are looking to support their own existing protocols, plus UA FX. So UA FX will gradually become sort of the lingua franca of IoT right down at the control level. It already is mostly that going up to the cloud. All the major cloud vendors are standardizing on OPC UA for that sort of communication. So, yeah, the network, I think, is going to be shared by multiple protocols, but OPC UA FX is going to be right there at the cutting edge. And then gradually, particularly in greenfield sites and then more and more across brownfield sites, it’s going to become the de facto communication protocol.

Christina Cardoza: And that’s probably a whole other conversation in itself, the greenfield versus brownfield efforts. But, Stefan, before we go, is there anything else you wanted to add or any final key takeaways you wanted to leave listeners with?

Stefan Schönegger: Well, yes. Let me try again to put myself in the customers’ point of view, and I think if all you would like to do is run the factory, then the last thing you want to do is talk about details such as industrial communications, and, “How can I connect one device with another from different vendors?” So, from that point of view, I can only encourage the market, the operators of factories and the producers of equipment and assets, to really start adopting OPC UA, because that is the answer. If you just want to focus on the efficiency, the output, and maybe the adaptability of your factory and of your production line, then OPC UA is the answer to never again having to spend any thought or concern on industrial communication.

Christina Cardoza: Great. And, Bernhard, where do you think this is all going? How is ABB going to be a part of it? Or what do you hope the listeners get out of this conversation today?

Bernhard Eschermann: Well, there’s no question that ABB will be a part of it. We’ve been strongly driving the whole effort. But I guess what people should actually look at is everybody talks about the world being driven by data, and the main thing this is about is that any data created anywhere in an industrial plant shall be available with the needed quality of service to anyplace where it’s needed—and that basically without communication engineering.

And what we should actually think much more about is how we can get value out of the wealth of data that is available in the various factories and plants around the world in order to improve efficiency; to improve energy efficiency, which is of course nowadays a very important question; and to improve the productivity of these plants, applying all of these techniques, like machine learning on all of this data, that typically run not in the instrument itself but possibly at the edge or in the cloud.

And so I’m convinced that the world will benefit a lot, not just from the communication standard by itself, but from being able to make much more use of the data that is already available. And as a final thought, there are lots of places where competition makes sense—for example, about how to use that data to create useful insights. But competition, in my view, doesn’t make sense in developing competing communication standards that are all doing the same thing. That’s not a useful way of having competition. The competition should be on more valuable things than that.

Christina Cardoza: Absolutely. And it’s great to see all of you come together on this standard to make it possible and to eliminate those competing efforts out there. And since the large conversation has been around OPC Foundation and the standards you’re working on, Peter, I will end with you. Any final key thoughts or takeaways you want to leave us with today?

Peter Lutz: Yeah, a lot has been said already, but maybe just from my personal perspective, I’m absolutely convinced that OPC UA, especially with the extensions we are currently working on for the field level, will become the dominating industrial communication standard, also for the field level. And for me there are two aspects to this. On the one hand, I believe that with the framework, with the strict layering, with the flexibility, and with all the features that OPC UA is providing, it is from a technical perspective the future-proof solution, and the only solution that is actually, as I mentioned before, fully scaling from the field to the cloud, which no other communication solution can provide today.

But the second aspect is the broad support, as we have all the big players behind it. It’s becoming the standard just because of that broad acceptance, and because OPC UA was always considered to be what I typically call the neutral ground: not competing, as Bernhard mentioned, on the communication interfaces and communication solutions, but taking competition out of that. And I think this is finally the success formula for a broad adoption of OPC UA across all the different levels.

Christina Cardoza: So, for our listeners today and any organizations who want to learn more about the OPC Foundation, get involved, or learn more about the standards going on within the organization, where should they look to get that information? How can they get involved?

Peter Lutz: So, the OPC Foundation homepage is certainly an excellent entry point to learn more about the technology and all the activities going on. But membership is certainly important, because as a member you can become more closely involved, even signing up for the different working groups. And just for learning about OPC UA and its different flavors and use cases, the OPC Foundation’s YouTube channel is, I think, an excellent entry point too. Listening to the experts explain the technical concepts and the benefits is for sure helpful.

Christina Cardoza: Absolutely. Well, with that, I want to thank all of you for joining the podcast today, and thanks to our listeners for tuning in. If you liked this episode, please subscribe, rate, review, like, all of the above on your favorite streaming platform. And, until next time, this has been the IoT chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Video Conferencing in the Modern Workplace

Video conferencing is more than a ubiquitous feature of the modern workplace. It’s the future of work.

Adoption of video conferencing equipment and technology was accelerated by the events of the past two years, but now the trend has a momentum of its own. The global web conferencing market, valued at $12.58 billion in 2020, is now projected to grow to over $19 billion by 2025.

Yet as with any emerging technology, growth brings increased expectations—and new challenges.

Today’s enterprise users no longer tolerate lags or mediocre audiovisual quality. At the same time, they’re demanding higher video resolution than ever before: from 2K at a minimum all the way up to 8K in some cases.

In addition, as video conferences become more prevalent, they’re growing more complex. For example, aside from traditional cameras, microphones and large screens, conference rooms today often have access to work PCs, electronic whiteboards, and other devices.

Businesses also want to use advances in AI to make their virtual meetings smarter. Once a novelty, AI-enabled video conferencing features like speech recognition, automatic video and audio upscaling, audio-to-text transcription, and simultaneous translation are now expected.

“A modern, intelligent meeting requires a huge amount of parallel #ProcessingPower and integrated support for #AI applications in order for everything to run smoothly.” – Kevin Peng, Shenzhen Decenta Technology Co., LTD, via @insightdottech

Why Better Solutions Are Needed

These developments in the market represent a serious technical challenge for video conferencing system manufacturers, according to Kevin Peng, Deputy General Manager of Shenzhen Decenta Technology Co., LTD, an original design manufacturer (ODM) specializing in custom motherboards and hardware reference designs for intelligent video conferencing solutions:

“A modern, intelligent meeting requires a huge amount of parallel processing power and integrated support for AI applications in order for everything to run smoothly.”

In addition, the increasing complexity and diversity of conference scenarios in the modern workplace demand a more integrated approach than in the past. “The old way of doing things—configuring each desired functionality separately based on what the customer asks for—simply isn’t feasible anymore,” says Peng.

To meet these challenges, ODMs like Decenta, an embedded hardware solutions company, have turned to next-generation CPUs and software development suites that support AI applications. These technologies are helping ODMs develop reference designs and custom hardware that allow video conferencing manufacturers to innovate the powerful, integrated solutions that today’s enterprises want.

Technology Engineered for the Modern Workplace

To address the needs of enterprises and video conferencing system manufacturers, Decenta decided to partner with Intel®.

“Our technology partnership with Intel was a natural fit,” Peng explains, “since Intel processors excel at computing and processing under high workloads and because they offer excellent support for AI applications as well.”

Decenta uses Intel processors and software development tools to address the special challenges of video conferencing in the modern workplace:

  • 11th Generation Intel® Core processors have a robust graphics architecture and powerful multi-threaded processing capabilities—ideal for use in high-performance video conferencing solutions.
  • Integrated Intel® Iris® Xe graphics enable ultra-high-definition video up to 8K.
  • The built-in Intel® Gaussian Neural Accelerator 2.0 supports AI applications (especially those related to audio and video quality).
  • The Intel® Distribution of OpenVINO Toolkit serves as a development framework to address the increasing demand for AI applications in video conferencing scenarios.
  • The Intel® Media SDK includes software development tools for improving audiovisual performance.

Using these resources, Decenta has developed a line of custom motherboards, video conferencing terminals and docks, and reference designs that can be used by video conferencing manufacturers to build high-performance, all-in-one solutions for their enterprise end users.

From LED Displays to Integrated Solutions: A Case Study

Case in point is Decenta’s collaboration with Leyard, an LED manufacturer that leads the market in small-pitch LED displays.

Leyard had the know-how and the existing product line to produce high-resolution video conferencing displays. But turning that promising start into a complete video conferencing solution was a challenge.

Working with Decenta, Leyard was able to combine their own LED displays with Decenta’s multimedia and expansion terminals. This resulted in a complete intelligent video conferencing system ideal for small and medium-sized conference rooms.

The solution enables high-quality audiovisual transmission over long distances efficiently and cost effectively. It also enriches the meeting experience for participants by facilitating connections with their PCs and mobile devices—a functionality that allows for real-time sharing of relevant documents and simultaneous translation.

“This technology is an innovation engine,” says Peng, “because it allows manufacturers to develop intelligent video conferencing products and bring them to market quickly and efficiently.”

Benefits for End Users Now and in the Future

Aside from solutions manufacturers and systems integrators, there is one other group that benefits from the work of ODMs like Decenta: the end users themselves.

In terms of user experience, advanced video conferencing solutions provide smoother, clearer, higher-definition audio and video during meetings. They also support AI-enabled smart meeting applications like intelligent meeting minutes, interactive whiteboards, two-way annotation, and so forth.

These systems also offer flexibility. They can be used “plug and play” with a wide range of third-party cameras, speakers, and other AV equipment. They also work with many different cloud-based video conferencing software solutions.

And on a very practical level, they’re just easier to deploy. A unified dock, for example, greatly simplifies the work of wiring up various devices ahead of a meeting. All-in-one solutions are compact, allowing enterprise users to deploy them in out-of-the-way areas of conference rooms.

All of this adds up to a better user experience and improved productivity without the headaches of legacy systems: the seamless solution that the video conferencing market has been asking for.

“Better video conferencing solutions will drive the digital transformation of the office,” says Peng, “and open up possibilities in other fields too, for example smart education and telehealth applications.”

The mass adoption of video conferencing has had its share of hiccups and frustrations. But what started off as a necessity in the darkest days of the pandemic has evolved into a better way for people to work together. The future looks bright for advanced video conferencing solutions, both in the enterprise and beyond.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Powering Sustainable Manufacturing with Edge AI

For Industry 4.0, sustainability is more than just a lofty ideal. It’s a legal and financial responsibility—one that needs to be measured as precisely as any other KPI.

On the regulatory side, manufacturers are under increasing pressure to meet carbon emissions targets set by governments. Investors also pay greater attention to companies’ environmental, social, and governance (ESG) practices, of which sustainable energy management is an integral part.

In an era marked by growing concern over climate change, the trend has broad support. But it has also resulted in major challenges for industry.

For one thing, it’s hard to find new ways to increase energy efficiency, especially since sustainability goals often need to be aggressive. And beyond this, capturing and reporting on sustainability data—both to regulators and to shareholders—often entails a tremendous amount of work.

“For smaller manufacturers, basic compliance presents a problem,” says Julia Chih, Product Manager for Advantech, a Taiwan-based manufacturer of smart products. “Larger businesses have more resources, but they often lack the organizational knowledge to automate data collection and reporting—and paying a high-priced consultant to do the work isn’t an attractive option.”

It’s a difficult situation for manufacturers. But AI-enabled smart factory solutions may provide the answer: a system that offers rapid ROI and is flexible enough to be future-proof against changes in the regulatory and reporting landscape.

The power of #SmartFactory systems comes from their design. There are multiple layers of #technology, each one playing a specialized role in the solution as a whole. @Advantech_USA via @insightdottech

IoT, Edge AI, and Cloud: A Multilayered Solution

The power of smart factory systems comes from their design. There are multiple layers of edge AI technology and connected devices, each one playing a specialized role in the solution.

On the factory floor, IoT devices and edge AI use deep learning to handle data acquisition and enable real-time process optimization.

Sensors and smart meters are deployed as needed to collect data from industrial machinery. They report on data points like performance, power consumption, temperature, and water usage. This provides visibility into what is happening in the factory and is the essential first step in identifying waste and gathering the raw data required for reporting. Edge AI is used to improve efficiency by processing the sensor data in real time and automatically optimizing the production line through the factory’s SCADA systems (Video 1).

Video 1. IoT and edge AI are used in sustainable manufacturing to visualize and improve energy management and overall equipment effectiveness. (Source: Advantech)
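To make the edge-optimization loop concrete, here is a minimal, purely illustrative sketch (not Advantech’s actual algorithm): each smart-meter reading is compared against a rolling baseline, and when consumption drifts past a budget, an action string is produced that a SCADA layer could act on. The class name, window size, and threshold are all hypothetical.

```python
from collections import deque
from statistics import mean

class EnergyOptimizer:
    """Rolling-baseline monitor for smart-meter power readings (kW)."""
    def __init__(self, window=12, threshold=1.2):
        self.readings = deque(maxlen=window)  # most recent power samples
        self.threshold = threshold            # allowed ratio above baseline

    def ingest(self, kw):
        """Return an action hint for the SCADA layer, or None if within budget."""
        if len(self.readings) == self.readings.maxlen:
            budget = mean(self.readings) * self.threshold
            if kw > budget:
                self.readings.append(kw)
                return f"throttle: {kw:.1f} kW exceeds {budget:.1f} kW budget"
        self.readings.append(kw)
        return None

# A short simulated trace: steady load, one spike, then recovery.
opt = EnergyOptimizer(window=4, threshold=1.5)
actions = [opt.ingest(kw) for kw in [10, 11, 10, 9, 10, 22, 10]]
```

In practice the baseline model would be a trained deep-learning model rather than a rolling mean, but the shape of the loop, ingest, compare, and act in real time at the edge, is the same.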

Behind the scenes, data is sent to the cloud for further processing. At this level, business operators can use the collected data to generate regulatory and ESG reports. The mass of information produced by a smart factory can also be used here by big data and AI applications to extract additional insights and develop longer-term optimization strategies.

This modular, layered architecture means that edge AI use cases for sustainable smart manufacturing solutions are inherently flexible. Factories can configure such solutions to meet their operational and reporting needs. They can also adjust them as needed if sustainability targets or reporting requirements change in the future. Henry Chen, Advantech’s Business Development Manager, says that Intel® technology has been particularly helpful in this regard:

“Intel processors excel at both edge AI applications and heavier server workloads. The capabilities are well defined and documented, which makes it easy for us to choose the correct processor to meet the customer’s specifications no matter the scenario.”

Sustainable Manufacturing in Mexico

Advantech’s deployment at a Foxconn facility in Mexico shows how smart factory systems can bring about dramatic improvements in sustainability with minimal capital investment.

Foxconn needed to comply with the local environmental regulations. It also wanted to meet an internal goal of reducing carbon emissions across their manufacturing sites worldwide.

As a global manufacturer, Foxconn was looking for technology that would both deliver the results it needed in Mexico and could be used at factories in other countries. Ideally, they wanted a solution to improve production processes that could be managed via a standardized, centralized system.

Advantech worked with Foxconn to install edge AI devices, smart sensors, and power meters throughout their Mexico facility. The companies worked together to set up an always-on data collection system and connected it to a back-end that could be monitored remotely from a central location. They also implemented an energy management optimization strategy.

The results were striking. There was an immediate improvement in energy efficiency, representing an 8-13% cost savings on average. In addition, Foxconn discovered that the new visibility into energy consumption could be used to develop a capacity forecasting plan, helping the company avoid overage penalties with their utility and reap additional savings in the long term.
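The capacity-forecasting idea can be approximated in a few lines. This is purely an illustration under assumed numbers (the actual Foxconn system is not described in detail): forecast next month’s peak demand from recent peaks and flag when it approaches the contracted utility capacity.

```python
from statistics import mean

def forecast_peak(monthly_peaks_kw, window=3):
    """Forecast next month's peak demand as the mean of the recent peaks."""
    return mean(monthly_peaks_kw[-window:])

def overage_risk(monthly_peaks_kw, contracted_kw, margin=0.9):
    """Flag when the forecast crosses a safety margin of contracted capacity."""
    forecast = forecast_peak(monthly_peaks_kw)
    return forecast > contracted_kw * margin, forecast

# Hypothetical data: peak demand has been creeping upward for five months.
peaks = [410, 425, 440, 455, 470]  # observed monthly peak demand, kW
at_risk, forecast = overage_risk(peaks, contracted_kw=500)
```

A real deployment would use richer models and per-meter data, but even this simple check shows how visibility into consumption translates into avoided overage penalties.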

AI and the Future of Sustainability

The short-term results that a smart factory solution can produce are impressive, and the promise of rapid efficiency improvements will no doubt drive widespread adoption in the manufacturing sector. But in the future, the ability to capture data from the factory and mine it for business insight may open even greater opportunities.

For instance, the IoT and AI in smart factory settings can be used for prognostics and health management (PHM). PHM, the practice of monitoring the health of machinery and performing proactive maintenance to prevent unplanned shutdowns, has a clear business case. “If you can repair a machine before it fails,” says Chen, “you reduce downtime—something that is extremely costly for manufacturers.”
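As a simplified illustration of PHM (again, not Advantech’s actual implementation), a monitor can learn a baseline for a machine-health signal such as vibration and request maintenance when a reading drifts several standard deviations above it:

```python
from statistics import mean, stdev

def needs_maintenance(baseline, reading, k=3.0):
    """Flag a reading more than k standard deviations above the healthy baseline."""
    return reading > mean(baseline) + k * stdev(baseline)

# Hypothetical vibration samples (mm/s) recorded while the machine was healthy.
healthy = [1.0, 1.1, 0.9, 1.05, 0.95]

ok = needs_maintenance(healthy, 1.2)    # within normal variation
worn = needs_maintenance(healthy, 2.5)  # drift suggests proactive repair
```

Production PHM systems layer trained models and failure-mode knowledge on top, but the core idea is the same: detect deviation from a healthy baseline before it becomes a shutdown.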

And PHM is just one example of the wider potential of smart factories. As the digital transformation of industry accelerates, manufacturers will continue to find innovative uses for the technology. For this reason, Advantech has decided to open-source their AI platform. “The future is going to require flexibility and openness,” says Chen, “because companies will want to build their own applications to take advantage of the opportunities offered by IoT and AI in the factory.”

In the coming decade, this should lead to a richer, more comprehensive model of sustainability—one that delivers long-term value for both manufacturers and communities alike.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on October 13, 2022.

ITS Puts Smart Cities in High Gear with 5G and Edge AI

Remarkably, Intelligent Transportation Systems (ITS) got their start back in the 1960s when the U.S. Federal Highway Administration began developing the Electronic Route Guidance System, or ERGS. The system used a two-way device on vehicles, intersection hardware, and some of the first computerized central IT systems to analyze weather and traffic conditions, then provide motorists with the best directions to their destination.

Fast-forward more than 50 years, and ITS are being positioned as the backbone of smart city infrastructure, delivering traffic management, commuter notification, public safety, and countless other services. They’re even being designed to accommodate autonomous vehicle travel by natively supporting vehicle-to-everything (V2X) communications that connect over wireless radio access networks (RANs).

Of course, supporting these services means the existing transportation management infrastructure needs an overhaul. For instance, you can’t analyze and act on dynamic traffic flows in time to avoid traffic jams without edge AI, and you can’t run edge AI without sufficient edge computing. Autonomous vehicles require V2X connectivity to communicate with one another and roadside traffic management systems, but you can’t support V2X communications without an ultra-reliable, low-latency network like 5G.

Traditional ITS can’t support either.

“The platform needs to provide a seamless infrastructure for deploying applications that require low latency, high-performance compute, and high reliability at the edge in smart city or connected highway environments,” says Charo Sanchez, Global Alliance Manager at Advantech, a global leader in IoT and networking platforms. “When we talk about edge here, we’re talking about the far edge—on the highway or mounted to a traffic light—the closest to the traffic agent, vehicle, or pedestrian that you can get.”

Sanchez adds, “Apart from the connectivity, you also need to integrate the AI part where you can extract relevant data, process that data, and act on it on site.”

Smart RSUs: Bringing it Together at the ITS Edge

To help bring ITS infrastructure in line with today’s smart transportation and smart city requirements, Advantech collaborated with Capgemini and Intel® to create the 5G Smart Road Side Unit (RSU).

The 5G Smart RSU is a multi-access edge computing (MEC) platform deployable as a highly localized, disaggregated node that brings 5G and AI capabilities to the far edge of ITS networks. Built on hyperconverged, Intel-based Advantech SKY-8000 Servers and the Capgemini ENSCONCE framework, it also saves ITS engineers from having to build those complex edge nodes from the ground up.

Another advantage of the 5G Smart RSU is the cloud-native environment it provides for microservice development, deployment, and delivery. This is streamlined by the Intel® Smart Edge Open software toolkit, an open-source development suite that provides plugins, integration recipes, and other components to help ITS developers merge IoT workloads with the 5G wireless infrastructure.

It even contains reference implementations to further accelerate development and delivery of connected edge applications.

“Intel Smart Edge Open helps accelerate time to market and streamline complex network workloads by using reference architectures optimized for Intel hardware,” Sanchez explains. “It provides basic developer tools for integrating common multi-access edge computing use cases.”

From that foundation, the 5G Smart RSU gives developers access to a range of additional tools and capabilities for next-gen ITS rollouts. At Mobile World Congress in February/March 2022, the three partners demonstrated a pedestrian safety application built with the Intel® OpenVINO toolkit visual computing SDK integrated in the Smart RSU software stack. In it, video of an unexpected street crossing is analyzed by AI algorithms running locally on the Smart RSU, then real-time alerts are issued to nearby motorists and pedestrians to help prevent a potential accident.
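The decision step at the heart of such an application can be sketched in a few lines. Assume an upstream vision model has already produced labeled detections per video frame; the code below shows only a hypothetical roadside-unit check that decides whether a confident pedestrian detection falls inside a watched crossing zone and an alert should be broadcast. The names and threshold are illustrative, not part of the Advantech or Capgemini stack.

```python
def in_zone(box, zone):
    """Check whether a detection's center falls inside the watch zone.
    Both box and zone are (x_min, y_min, x_max, y_max) in pixels."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return zone[0] <= cx <= zone[2] and zone[1] <= cy <= zone[3]

def should_alert(detections, crossing_zone, min_confidence=0.6):
    """Return True if any sufficiently confident pedestrian detection
    lies inside the unexpected-crossing zone."""
    return any(
        label == "pedestrian"
        and conf >= min_confidence
        and in_zone(box, crossing_zone)
        for label, conf, box in detections
    )
```

Because the check runs locally on the RSU, the alert can go out over the 5G link in the same frame interval, rather than waiting on a round trip to a distant cloud.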

Going further, the partners showed how information captured by all the sensors connected to a Smart RSU can be used to create digital twins that help continuously evaluate and improve current infrastructure as well as test new technologies prior to full investment.

The Evolution of Smart City ITS

Roadways are the arteries of a smart city, and they must be optimized to keep urban centers humming. This means the ITS that manage them must evolve to support capabilities like 5G and edge AI—a transition that’s made seamless thanks to full-stack partnerships and development solutions like Smart Edge Open.

Hyperconverged endpoints like the 5G Smart RSU offer a path forward for smart city transportation infrastructure, whether it’s decades old or already supports features like V2X functionality via an ad hoc assortment of cobbled-together networks and systems.

“This is how to consolidate all those functions in one single platform, virtualizing the roadside unit, and running it on standard x86 hardware,” Sanchez says. “That streamlines how you manage the whole solution and gives extra room for innovation and functionality to deliver additional traffic services. This is an evolution of roadside units with the Advantech edge server operating as a micro datacenter at the ITS edge.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech

Intel® Innovation 2022: Democratizing AI for Everyone

Before last week, the last Intel® event I attended in person was the Intel® Developer Forum (IDF) in 2016. It’s hard to believe, but the company was just rolling out its 7th Generation Intel® Core processors back then.

Now the company is on its 12th Generation and 13th Generation Intel® Core processors, the latter just announced at Intel® Innovation 2022. The latest 12th Gen Intel® Core processors feature a new performance hybrid architecture, enhanced AI and vision capabilities, expanded bandwidth, and fast memory—perfect for IoT applications. The 13th Gen Intel® Core processors continue the hybrid Performance and Efficiency core architecture of the previous generation but add optimizations that yield 15% better single-threaded and 41% higher multi-threaded performance—great for gaming, streaming, and recording.

And that was just the tip of the iceberg when it came to Innovation 2022 announcements. Beneath was a whole lot of AI innovation, headlined by Intel® Geti, which Intel CEO Pat Gelsinger announced in his keynote.

Intel Geti is a new AI platform designed to streamline the time-consuming process of dataset labeling by offering an intuitive environment and annotation tools that let computer vision model training commence with as few as 20 images. The platform is accessible to data scientists, domain experts, and AI developers alike, who can leverage it to output production-ready deep-learning models in formats like PyTorch, TensorFlow, or as neural networks that can be optimized by the popular Intel® Distribution of OpenVINO Toolkit.

Democratizing AI, Live at the Edge

But Geti wasn’t the only example of AI innovation at the event. Many Intel partners showcased their innovative edge AI solutions built on OpenVINO. This included PreciTaste, whose object recognition software helps Chipotle monitor food stock so there’s always enough on hand.

Elsewhere at the edge, Eigen Innovations demonstrated how its OpenVINO-based software stack revolutionizes automated optical inspection (AOI) for manufacturers by integrating real-time control system and environmental data with AI inferences. And while a picture archive and communications system (PACS) itself isn’t unique, JelloX showed how its MetaLite PACS with Intel-powered AI can integrate with radiology and other hospital systems to create AI-enabled imaging and digital pathology platforms.

While those use cases are on the more advanced end of the spectrum, meldCX has recognized the need to educate the next generation of technologists on AI fundamentals. It showcased how this could work at Innovation 2022, where an edge AI object recognition stack identified the Lego bricks that made up a rover and instructed users how to assemble them properly. From there, a video game let participants drive a digital twin of the rover around Mars, where it could encounter and learn about real rovers like Perseverance.

But Innovation 2022 exhibitions didn’t just include AI for end users. The Deci AI neural network optimizer, for example, doubles down on OpenVINO to improve inferencing performance by 4x compared to the toolkit alone. MindsDB, on the other hand, presented OpenVINO in a non-imaging use case by folding it into its in-database machine learning platform, where developers can create applications like a real estate cost estimator on the fly.

I was also able to catch awesome demos from Awiros and Icuro, as well as a hands-on OpenVINO walkthrough from Intel AI Evangelist Paula Ramos. Awiros showed how it’s working to simplify the model creation and development journey with a single integrated app marketplace, while Icuro featured how OpenVINO and Model Zoo can be used to fast-track deployment of intelligent systems from cloud development to actuation.

AI Innovation for Everyone

Regardless of where you are in the development lifecycle, where your application sits on the edge-to-cloud continuum, or even your skill set, technology is evolving to simplify and accelerate your AI experience.

Whether your sweet spot is the new Intel Geti AI democratization platform, the OpenVINO edge inferencing optimizer, or the hardware it runs on, Intel Innovation 2022 covered it from end to end.

And if you missed it, select event content is available on demand now.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Harnessing AI for Good to Improve Nutritional Planning

When families can’t provide the around-the-clock care and comfort an aging parent or loved one needs, they rely on aged-care facilities for help. But for all the great care these facilities strive to deliver, they’ve historically faced challenges with nutritional planning.

For example, a recent report found that more than two-thirds of residents in Australia’s aged-care facilities experience malnutrition. In addition, the report found 80% steadily lose weight from the day they arrive at these facilities. And a separate U.S. study found that up to half of residents in skilled-care facilities, such as nursing and assisted-living facilities, experience malnutrition.

Closing Nutritional Gaps for Seniors

Understanding what seniors eat, how much of it, and the nutritional value of what’s on their plates is critical for aged-care facilities to better meet seniors’ dietary needs and help them avoid issues like malnutrition and high cholesterol.

A multinational catering company experienced this challenge firsthand when supplying food to aged-care facilities. The company had an individual nutritional plan for every resident, but it occasionally received complaints from residents’ family members about food quality. It wanted to not only serve its customers better but put families at ease.

The company turned to AerMeal, a scanning device with machine learning and computer vision capabilities designed to address these challenges by using AI for good. The solution gathers nutritional data and images of what residents are eating and shares them with family members—reassuring them that their loved ones are eating healthy meals.

“Just that alone has made a big difference. It’s giving comfort to the families,” says Abbas Bigdeli, CEO of AerVision Technologies, a company based in New South Wales, Australia that manufactures AerMeal.

AI for Good: How AerMeal Works

AerVision is more commonly known for developing custom biometric, AI, and IoT solutions for the security industry, but today it is applying similar technology to improve nutritional planning. AerMeal uses AI-driven inference and image recognition technology to scan the contents of an individual’s plate before and after their meal, gathering nutritional information that supports personalized diets for aged-care residents and other individuals in large food distribution settings, like school cafeterias.

Each plate is outfitted with a dishwasher-safe radio frequency identification (RFID) chip. When staff in an aged-care facility serve residents their meals, they use AerMeal to scan the plate and then assign it to a specific resident. In a cafeteria setting where residents may pick up their food, staff also can put it on the scanner and use a touchscreen to assign it to the resident before that person takes their plate. When the resident is done eating, their plate is scanned again to collect data on how much food they consumed.

Bigdeli says AerMeal, which has integrated IoT capabilities, a WiFi-enabled 4G modem, and an RFID reader, doesn’t just scan and take pictures of an individual’s plate. In under a second, it also captures volumetric information.

“Using that information and doing clever computer vision and AI on the edge, we can then determine what type of ingredients were on there, whether it was a steak, asparagus, or broccoli. All of those are recognized on the plate,” he says. “With the volumetric information, we measure the volume. And from that—because we know, for example, it was this many cubic centimeters of broccoli or this many cubic centimeters of steak or chicken breast—we can then calculate the nutritional value with very high accuracy.”
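Bigdeli’s description suggests a straightforward calculation once per-ingredient volumes are known: subtract the post-meal volume from the pre-meal volume for each recognized ingredient, then multiply by a per-cubic-centimeter nutritional factor. The sketch below illustrates the idea with made-up calorie densities; AerVision’s actual model and figures are not public.

```python
# Hypothetical calories per cubic centimeter; a real system would
# pull these factors from a nutritional database.
CALORIES_PER_CC = {"broccoli": 0.35, "steak": 1.55, "rice": 1.30}

def consumed_calories(before_cc, after_cc):
    """Estimate calories eaten from per-ingredient volume scans taken
    before and after the meal (volumes in cubic centimeters)."""
    total = 0.0
    for ingredient, start in before_cc.items():
        # Clamp at zero in case the post-meal scan reads slightly high.
        eaten = max(0.0, start - after_cc.get(ingredient, 0.0))
        total += eaten * CALORIES_PER_CC.get(ingredient, 0.0)
    return round(total, 1)
```

The same before-and-after delta works for any nutrient with a known density factor, which is what lets the device report protein, fat, or carbohydrate intake alongside calories.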

It does this with the power of three Intel® technologies—RealSense™, Core™ M processors, and OpenVINO™. RealSense, a computer vision technology, allows AerMeal to capture 3D food images. The Core M processors provide the compute and data-processing power AerMeal needs, while OpenVINO provides image inference capabilities on the edge.

Together, these technologies allow AerMeal to deliver deep nutritional insights to its customers.

Supporting Better Health with Better Nutritional Planning

With AerMeal, AerVision hopes to democratize nutritional data and empower individuals with information they can use to improve their health.

While aged-care providers primarily use the solution today, Bigdeli says AerMeal has the potential to be a go-to kitchen appliance for health-conscious consumers.

“This could become an appliance where they—as long as it’s connected with the cloud and they have an app on their phone—can monitor what they’re eating,” he says, adding that this data then could be linked to other health solutions to help individuals better manage chronic conditions like diabetes or high cholesterol.

AerMeal also may advance sustainability and reduce food waste in settings such as buffets and company and school cafeterias, allowing these facilities to make more data-driven purchasing decisions and better manage their supply chains.

“In any industry where they serve food and they care about waste, but also care about personalized service, this solution could be utilized,” Bigdeli says.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI in Manufacturing: The Key to Data-Driven Cultures

As Industry 4.0 transformations take off, manufacturers need to create a data-driven culture. With more devices connected and outfitted with sensors, there is an influx of valuable data waiting to be uncovered. But to properly collect, store, and analyze data to enable business decisions, artificial intelligence (AI) and machine learning (ML) models are key.

Unfortunately, developing and integrating models into production and overall operations can be a complicated task. For starters, not everyone has the knowledge and skill set to apply advanced models to their workflow—resulting in users having to wait for data science teams to analyze and interpret data for them.

And that’s when and if companies have data scientists available. Many small and midsize manufacturers don’t. Instead, they rely on AI technology providers to deliver the new features and functions they need—which is also time-consuming and can take up to a year.

If more workers had access to the skills and tools, manufacturers could start rapidly adding new products and features or even lower energy consumption.

At least that’s the idea behind Taiwan-based company Profet AI, which is working to democratize AI in manufacturing with its Auto ML solution designed to make training ML models as easy as creating an Excel spreadsheet.

“We provide a three-hour training session that teaches users basic features they can start applying to everyday tasks,” says Marc Wu, Business Development Director of Profet AI, a virtual data scientist company.

Leveraging Digital Data

Profet AI’s Auto ML is a no-code AI platform designed for rapid model development. The company integrates with the Intel® Distribution of OpenVINO™ Toolkit to accelerate its computations. This allows domain users at small and midsize companies to leverage it without the help of a data science team. For large companies with data scientists, the platform can act as another member of the team.

“When the domain user wants to solve a problem, they still collect the data and upload it to our platform. Our platform will automatically calculate the data and do some data cleaning and modeling, and then compare the models and come up with the best model,” Wu says.

The process is similar to working with data scientists. “But the difference is the domain user can do it by themselves,” he says.
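The workflow Wu describes—train several candidate models automatically, score them against held-out data, and keep the winner—can be sketched with stdlib Python. The two candidate "models" here are deliberately trivial stand-ins (a training-average predictor and a nearest-neighbor lookup); Profet AI's actual algorithms are proprietary.

```python
from statistics import mean

def mean_model(train_x, train_y):
    """Baseline: always predict the training average."""
    avg = mean(train_y)
    return lambda x: avg

def nearest_model(train_x, train_y):
    """Predict the target of the closest training input."""
    pairs = list(zip(train_x, train_y))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

def pick_best(train_x, train_y, val_x, val_y,
              candidates=(mean_model, nearest_model)):
    """Train every candidate, score each on held-out data by mean
    absolute error, and return (best_name, best_predictor)."""
    def mae(predict):
        return mean(abs(predict(x) - y) for x, y in zip(val_x, val_y))
    fitted = {c.__name__: c(train_x, train_y) for c in candidates}
    best = min(fitted, key=lambda name: mae(fitted[name]))
    return best, fitted[best]
```

The domain user only supplies the data; which model wins is decided by the validation score, which is the essence of the "compare the models and come up with the best model" step.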

A big advantage of Profet AI’s platform is speed, according to Wu. “In the traditional way, if you have to pass a project to the data team, it usually takes about two to three months to get the result back. Via our platform, it usually takes about one week,” he explains.

If a manufacturer wants to launch a new product, the R&D team can feed data to the model to determine the best production parameters. In a traditional approach, the company would likely do a trial run. With Auto ML, the company can run a simulation before an actual test run, saving considerable time and expense, according to Wu.

The company has already helped factories of all kinds build models leveraging their manufacturing data as well as operational data on areas such as power consumption.

For instance, a manufacturer of glass objects used Auto ML to get into the medical device market. A printed circuit board (PCB) maker was able to minimize the amount of gold it uses in plating. And another company cut its monthly energy bill by $30,000 after increasing production during non-peak hours.

Entering New Markets

Being able to move faster is a huge benefit for manufacturers using Auto ML. In the glass manufacturer’s case, after the company received a proposal to make a glass part for medical devices, it was able to test the parameters and prepare a response rapidly with the solution.

“Because their response was so fast, they successfully got the order from this new medical-devices customer,” Wu says. And then the company was in the medical-device business.

This kind of outcome separates Profet AI from other AI vendors, who target their products to data scientists, says Wu. “We believe that AI should be an application technology, not a very high-end technology only owned by a few people, so we designed our product to target domain users,” he says.

Having developed more than 120 AI applications across 10 different manufacturing factories, Profet AI also provides “Ready to Go Applications” that can serve as an AI template for customers, according to Wu.

It’s a big help to customers that don’t know where to start when they implement AI, Wu explains. The applications are downloadable, and Profet AI provides step-by-step tutorials and sample datasets.

“When they see the dataset, they immediately understand how to do that with our platform and how to collect the data by themselves,” Wu says.

Taking AI in Manufacturing Even Further

To improve its platform, Profet AI is constantly listening to customer feedback. One customer asked for mobile access to a model it created so on-site users could use the model.

“We actually created this feature and put it into our product,” says Wu. “Right now, the model can be trained by, for example, a process engineer, and actually can be run by the local operator using their mobile devices.”

Wu says the integration of Intel’s OpenVINO into Auto ML helped make the product even better. “We believe this can bring a better usage experience to our customer,” Wu says. In the latest generation of Auto ML, the inference speed increases by as much as 100%, he says.

Ultimately, the goal is to make AI ubiquitous in manufacturing through easy accessibility for regular users. If the company succeeds, in due time training ML models will become as common as creating an Excel spreadsheet, Wu hopes.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Healthcare Data Challenge: Linking Medical Devices

Healthcare providers depend heavily on data to treat patients. When a patient gets any type of test done, whether it is an MRI, x-ray, or blood test, data is captured for analysis and diagnosis. The machines that generate the images or analyze the bloodwork transmit the data back to the healthcare systems so doctors can determine and evaluate the results.

Sounds easy enough, but it doesn’t always happen that smoothly. For that data to make the journey from the lab to a doctor’s office, a lot must happen behind the scenes. For one thing, multiple healthcare devices are used to take a patient’s vitals—blood pressure, heartbeat, height, and weight, for example—and those devices don’t communicate with one another.

In addition, patient data must travel across hybrid infrastructures made up of on-premises and cloud components and, increasingly, IoT and edge networks. These networks don’t always use the same language. And then there are the software systems that package and handle data differently from one another.

For patients to get the best care possible, their data must move between devices and analysis systems to the doctors’ computer screens.

“There are more electronic devices and computers being deployed to record patient data than ever. This creates a big challenge trying to support the protocols from all these vendors and then transfer the data in the correct format for hospitals and patients to receive,” says Kenneth Lee, Product Manager at Portwell, a Taiwan-based industrial PC manufacturer.

Addressing Healthcare Data Challenges

Portwell is looking to tackle these issues with the NANO-6063, a standalone box at the edge. The small, unobtrusive embedded system lets healthcare providers send and receive instructions from multiple servers and healthcare devices—linking solutions to deliver patient information to doctors, nurses, and patients.

The solution is designed with hospital and clinic needs in mind, such as low power consumption, compact design, sufficient I/O, long lifespan, and tolerance of harsh conditions. It does this using the NANO-ITX form factor and the Intel® Atom x6000E processor series. But more than just a data traffic cop, the NANO-6063 acts as an interpreter, converting data signals to make sure the information will be understood when it arrives. This is necessary because manufacturers often use different protocols while hospital systems accept data in a variety of formats.
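The interpreter role amounts to normalizing vendor-specific payloads into one schema the hospital system accepts. Here is a minimal sketch of that translation step, with hypothetical vendor names and field mappings; it reflects neither Portwell's firmware nor a real HL7/FHIR mapping.

```python
# Per-vendor mappings from raw device payload keys to a common schema.
# Both the vendors and the field names are invented for illustration.
VENDOR_MAPS = {
    "vendor_a": {"sys": "systolic", "dia": "diastolic", "hr": "heart_rate"},
    "vendor_b": {"SBP": "systolic", "DBP": "diastolic", "pulse": "heart_rate"},
}

def normalize(vendor, payload):
    """Translate a raw device payload into the common record the
    hospital server expects; unmapped fields are dropped."""
    mapping = VENDOR_MAPS[vendor]
    record = {common: payload[raw]
              for raw, common in mapping.items() if raw in payload}
    record["source_vendor"] = vendor  # keep provenance for auditing
    return record
```

Two devices that report blood pressure under entirely different key names thus arrive at the server as identical records, which is what lets downstream systems aggregate them per patient.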

“All this data needs to be collected and analyzed for doctors and providers to better diagnose and monitor their patients’ healthcare conditions,” says Lee.

Entering a New Market

Traditionally, Portwell has targeted other markets, such as electronic signage and factory automation, according to Lee. But with healthcare becoming more dependent on digital systems, the company saw an opportunity to leverage its existing technology to help providers solve their healthcare data management problems.

For example, one existing manufacturer customer uses Portwell’s technology to control robotic arms on the assembly line. “Similarly to how a hospital works, they have a server for the factory to control the whole production line. But on the production line, they have a small box to control each robotic arm. Using this control box, they can execute some simple orders from the server, and then collect data to provide feedback to the server,” Lee says.

A similar approach can be used in hospital settings, he says. But instead of controlling communications between servers and robotic arms, the NANO-6063 enables transmission of data between medical equipment and hospital servers.

Another example of Portwell’s NANO technology in action involves giant digital-signage screens in stadiums. Often the screens are deployed outdoors, which means they may be exposed to all kinds of weather—snow and cold temperatures in winter, high temperatures in summer, and rain any time of the year.

The advantage of using a NANO-6063 in these situations is that the system boards are built to withstand extreme conditions, says Lee. The ruggedness is also useful in edge networking locations—whether indoors or out—that lack the climate control systems that data centers have, he says.

Faster to Market

Key to the development and marketing of the NANO-6063 is the use of Intel chips, in this case the Atom x6000E series. Currently, 90 percent of CPUs used by Portwell come from Intel. And the company plans to continue leveraging the Intel relationship as it forges its new path into the healthcare market. Being part of Intel’s early-access program gives the company access to new processors before they launch, helping it bring solutions to market faster.

Portwell also works with system integrators to make solutions like the NANO-6063 possible. Since healthcare facilities like hospitals use many different devices or computers from a variety of vendors, Portwell provides system integrators with the computing parts they can combine in their software solutions to enable the receiving, translating, and directing of data.

“As there is an increasing reliance on digital systems and data for healthcare providers to do what they need to do, we will look to create new hardware and software solutions and transform the space,” says Lee.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.