Phygital Experiences Help Fashion Retail Shine

Retailers are reimagining what it means to go shopping, blending physical and digital elements to create “phygital” experiences for customers. The trend has been underway for a while, and was accelerated by the events of the past few years. But the biggest driver of this current wave of retail digital transformation is coming from consumers themselves.

“In retail, we’re now seeing a new, hyper-demanding type of customer,” says Javier Lima, Director of Digital at Econocom Products and Solutions, a manufacturer of digital transformation kits for fashion retailers. “They want everything in real time—and they want brands to truly see them as individuals.”

To deliver this level of convenience and personalization in stores, many retailers have turned to digital signage solutions—which are also excellent ways to boost sales and streamline operations. But for retailers in the fashion industry, implementation isn’t easy. When it comes to matters of style and taste, messaging has to be extremely personalized in order to be effective. And given the nature of fashion, digital signage in this corner of retail is unusually context-dependent and subject to change.

Answering these challenges for fashion retailers: all-in-one digital signage kits. These powerful, flexible solutions are helping brands deliver the phygital experiences they need to compete. And best of all, these solutions are easy to configure, which means that they scale extraordinarily well—even in the largest of businesses.

Scalable, Interactive Media Benefit Retail Sales and Operations

Case in point: Econocom’s success with Pull&Bear, a multinational clothing retailer based in Spain.

Pull&Bear wanted to implement digital signage across all of its brick-and-mortar stores. But this was a daunting task. The brand has more than 800 physical locations in 40 countries, in locales as diverse as the EU, Latin America, the Middle East, and Asia. Any digital signage solution used would need to be straightforward to set up and maintain at scale, and would have to allow for localized content that could be easily curated and managed.

Econocom’s Fashion Retail Digital Transformation Kit turned out to be exactly what Pull&Bear was looking for. The solution is based on the integration of hardware, software, and cloud technologies. Digital marketing material is uploaded, configured, and scheduled in a backend content management system—allowing for extremely fine-grained control over the timing and distribution of media content. Specific content is then deployed from the cloud to media players in individual stores, which are in turn connected to display screens throughout the customer area. The in-store part of the system also includes options for smartphone integration to help store personnel manage and configure display content in a more agile way. The kit also allows for add-on smart devices to be connected at display endpoints if a more interactive solution is desired.
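
To make the mechanics concrete, here is a minimal sketch of that cloud-to-store scheduling flow in Python. It is purely illustrative: the field names, store IDs, and campaign structure are assumptions made for the example, not Econocom’s or Pull&Bear’s actual CMS interface.

```python
# Illustrative only: field names and the scheduling model are assumptions,
# not the actual Econocom CMS interface.
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Campaign:
    media_url: str            # asset uploaded to the cloud CMS
    target_stores: set[str]   # stores the localized content applies to
    start: time               # daily display window
    end: time

def playlist_for(store_id: str, now: datetime, campaigns: list[Campaign]) -> list[str]:
    """Return the media a given store's player should display right now."""
    return [c.media_url for c in campaigns
            if store_id in c.target_stores and c.start <= now.time() <= c.end]

# Example: a lunchtime promotion localized to two hypothetical stores.
campaigns = [
    Campaign("https://cdn.example.com/summer-sale.mp4",
             {"madrid-01", "lisbon-03"}, time(11, 0), time(15, 0)),
]
print(playlist_for("madrid-01", datetime.now(), campaigns))
```

In a real deployment, each in-store media player would poll or receive such a playlist from the cloud and render it on its connected screens.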

Together with Econocom, Pull&Bear successfully implemented digital signage kits in over 800 of its stores worldwide, enabling highly contextual, personalized messaging in many different locations. This allowed the retailer to leverage its robust digital marketing team to customize content for shoppers no matter where they were located.

However, an impressive in-store experience is only one part of what digital signage solutions offer to fashion retailers. “If you just want to play the ‘wow’ game, that’s fine, that’s great,” says Lima. “But in this case, we were looking to reach the full potential of digital signage by deploying our solution across a massive base of stores. Because it’s only then that you start to achieve the economies of scale that benefit a retailer’s operations—and their bottom line.”

Econocom’s technology partnership with Intel was a major factor in delivering a solution that scaled so well, says Lima: “There were obviously huge challenges to rolling out our kit in more than 40 markets and across hundreds of stores. Intel’s operational support was essential. Our relationship has been one of synergy and reciprocity, and we expect even bigger things from this partnership in the future.”

The rise of #ComputerVision at the #edge means in-store #DigitalSignage doesn’t have to be reactive; it can be highly interactive. Econocom via @insightdottech

The Future of a Phygital World

As to what that future will look like, there are exciting possibilities on the horizon.

The rise of computer vision at the edge means in-store digital signage doesn’t have to be reactive; it can be highly interactive. Content displayed on screens can shift in real time to match a customer’s behaviors—for example, displaying relevant media if they pick a particular piece of clothing up off the shelf.

Beyond fashion and retail, digital signage solutions will find use cases in other sectors and industries as well. Econocom, for example, is already looking for ways to adapt its retail fashion kit for financial services businesses. Other use cases are sure to follow—which Lima says is really part of a larger phenomenon:

“In retail we call people ‘customers,’ in hospitality, ‘guests,’ in medicine or smart cities it’s ‘patients’ or ‘citizens’. But in the end, it’s the same game: Leverage technology to deliver a better experience—to meet people’s needs and deliver the value they expect.”

And as people come to expect phygital experiences in certain areas of their lives, they’ll likely begin to demand them in others as well. The result, according to Lima, will be a far more interactive—more phygital—world. That world will not only meet its inhabitants’ expectations but will also bring substantial benefits to businesses and organizations.

“The future is extremely bright,” says Lima, “because when you mix the physical and digital, there are limitless opportunities for optimization.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Redefining Rail Inspection with AI-Based Computer Vision

Railroads are the connective tissue of our world’s infrastructure. But despite their critical role in global transportation and supply chains, most railroad track maintenance begins with a human inspector.

To identify damaged rail ties and tracks, human inspectors walk or drive miles of railroad every day looking for inconsistencies. The time and cost required to manually inspect just the United States’ 160,000 miles of track are enormous, and because humans conduct the inspections, the process is inherently error prone.

Recent advancements in computer vision (CV) have opened new opportunities to automate railroad inspection, significantly reducing costs and improving accuracy in the process. But railroads present unique challenges for CV systems that range from high variability in the deployment environment to a safety-critical industry’s preference for trusted solutions.

That’s why organizations like Ignitarium, a product engineering company, are reinventing CV using AI technology to address pain points and reduce the need for human track inspection.

Overcoming Challenges in Infrastructure Inspection with AI-Based CV

Unlike controlled indoor environments where CV systems have a proven track record, railroads present a wide range of lighting conditions, weather variations, and other unpredictable factors. These variables can significantly impact the performance and accuracy of CV systems.

Another uphill battle for CV technology in outdoor railroad applications is changing what Ignitarium CTO Sujeeth Joseph calls “a highly traditional mindset in the industry,” referring to rail professionals’ desire to use tried-and-tested methods over novel approaches.

#Railroads present unique challenges for #CV systems that range from high variability in the deployment environment to a safety-critical industry’s preference for trusted solutions. @ignitarium via @insightdottech

These challenges led to the development of Ignitarium’s TYQ-i platform, which aims to blend the best of classical CV techniques with advanced custom neural nets. The result is an efficient solution that can detect a wide range of anomalies over many miles of track.

The operation of TYQ-i can be broken down into four stages:

  • Ingestion: The platform supports many visual sensors, including RGB, 3D, laser, and multispectral interfaces. In the rail industry, 2D cameras and laser scanners are the go-to sensors, says Joseph.
  • Preprocessing: Ignitarium has developed a library of image processing components that prepare the data for analysis. These include basic operations such as scaling and rotation as well as more complex tasks like stitching, tracking, and noise reduction.
  • Deep learning: At the heart of TYQ-i are custom AI models for specific use cases. These models have been pretrained to detect various anomaly classes, delivering high levels of accuracy and efficiency with minimal input from the customer.
  • Presentation: The processed data is then presented to the user through dashboards and files that are human- and machine-readable. This enables the platform to integrate seamlessly with existing processes, helping overcome resistance to adopting new technology.
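
As a rough illustration of how those four stages fit together, the Python sketch below chains ingestion, preprocessing, inference, and presentation into a single pass over camera frames. The function names and the stand-in anomaly score are assumptions made for the example; they are not Ignitarium’s implementation.

```python
# Illustrative four-stage inspection pipeline; not Ignitarium's actual code.
import numpy as np

def ingest(frame_source):
    """Stage 1: pull frames from a sensor (here, a stand-in generator)."""
    yield from frame_source

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Stage 2: basic conditioning, e.g. resampling to the model's input size."""
    rows = np.linspace(0, frame.shape[0] - 1, 224).astype(int)
    cols = np.linspace(0, frame.shape[1] - 1, 224).astype(int)
    return frame[rows][:, cols]

def anomaly_score(frame: np.ndarray) -> float:
    """Stage 3: stand-in for a pretrained anomaly model's inference call."""
    return float(frame.std())

def present(frame_idx: int, score: float, threshold: float = 0.5) -> dict:
    """Stage 4: emit a human- and machine-readable record for dashboards."""
    return {"frame": frame_idx, "score": round(score, 3), "flagged": score > threshold}

frames = (np.random.rand(480, 640) for _ in range(3))  # stand-in for a track camera
for i, raw in enumerate(ingest(frames)):
    print(present(i, anomaly_score(preprocess(raw))))
```

In practice, each stage would be swapped for the platform’s real components: multi-sensor ingestion, the image-processing library, and the custom neural nets, without changing the overall flow.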

According to Joseph, one example of how these capabilities can be used is railroad ballast—the bed of crushed stone on which railroad ties are laid. An airborne drone or a camera mounted on the underside of a locomotive could use TYQ-i to detect areas where ballast needs to be replenished, as well as areas that should be avoided due to safety or other operational concerns. That information would then pass to a ballast-laying and tamping machine, so it could automatically perform maintenance in only the appropriate areas.

Achieving Scale and Flexibility with TYQ-i

To achieve its accuracy and reliability, Ignitarium’s TYQ-i platform was initially trained using TensorFlow and PyTorch—two of the most popular open-source frameworks for machine learning and neural networks. This training was performed on powerful Intel® CPU and GPU targets, providing a solid foundation for the platform’s AI capabilities.

But to truly scale performance across a variety of use cases, Ignitarium recognized the need for a more versatile processing solution. This led to the decision to migrate TYQ-i to Intel® Core and Xeon® processors. While not a common target, there’s even a port to the Intel® Arria family of high-performance FPGAs.

The interoperability of these processors helps the company keep costs in check. “If the workload is heavier, we would go with server-class machines,” explains Joseph. But for lighter workloads, the company uses solutions like the 12th Gen Intel® Core processor, which can accelerate AI with its built-in integrated graphics processor (IGP).

The migration to Intel® processors also brought additional benefits. For instance, it allowed Ignitarium to take advantage of the robust software infrastructure of the Intel ecosystem, which offers a wide range of tools and resources to optimize performance and efficiency.

One such tool is the OpenVINO AI toolkit, which Ignitarium used to further optimize TYQ-i. OpenVINO is designed to facilitate the deployment of AI applications at the edge, offering support for a variety of neural network architectures and providing a comprehensive set of tools for optimizing performance.

Because the toolkit supports a wide range of Intel processors, Ignitarium can pick a processor and “the code just compiles and runs,” Joseph explains. At the same time, OpenVINO offers a variety of tools that help developers get the most out of their chosen processor. “We optimize using everything that the toolkit can provide us,” says Joseph.
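
That portability follows the toolkit’s standard usage pattern: read a model once, then compile it for whichever Intel device is available. Below is a minimal sketch; the model file and input shape are placeholders.

```python
# Minimal OpenVINO sketch; the model file and input shape are placeholders.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("anomaly_detector.xml")        # IR model exported offline
device = "GPU" if "GPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)           # same code, any Intel target

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)   # stand-in input tensor
result = compiled(frame)[compiled.output(0)]
print(f"Ran on {device}, output shape {result.shape}")
```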

All of these capabilities allow TYQ-i to run in various environments, from edge devices to cloud-based systems. At the edge, TYQ-i can process data in real time, providing immediate insights and allowing for quick decision-making. This is particularly useful in situations where low latency is crucial, such as detecting defects on a high-speed railway line.

For larger-scale applications, TYQ-i can also be deployed in the cloud to support vast amounts of data and perform more-complex analyses—a useful feature for monitoring extensive rail networks.

This flexibility allows it to be deployed in a wide range of scenarios, making it a highly adaptable solution for infrastructure monitoring.

The Future of Infrastructure Inspection Is Here

The challenges facing the rail inspection industry are significant. From the vast expanse of tracks to highly varied environments, the industry is in desperate need of innovative solutions. Ignitarium’s TYQ-i platform, with its blend of AI and CV technologies, offers a powerful answer to these challenges.

TYQ-i’s custom AI models, honed for high performance with minimal customer datasets, provide a solution that folds readily into existing workflows, helping overcome resistance to new answers to old problems. The result is a platform that is winning over track maintainers across the US.

As we look to the future, it’s clear that AI-based computer vision solutions like TYQ-i will play a crucial role in transforming the infrastructure inspection industry, delivering improved accuracy, efficiency, and safety for all.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Powering Up EV Technologies: With SECO and Imagen Energy

The electric vehicle market is growing rapidly, and with it comes a number of challenges and opportunities. On the one hand, EV technologies offer significant environmental benefits. On the other hand, there could be consequences to plugging these vehicles into the power grid at scale.

In this podcast, we explore the key challenges that need to be addressed before EVs can become truly mainstream. We will discuss the need for a robust charging infrastructure, the potential for EVs to revolutionize transportation, and how this technology can be safely and sustainably adopted on a large scale.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guests: SECO and Imagen Energy

Our guests this episode are Maurizio Caporali, Chief Product Officer at SECO, a developer of leading-edge solutions, and Ezana Mekonnen, Chief Technology Officer at Imagen Energy, an energy systems provider.

Podcast Topics

Maurizio and Ezana answer our questions about:

  • (2:14) The rise of, and interest in, electric vehicles
  • (6:32) How the grid can keep up with the pace of the EV market
  • (8:16) Implementing EV technologies in a safe and sustainable way
  • (14:19) How EV technologies can utilize existing infrastructure
  • (17:08) Futureproofing today’s efforts for scale and flexibility
  • (19:29) Leveraging expertise from different companies
  • (26:15) EV technology benefits from a user and business perspective

Related Content

To learn more about electric vehicles, read AI and CV Power Up the EV Charging Station Boom. For the latest innovations from SECO, follow them on Twitter at @SECO_spa  and LinkedIn, and follow Imagen Energy on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech. And today we’re talking about the rise of electric vehicles and the infrastructure that goes into making this possible. Joining us today we have Maurizio Caporali from SECO, and Ezana Mekonnen from Imagen Energy.

So, before we jump into the conversation, let’s get to know our guests a bit more. Maurizio, you’ve been on the podcast with us quite a bit, but for those who haven’t listened to those recordings I suggest you go back and check them out. But tell us more about yourself and what you do at SECO.

Maurizio Caporali: Okay, sure. I’m the Chief Product Officer of the SECO Group. We are a global company, a leader on industrial microcomputer, and I follow the life cycles of our products and the design of new products. We have an extended catalog of more than 100 off-the-shelf products.

Christina Cardoza: Great, looking forward to hearing more about that. But Ezana, welcome to the podcast. Tell us more about yourself and Imagen Energy.

Ezana Mekonnen: Yeah, thanks for having me. I am the Co-Founder and CTO of Imagen Energy. My background is in power electronics engineering. I’ve worked on various types of power converters for different applications. Here at Imagen Energy we make compact, efficient power converters for electric vehicle–charging applications.

Christina Cardoza: Great. Of course power and energy are going to be a big part of this electric vehicle conversation. I’m sure everybody listening to this podcast, just not only in the IoT world, but electric vehicles seem to be everywhere. It’s in the news. You have businesses and governments all over giving incentives, even utilities giving incentives, to make this move towards electric vehicles.

So I wanted to start off this conversation today with you, Maurizio. If you could just talk to us about what’s driving this rise and the interest in electric vehicles.

Maurizio Caporali: Electric vehicles, as you mentioned, Christina, are a solution and products that have changed a lot actually the world of the industry of automotives. More in general is the breakdown with respect to the evolution of the combustion-automotive solution.

Now, the direction of electric vehicles changed a lot this industry in different ways. For sure, electric vehicle don’t use fossil fuels—possibly the reduction of pollution on a specific environment, for example the city where there are many vehicles, and this could be very important.

Then there are many aspects interesting for the end user. For example, driving comfort. You know, where electric vehicles are something that is very quiet, and also, from vibration point of view, it’s a change of life in some way. Less maintenance, because this kind of solution is having less maintenance on the part of the components, etc., and less failures on the movement, on the part of movement.

Another very important consideration is from the technological side, is more our part in the sense of knowledge and in the sense of SECO’s background. As you probably know, electric vehicles have a lot of technology inside. It’s a complex environment where there are computers, there are many sensors, and more than, in a general respect, the standard car, a combustion car, we have more technology.

In some ways we think about also self-driving cars more related to electric vehicles. So that is not directly considered, but more in general they are more related to electric vehicles. On the other hand, there are different interactions, no? For example, the possibility to interact with the car remotely, no? This is in particular for electric vehicles, where it gives the possibility to the end user to have all the information in the application of the smartphone, the control, the possibility to turn on the air conditioning before entering the car, to have an overview of the position, to have all the information on the car directly on the application.

The last point from a technological point of view is the improvement of the battery technology. The battery is the core of electric vehicles in some way. Then this is the last important point.

Christina Cardoza: Yeah, absolutely. A lot of great stuff in there—just the amount of technology that goes into making this happen and all the advancements that happened in the industry that is making it more accessible, more sustainable. The sustainability aspect of all of this is something particularly interesting to me, because you have the rise of electric vehicles; that means that you’re going to have more energy being taken away from the power grid, more endpoints being plugged into the power grid. And at the same point, utilities have this big mission to modernize the power grid, to make sure that it’s reliable and sustainable for the future. And I can just imagine this influx of energy being plugged into it does have an impact on it, but yet we have the, like I mentioned, government regulations and these utilities giving incentives for people to move over to the electric vehicles.

So, Ezana, I’m curious what the impact is on the power grid, and how can we ensure that we’re able to continue this, keep up with this increase of electric vehicles?

Ezana Mekonnen: Yeah, absolutely. I think the impact of electric vehicles into the grid is profound. This is by way of added demand; if not managed correctly it can add a strain into the grid. But when it comes to the grid, it’s not always a demand-side problem. We’ve seen similar issues when we introduce PV solar power into the grid, where excess supply of energy caused similar strain into the grid.

So when it comes to the grid, it’s the balance of supply and demand that’s critical. And this is done through smart loads, smart grids that can better coordinate the demand and supply, and then also added storage into the system so that it can better buffer the energy coming from renewable as well as the demand needed by electric vehicles.

Christina Cardoza: That’s a great point, and going back to what Maurizio was talking about, just the technology that goes into it—there’s not only a lot of technology that goes into these cars, but when we’re talking about charging them there’s a lot of charging stations that need to happen across cities and where people live, so that it’s not only they’re charging it at home, but if they’re driving and they’re low on power they can stop somewhere and make sure that they can get to their destination safely.

I’m curious what kind of infrastructure do we have set up that we are able to have multiple different charging solutions around, and how does that have an impact on the power grid? You know, how do we do this in a safe and sustainable way, Maurizio?

Maurizio Caporali: This is the important point of the evolution of electric vehicles, because probably you understand that first we started with electric vehicles. Now we are thinking about of the infrastructure of charging stations, and all the aspects related on this critical point for electric vehicles because this is part that could be very important. And the evolution of the technology is very fast, is very rapid. And the same way the key point is to permit the growth of electric vehicles, because without the part of charging stations this change could not be possible—this evolution for this kind of solution.

Electric vehicle charging—there are different solutions. It’s called—they are defined by level, no? Level one or level two or level three. In some ways, what is our interest is on the fast charging station—the possibility to charge the vehicles in a very fast time, to give the opportunity to the end user to charge the vehicle during the trip with the possibility to do this in few minutes. And also to give the possibility to have information about the positioning, about the status of the availability of the charging station and the characteristics of the charging station with respect to the car—the communication between cars and charging stations in an open way.

To do this there are important aspects that are related to technology—technology that is not only hardware, it’s also software. It is very important to have this consideration between the hardware side and the part of control and service on top of the hardware. And we have analyzed this aspect. We have defined a solution that can work with, interact with, the physical space, with the ambient temperature, in different way. On one hand, with sensors, ambient sensors, to understand the status of the ambient. On the other hand, having an interface for the end user—the capability to give information to the end user.

This possibility for EV chargers will be very pervasive in physical space—if we talk about the highway or the city, this could be a very important point, in some way a point of interest in the sense of data. The charging station can produce a lot of data and information that can give to the end user and the citizen information about the ambient, and can be an important point of information also for municipality, also for public activities. On the other hand, give the opportunity to the cars and to the end user to talk and to have this kind of information. That is not only the way to charge the car, but it’s also a data information system.

The other important point is to manage a fleet of these charging stations and to give the possibility of avoiding the single point of failure. This is another important point, because with the change from fossil fuel to electricity, we need to guarantee the possibility of charging the car to the end user. It is very important to have a solution that is very strong from this point of view, to give the possibility to manage and to understand if there are critical points on the network, if there are critical points on a single charging station, and to have all this information ready and available immediately. Also with predictive analysis information that arrives from the status of the entire fleet of charging stations.

On the other hand, there is the possibility to control the status of the vehicle, or if there are vehicles in front of the charge station to alert the user when the car is ready. All this information can be done also with the open-standard protocol that is available for these kind of solutions.

Christina Cardoza: I’m just curious, Maurizio, because you at SECO, you guys developed the CLEA electric vehicle charging station. So, what does it take to get these charging stations installed on highways or within cities? Do you have to build the infrastructure from the ground up? Or do these cities have existing infrastructure that you’re able to build on?

Maurizio Caporali: The important aspect that we have analyzed in this market is the flexibility of the solution, then we give the opportunity to customize the last level of the solution for the company. They need to install the solution with specific functionality. This is in some ways typical, our characteristic, to define something that is very flexible and very modular, then give the opportunity to have a customizable solution.

On the other hand, the other important aspect is to make this part available from the hardware point of view and also from the software point of view. Then to give a set of tools to define the right service and right solution for different levels of user. Because there are the parts that manage the maintenance of the infrastructure, and there is the marketing side. There is the possibility to manage remotely the information, the pricing, the advertising system, and also to give the opportunity to the end user to have all the information about the status about the availability directly with the application on the smartphone.

Then our solution gives all this opportunity in the sense of a library, SDK, to develop the single application for different levels of customer. On the other hand, from our point of view, to give the opportunity to add a large screen for information, to add a payment system, to add many sensors that can enable a different level of services depending on the place where the device will be installed.

Christina Cardoza: Great. Now I’m wondering, Ezana, from your point of view, you were talking about the way that you have just seen the power grid and this evolution with power consumption evolve over the last few years when we have solar and new things coming up. So I’m curious—all of this sounds great, and it’s initiatives that we’re doing today, but we want to make sure that we can support electrical vehicles for a long time to come—this seems to be the way that we’re moving towards in the future. So how do we ensure that all the efforts we’re doing today continue to scale, continue to be flexible, and we can continue to evolve and modernize as the demand and the increase evolves?

Ezana Mekonnen: Yeah, absolutely. I think there’s two parts to that. You know, the first one, it’s very crucial that we have a long-term view of what we’re actually deploying. This is the future infrastructure, right? And, for instance, we are developing our charger to be bi-directional, not because it’s needed now, because EV drivers right now, they just want to make sure they’ll be able to charge the car. But making sure that the infrastructure is in place for not only charging a vehicle but also being able to pull the energy back to the grid. And this will turn EV from being a liability to the grid into an asset for the grid with essentially a battery on wheel, right?

So, the second aspect is that EV charging owners worry about what they call “stranded asset,” where they have a charging station and it doesn’t get utilization, doesn’t get enough usage. So we have an architecture that can allow us to deploy a charging station and then add a charging port to it as the utilization goes up. And so this will help keep up the infrastructure needed with the adaption of electric vehicles, and it can continue to grow.

Christina Cardoza: Great. And, you know, another point here is that there’s—obviously we have the power aspect, we have the charging aspect, we have the electric vehicles themselves. There’s so much that goes into making this happen that I think it’s obvious that no one company can do this alone. It really takes support from the entire industry, and one of the reasons why we had Imagen and SECO both join this podcast is because I know there’s even a partnership between you two as well as Intel. I should mention the IoT Chat and insight.tech, we are Intel sponsored. But I’m curious what the relationship is between Intel, SECO, and Imagen. What is the technology and the expertise that you guys are all leveraging from each other? Ezana, I’ll throw that one at you first.

Ezana Mekonnen: We realized these chargers won’t be just a charger. They’re multifunctional units, similar to how our camera—our phones are not just a phone, but a camera, GPS system, and more. So while we focused on making a compact converter that’s sufficient for the power conversion and the delivery, we look to SECO for the added functionality such as their CLEA AI, their capability on image processing, audio processing, and then being able to drive a large screen for advertisement which could potentially either offset cost of charging or provide functionality, not just for the EV charger user but also the business around it.

So we think having this infrastructure out there that is capable of a lot of processing capability could evolve to something else beyond just charging. And Intel has been just great in terms of the technology that they’re offering us—specifically an FPGA, which is what we’re using for our power conversion, a very reliable and robust method of developing power conversion, especially as we try to make it efficient and extremely compact. And we believe that it takes more than any one company to develop this future infrastructure, and we’re happy that we’re working with SECO and Intel.

Christina Cardoza: Great. And, Maurizio, I’m curious from your end how you’re leveraging Imagen and Intel. Ezana spoke about FPGAs—I know that’s really important for the security aspect of all of this too. Something you mentioned was fleet management and remote management. So, I’m wondering what other technologies from Intel or Imagen go into your EV charging solution, or the value of these partnerships between all these companies?

Maurizio Caporali: Yes, the core part of our technology is based on an Intel chip. In particular, we are using the last generation of industrial solution of Intel—low power consumption that is based on the Atom® series processor. These processors are very flexible, very powerful, with very, very low power consumption. This is another important point—with the possibility to analyze a lot of complex data that is also coming from different kind of sensors, all this data can be analyzed in real time; that gives the opportunity also to not send all the information, all the data, to the cloud, but it can be pre-analyzed directly on the device, on the edge device, and give only the information, the alert, to the control room. And this is possible thanks to the technology also of OpenVINO, based on the artificial intelligence model optimization of the SDK, compatible with all the Intel solutions.

On the other hand, this kind of solution has industrial-grade efficiency in the sense of temperature, and also the long-life fundamental for this kind of architecture that gives the opportunity to maintain this solution for more than 10 years. This is also very interesting. On the other hand, as I mentioned before, the possibility to define this solution as a modular solution and the possibility to have a series of interfaces and IOs. For example, we have a direct connection with the electronics of Imagen Energy to exchange the data, the information, between the two computers, the two systems, in the right way, in the perfect way.

This has given us also the opportunity in this collaboration—starting from the solution of Imagen that is more related to the power efficiency of the energy conversion; and our solution, that is to manage all data on top of the creation of energy, of the interfacing with the current, the infrastructure, and also the managing of all human interface based on a big screen that can be managed, also a 4K big screen, for all the information for the end user. And the connectivity that could be mobile, Wi-Fi, Bluetooth, to have all the communication between the charging station and the vehicles and the end user—this maximum flexibility from the Intel solution in this way.

Christina Cardoza: So, lots of great technology and partnerships working together to make this all happen. And when you talk about all the technology that is in these vehicles or these charging solutions, I’m sure that Intel processor really is just helping to make sure the performance is high quality and that the speed gets there, and that, like you mentioned, low power consumption so it’s not overcharging anything. So this is all great news for electric vehicles.

I’m curious, Maurizio, you had it sprinkled in there a couple times in the conversation, but if we can expand on beyond sustainability. Because that’s one of the big driving forces of the move to electric vehicles—just the sustainability benefits it’s going to bring to society. But I’m curious, from a user and business perspective, what are the benefits that they’re going to get moving to this new transportation model and vehicle?

Maurizio Caporali: Yes, for sure, as mentioned before, this is a very important point for the evolution of the technology, no? The technology that could be related to the communication between machine to machine, the possibility to manage the energy network in the right way. We need to have data information and the possibility to communicate with the energy network and the internet network the same way. Plus, all the sensors, all the devices, electric vehicles, that are around in the cities.

Technology innovation also for security reasons. The possibility to have information about the nearby EV charger that can come from different kinds of proximity sensors or camera sensors, that can give to the municipality and police important data. This could be an important aspect of the possibility to have open systems, to give new kinds of services. Otherwise, if the system will be closed, this could be difficult. Now the technology gives us the opportunity to analyze all this information, all this data, that can be sent and directed to different kinds of users and companies.

Christina Cardoza: This has been a great conversation. I can’t wait to see where else the electric vehicle industry is going. I think in this conversation we’ve only scratched the surface of what is involved and what goes into this. And I think it’s just the beginning for this EV landscape. So, lots to look forward to. Unfortunately, we are running out of time in our conversation today. So before we go I just want to throw it back to each of you. Any final thoughts or key takeaways you want to leave our listeners with today? Ezana, we’ll start with you.

Ezana Mekonnen: This is just an exciting time—this big, big revolution happening with the conversion of transportation into electric. And it can bring about a lot of new opportunity, new markets, and a more sustainable future. So this is an exciting time.

Christina Cardoza: Great. And, Maurizio, anything you want to leave us with today?

Maurizio Caporali: Yes, the change will happen soon. And this can be very important for the new possibilities that are related to the possibility of interaction between the end user and the environment in the right way. And the possibility to use in the right way the electric vehicles for a new generation also of EV charging more efficiently, and more simple and more smart for the next future of charging and traveling away.

Christina Cardoza: Great. Well, I will be watching to see what else comes out of your respective companies, as well as the partnership that you guys have—SECO, Imagen, and Intel—what else grows out of there in the future, because I know there’s still lots to come. But just want to thank you both again for the insightful and informative conversation. And thanks to our listeners for tuning in. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Video Security as a Service Streamlines Deployments

Traditional on-premises video monitoring solutions require substantial computing power and local infrastructure, making them a heavy lift for SMBs and companies managing multiple sites. Cost-efficient and easy to manage, open cloud Video Security as a Service (VSaaS) solutions offer an alternative.

With its technology leadership in on-premises video monitoring, Milestone—a global provider of open IP video monitoring software—now offers exactly that. Milestone Kite, launched in January 2023, is a scalable open-cloud VSaaS solution that leverages the company’s XProtect video management software (VMS) product family.

“The value of a cloud-based service is that it’s simple, easy to use, and easy to live with,” says Jan Lindeberg, Senior Product Manager at Milestone. “Organizations that don’t have the on-site IT muscle to invest in the effort, care, and cost in maintaining an on-premises system will find that investment to be much less with Kite.”

Focus on What’s Important with Cloud-Based VSaaS

A low-cost VSaaS that’s easy to install, Kite provides the flexibility that small and multi-site companies often need. Many small to midsize organizations struggle to attract the skilled IT personnel required to run and administer on-premises applications and systems in a professional and secure way. Kite and other cloud-based SaaS offerings speak directly to these users, who prefer to focus on their day-to-day core business rather than on administering different applications and complex IT infrastructure.

As an open platform that supports thousands of different cameras and IoT devices, Kite provides a safe path for end users to move to the cloud without replacing their existing cameras or camera network. This not only makes the cloud journey instant and risk-free but also saves the cost of purchasing and installing new cameras.

Hybrid Cloud Approach

Aside from cameras, the Milestone Kite Gateway, a small form factor internet appliance, is the only hardware needed on-site. The Intel-powered gateway has the performance needed to also run real-time analytics at the edge—offering a hybrid solution. The gateway connects all devices together, making it simple to expand as needs change.

“The solution allows you to take a hybrid approach, with intelligence and storage distributed across edge devices, appliances, and the cloud itself,” Lindeberg says. “You can extract metadata at the edge and pass it on to the cloud for processing.”
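
One common pattern behind this hybrid approach is to run analytics on the gateway and forward only compact metadata, keeping the heavy video local. The sketch below is generic and hypothetical; the payload fields and workflow are assumptions, not Milestone Kite’s actual API.

```python
# Generic edge-to-cloud metadata sketch; fields and flow are assumptions,
# not the Milestone Kite API.
import json, time

def detect_objects(frame_id: int) -> list[dict]:
    """Stand-in for an analytics model running on the edge gateway."""
    return [{"label": "person", "confidence": 0.91}]

def build_metadata(camera_id: str, frame_id: int) -> dict:
    """Package only the lightweight results for the cloud."""
    return {"camera": camera_id, "frame": frame_id, "ts": time.time(),
            "detections": detect_objects(frame_id)}

# A few kilobytes of JSON cross the WAN; the video itself can stay on the
# gateway or be uploaded separately when bandwidth and policy allow.
payload = build_metadata("dock-02", frame_id=1)
print(json.dumps(payload))  # in production this would be sent to the cloud service
```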

Customers have the option to either store video data at the edge or in the cloud, depending on their preferences and the availability of stable cloud connectivity and bandwidth at the deployment site. Kite can also be combined with Milestone’s on-premises solution for companies wanting to expand coverage to other sites with minimal effort and costs.

Unlike some other video security solutions, Kite has an open environment—meaning it’s compatible with more than 6,000 different cameras from various brands. Cameras can be connected directly to the cloud using drivers, or via the edge gateway. Users access a customizable dashboard via web or mobile app, where they can centrally review video feeds and recordings, manage settings, and apply AI functions.

Based on Google Cloud, the solution benefits from access to Google data centers around the world and a security framework, which handles physical security and data security. Personal privacy is also an essential part of the Kite platform. “We have paid a lot of attention to how we treat personal data, everything from the end customer signing into our business systems to how we actually manage video data in the system itself,” says Lindeberg.

And with its ease of implementation—the system can be up and running in half an hour or less—Kite also creates new opportunities for Milestone’s global channel of systems integrators. SIs benefit from the speed and ease of implementation and the freedom for their end customers to easily manage the solution on their own.

Emerging #technologies such as #cloud connectivity, more powerful #EdgeComputing, and #AI are expanding the value of #security video. @milestonesys via @insightdottech

The Future of Video Security

Emerging technologies such as cloud connectivity, more powerful edge computing, and AI are expanding the value of security video. The faster companies can interpret their video security feeds, the more decision-making power they gain. “You can glean insights about buyer behavior, find ways to automate manual tasks, and proactively act on video data to take instant action in a security situation,” says Lindeberg.

The VSaaS model is only beginning to leverage the power of cloud-based AI and enable new use cases. For example, the built-in Forensic Search capability reduces the effort and time required to find video recordings covering a specific incident. AI-generated metadata makes it easy to find specific types of objects, with the possibility to refine searches using color filters.
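
Conceptually, a search like that is a filter over AI-generated metadata records. The sketch below uses invented record fields to show the idea; it is not the Forensic Search implementation.

```python
# Hypothetical metadata records; field names are invented for illustration.
records = [
    {"clip": "cam1_0930.mp4", "object": "vehicle", "color": "red",  "ts": "09:31"},
    {"clip": "cam1_0930.mp4", "object": "person",  "color": "blue", "ts": "09:32"},
    {"clip": "cam3_1015.mp4", "object": "vehicle", "color": "blue", "ts": "10:17"},
]

def search(records: list[dict], object_type: str, color: str | None = None) -> list[dict]:
    """Return records matching an object class, optionally refined by color."""
    return [r for r in records
            if r["object"] == object_type and (color is None or r["color"] == color)]

print(search(records, "vehicle", color="red"))   # -> the 09:31 clip only
```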

“Until recently, building AI applications was very exclusive because everything had to be constructed from the ground up,” Lindeberg says. “But now, AI applications are making their way into the industry big time—being deployed both on the server edge and device edge. That removes friction while bringing additional value for the end-user. And at the same time, the compute platforms needed are becoming more affordable, creating a perfect environment for AI video innovation.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Edge AI Platforms Change the Game at Entertainment Venues

AIoT platforms—all-in-one solutions that leverage edge AI and IoT technologies—help the hospitality industry reimagine the guest experience. Powered by next-generation processors, computer vision, video cameras, and sensors, these tools are becoming available at just the right time.

Organizations are under continual pressure to improve efficiency and find new ways to boost sales. Perhaps just as important, customers expect frictionless technology. Guests today have no patience for cumbersome digital experiences at sports stadiums, event centers, and other venues.

But there can be implementation barriers for hospitality businesses interested in transforming their products and services—mostly due to complexity and cost.

“Consider something as basic as an IoT camera system,” says Joe Weiss, Worldwide Leader for IoT at Cisco Systems, maker of the Meraki Wireless Solution. “You need Internet connectivity, power distribution, network infrastructure, cameras, servers, software for video management and analytics, and storage for all of that HD video data. For a business to assemble that technology stack independently is complex, expensive, and very difficult to scale.”

The Meraki platform overcomes these obstacles by providing a unified, plug-and-play system that helps end users solve business problems without getting bogged down in disparate technologies. That simplicity is revolutionizing the hospitality industry—not just for guests and businesses but systems integrators (SIs) and solutions providers as well.

Reimagining the Stadium Experience

The flagship example is SoFi Stadium, a state-of-the-art multiuse sports and entertainment venue in Los Angeles. From the outset, stakeholders designed the entire venue around advanced digital technology to create an extraordinary spectator experience.

In partnership with Cisco, SoFi built a smart stadium with a broad spectrum of guest services—from locating parking, concession stands, and bathrooms to cashless ticketing and other purchases. And it created a rich, immersive digital experience with 4K video on more than 2,600 VisionEdge screens, powered by Wipro and the Cisco IP Fabric for Media solution. The solution also helps transform broadcast operations and content delivery, leverages broadcast solutions, and much more.

Cisco’s work on SoFi Stadium exemplifies the true significance of seamless connectivity, #ComputerVision, and #AI in hospitality: using powerful technology to deliver business outcomes. @Cisco via @insightdottech

Working with the stadium’s designers and builders, Cisco made the Meraki Wireless Solution an essential element of the groundbreaking deployment—including the ability to securely provide connectivity to 70,000 fans. The versatility and comprehensiveness of the platform meant shorter development and setup times, and allowed everything to be integrated seamlessly and run on a single unified network (Video 1).

Video 1. Edge AI platforms provide new ways to enjoy the live sport experience. (Source: Cisco)

Cisco’s work on SoFi Stadium exemplifies the true significance of seamless connectivity, computer vision, and AI in hospitality: using powerful technology to deliver business outcomes.

Moreover, the flexibility of AI platforms means they have use cases that go well beyond any single vertical or sector—which Weiss says is part of the strategy: “We aim to build rock-solid, enterprise-grade hardware that lets our ecosystem of partners develop custom applications to run on our servers, cameras, and appliances—delivering value bespoke to the industries they serve.”

Supporting Workloads of All Sizes

Meraki integrates Cisco networking devices, IoT cameras, and sensor hardware—supporting complex vision-at-the-edge workloads. The result is a stable, high-performance AIoT wired and wireless platform that works well for organizations of any size.

Basic use cases are appropriate for smaller businesses: for example, a small hotel that wants stable, easy-to-configure Wi-Fi for its guests. At the other end of the spectrum, Meraki can help the hospitality sector solve extraordinarily complex problems in the largest venues.

In this regard, Cisco’s partnership with Intel has been crucial:

“Some workloads are far too complex to process in a camera—there just isn’t enough compute power,” says Weiss. “But Intel builds great processors that are ideal for computer vision and video analytics at the edge. By combining Intel processors with our technology, we’re able to say ‘yes’ to any of our customers, no matter how complex their needs are.”

New Opportunities for Partners and SIs

The partner ecosystem that Weiss emphasizes is marked, above all, by reciprocity. Solutions providers and systems integrators add massive value to AI platforms. But the converse is also true, because these solutions are rapidly transforming the marketplace in which technology partners and SIs operate.

For AI specialists, there is the opportunity to design custom software that runs on solutions like Meraki. For example, WaitTime, a provider of real-time crowd intelligence software, has built an entire business around AI crowd data analytics—an accomplishment that simply wouldn’t be possible without a robust, underlying foundation.

For systems integrators, the situation is perhaps more complex but no less exciting. “Instead of worrying about how to connect the bits, SIs can work on solving problems by bringing computer vision outcomes to their customers,” says Weiss. “Vision-as-a-service is hot because it solves real business problems. SIs today can say, ‘How are you counting guests in your venue? How are you optimizing your uniform compliance? These problems can be solved with smart cameras and we, as an integrator, can help you do that.’”

The ecosystem of hardware manufacturers, platform providers, software developers, and SIs represents a kind of positive feedback loop where innovation begets more innovation. The next five years should be a tremendous time for everyone in the industry.

“I’m very excited to see this technology mature,” says Weiss. “As edge-based smart devices become even more capable, it’s going to be easier to deliver AI outcomes at scale—and to have performant, complex solutions stitched together to where it just feels like magic.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

This article was originally published on June 28, 2023.

SR-IOV: Unleashing the Power of GPUs in IIoT

The graphics processing unit (GPU) has become a vital resource for the Industrial Internet of Things (IIoT). According to Waterball Liu, a Solution Product Manager at embedded solutions provider DFI, any application that requires highly intensive computing—such as machine vision processing, data analytics, or machine learning applications—can benefit from GPUs.

But the need for performance is particularly intense for multifunction platforms that combine demanding workloads like those mentioned above in a single system. Add graphical displays to the mix—a common feature in these consolidated systems—and GPUs become even more critical.

The challenge then becomes how to share the GPU.

Modern IIoT systems typically combine workloads using virtualization or containers, but both techniques create a performance-limiting bottleneck at the GPU level. It all comes down to complexity. Over time, virtualization technology has gradually expanded to include memory, I/O devices, networking, and storage—but not all hardware components can be easily virtualized.

Graphics technology is the best example: The high complexity of modern 3D rendering pipelines, the lack of unified instruction set standards among GPUs from different manufacturers, and the highly programmable 3D application programming interface make GPU drivers akin to high-level language compilers, which also increases the technical requirements of GPU virtualization.

For Industrial IoT platform developers whose primary objective is to get more out of a single system at the lowest cost and resource utilization, these hurdles often make building workload-consolidated systems around GPU technology more trouble than it’s worth.

The Role of SR-IOV in GPU Virtualization

But if there’s ever a problem without a solution, technology will surely find a way.

For instance, the PCIe standard Single-Root I/O Virtualization (SR-IOV) defines a method for sharing a physical device by partitioning it into virtual functions. SR-IOV has already been widely used to virtualize network adapters, meaning it provides a programming model that is well understood and thoroughly field tested.

But when applied in a new context to a GPU, the technology gives each virtual machine (VM) or container access to a graphics function with near-native performance.

“SR-IOV reduces the overhead for virtualized environments,” explains Liu. “It makes nearly 100% of the GPU power available for virtualized applications.”
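
On Linux, SR-IOV virtual functions are exposed through standard PCIe sysfs attributes, regardless of the device type. The sketch below shows the general mechanism; the PCI address is a placeholder, root privileges are required, and the exact path and VF count depend on the platform, BIOS settings, and graphics driver.

```python
# Generic Linux SR-IOV sketch; the PCI address is a placeholder and must be
# replaced with the actual GPU's bus/device/function. Requires root.
from pathlib import Path

gpu = Path("/sys/bus/pci/devices/0000:00:02.0")   # typical integrated-GPU address

total_vfs = int((gpu / "sriov_totalvfs").read_text())
print(f"Device supports up to {total_vfs} virtual functions")

# Carve the physical GPU into virtual functions; the hypervisor can then
# pass each VF through to a separate VM or container.
(gpu / "sriov_numvfs").write_text(str(min(7, total_vfs)))
```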

This simplified approach to #GPU acceleration opens up new possibilities for optimizing #industrial workloads. @DFI_Embedded via @insightdottech

SR-IOV in Action: A Performance Game Changer

In the past, the high-performance GPUs needed for industrial workload consolidation were available only as expensive discrete chips, which introduced unwanted cost to industrial systems. But today’s mainstream processors, like 12th Gen Intel® Core embedded processors, offer SR-IOV in their integrated graphics engines, which deliver considerable performance. “The integrated GPUs in Intel® processors provide a cost-effective and reliable solution for industrial computing, eliminating the need for additional discrete GPUs,” notes Liu.

This simplified approach to GPU acceleration opens up new possibilities for optimizing industrial workloads. Liu points to inspection systems as an example. “These systems require substantial computing power for AI-related tasks, such as defect detection and image recognition. With integrated graphics and SR-IOV, these systems can efficiently execute these applications with minimal system complexity,” he says.

A 12th Gen Core processor outfitted with SR-IOV, for example, can support up to four independent displays and seven virtualized functions. Figure 1 illustrates how these capabilities can be accessed by up to seven VMs independently.

Figure 1. Intel® Graphics SR-IOV enables efficient sharing of the GPU. (Source: Intel)

When it comes to real-world applications, the impact of SR-IOV is nothing short of remarkable, Liu explains. As the first company to validate SR-IOV on Intel processors with integrated graphics, DFI demonstrated the performance uptick by running two virtualized Windows 10 operating systems—one with SR-IOV and one without—on its ADS310 microATX board.

In the proof of concept, a video file was streamed from local storage through the two OSs and on to remote displays over Wi-Fi and 100 Mbps HDBaseT Ethernet. The installation without SR-IOV exhibited graphics throughput of roughly 28 fps, while the one equipped with it performed at 60 fps—a common target for smooth graphics rendering.

Of course, the performance augmentation brought about by SR-IOV is not confined to video streaming; the technology can be applied to a plethora of Artificial Intelligence of Things (AIoT) workloads in an industrial context. For example, the technology is at the heart of the DFI virtualized industrial automation and retail solutions.

“You now only need one computer to output to many screens,” Liu explains. “Imagine an industrial product line, where every manufacturing stage has its own display. The displays at every stage may only run temporarily.”

“In the past, such applications needed many computers or one computer with much more powerful and expensive discrete graphics adapter cards,” he continues. “But now, one Intel embedded processor can be used.”

A Promising Future for Efficient AIoT

Intel® Graphics SR-IOV is shaping up to be a potential game changer in industrial automation and AIoT applications. By enabling high-performance applications to run efficiently on integrated GPUs, it’s opening new avenues for efficiency and capabilities.

The potential benefits for AI are particularly enticing. “There are more and more AI applications that need powerful computing,” Liu states. “The applications are more and more complicated, with different functions. So, the new generation of processors and graphics will provide a more flexible and powerful solution for AI and other application needs.”

“With SR-IOV, we will open up a new chapter in IIoT development,” he concludes.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI and CV Power Up the EV Charging Station Boom

As electric vehicle adoption grows, owners clamor for more charging stations, and governments across North America and Europe—eager to fulfill sustainability goals—race to provide them. The U.S. alone will spend $5 billion to create a national network of over 500,000 public chargers by 2030.

Charging station solution providers are also gearing up for the coming expansion, incorporating technology to make management easier and operations more profitable for the EV station owners they serve.

Though it may be hard to imagine now, charging stations may soon compete for customers as fiercely as gas stations do today. Those with glitchy, unreliable technology could lose market share to others with better service and appealing deals. By incorporating features such as modernized payments, fast equipment repairs, and targeted advertising, charging station solution providers can help station owners draw more business and develop new sources of revenue. And they can ensure stations continue to thrive by providing technology that’s easy to use, allowing owners to develop other innovative services to suit future needs.

Tailor-Made EV Charging Station Management

For a complete charging station solution, providers need two basic types of technology: infrastructure—including chargers, inverters, and energy storage systems—and technology to run customer-facing systems and perform analytics. Because solution providers sell to a variety of station owners in different regions, they must be flexible when assembling their charging station packages.

“Companies entering this market want a ready solution, but they also need to adapt to the needs of their customers,” says Maurizio Caporali, Chief Product Officer at SECO, a developer of leading-edge solutions for many industries, from miniaturized computers and fully integrated systems to AI/IoT software.

Providers can offer customers a broad menu of services with SECO's CLEA EV Charging Station solution, which contains customizable modules for everything from managing payments and repairing charging equipment to analyzing sales and setting up advertising.

SECO also partners with charging infrastructure producer Imagen Energy. This gives solution providers the option to purchase an entire “white label” charging station solution from SECO and Imagen and rebrand it as their own. Alternatively, they can use any or all of the CLEA EV Charging Station modules with the infrastructure partner of their choice.

For #EV station owners, providing a user-friendly charging experience is critical. Solution providers can help by enabling remote, edge #AI-based charger maintenance to keep chargers up and running. @SECO_spa via @insightdottech

Enhancing the Customer Experience With Edge AI

For EV station owners, providing a user-friendly charging experience is critical. Solution providers can help by enabling remote, edge AI-based charger maintenance to keep chargers up and running. The CLEA platform can collect data from the inverter’s controller, sending a warning to control room technicians if a serious problem occurs—for example, if an overheated inverter threatens to shut down charging. Technicians can then implement a quick remote fix, allowing customers to avoid frustrating malfunctions.

By analyzing performance data over time, station owners can predict when equipment is likely to fail and schedule maintenance or part replacements in advance to avoid downtime. They can also automate software updates across stations, saving hours of manual work.
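As a rough illustration of the kind of rule involved, the sketch below applies a simple threshold-and-trend check to inverter temperature readings. The field names and limits are hypothetical and are not part of the CLEA platform's actual API; a real deployment would combine many signals, and often a trained model, to decide when to alert or schedule maintenance.

from statistics import mean

# Hypothetical telemetry: (elapsed_seconds, inverter_temperature_celsius)
readings = [(0, 61.0), (60, 63.5), (120, 66.2), (180, 69.8), (240, 74.1)]

CRITICAL_TEMP_C = 80.0        # immediate-alert threshold (illustrative)
WARNING_RISE_C_PER_MIN = 2.0  # warming faster than this suggests scheduling maintenance

def check_inverter(readings):
    temps = [t for _, t in readings]
    minutes = (readings[-1][0] - readings[0][0]) / 60 or 1
    rise_per_minute = (temps[-1] - temps[0]) / minutes

    if temps[-1] >= CRITICAL_TEMP_C:
        return "ALERT: inverter overheating; notify control room for a remote fix"
    if rise_per_minute >= WARNING_RISE_C_PER_MIN:
        return "WARNING: temperature trending up; schedule maintenance before it fails"
    return f"OK (average {mean(temps):.1f} °C)"

print(check_inverter(readings))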

Solution providers can also help station owners select the right payment options for their markets. For example, the CLEA mobile app allows customers to reserve a charging time and make payments. Alternatively, CLEA works with other mobile payment systems, such as Apple Pay, Google Pay, or Venmo. Providers can also implement contactless payments, which allow customers to hold a credit or debit card next to a card reader to complete transactions, saving time and avoiding the errors that can occur with swiping.

Growing Revenue Through Digital Displays

Solution providers can give station owners a competitive edge with digital advertising, which creates a new source of income. Station owners can then expand revenue options by developing their own partner networks.

On the CLEA solution’s eye-catching 32-inch digital screen, station owners can either display traditional ads or use computer vision cameras to capture anonymized demographic information about customers, including their approximate age and mood. This information is sent to CLEA’s cloud-based analytical platform, where station owners can create and manage real-time promotional campaigns on digital-display dashboards.

“From one platform, owners can create multiple campaigns for different locations,” Caporali says.

For example, one station might flash a promotion for breakfast doughnuts at an adjacent coffee shop, while another might offer a deal on fries at a quick-serve restaurant down the road, or a discount on a carton of milk at a local grocery store.

Station owners can also analyze charging sales data on the platform and optimize prices at different locations, offering discounts on the fly to boost demand.

The electronics of the CLEA platform charging kit are powered by Intel processors for high performance and low power consumption. And the computer vision system uses the Intel® OpenVINO toolkit, which speeds up the development of additional AI capabilities.
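As a sketch of what that edge inference looks like in code, the snippet below loads a classification model with the OpenVINO Python API and runs it on a single frame. The model file name, input shape, and interpretation of the output are placeholders; the actual demographics models and preprocessing used in the CLEA kit are not specified here.

import numpy as np
from openvino.runtime import Core

core = Core()
# Placeholder model path; any OpenVINO IR classification model follows the same pattern.
model = core.read_model("demographics_model.xml")
compiled = core.compile_model(model, device_name="CPU")  # or "GPU" for integrated graphics
output_layer = compiled.output(0)

# Dummy frame standing in for a camera capture, in the NCHW layout assumed here.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

scores = compiled([frame])[output_layer]
print("Predicted class index:", int(np.argmax(scores)))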

The Future of EV Charging

As EVs multiply, solution providers may find more ways to help station owners. One possibility is selling data about electricity use to city administrators, who could use it to optimize energy planning and consumption. Cities could also use charging stations’ digital screens to issue warnings about traffic congestion and emergencies.

“Charging stations could become an information hub for creating smart roads and smart cities. We are still in early stages, and the potential is enormous,” Caporali says.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Evangelizing AI: The Key to Accelerating Developers’ Success

AI increasingly is built into mission-critical applications—defect detection in manufacturing, customer-behavior analysis in retail, even traffic detection in smart cities. AI powers it all. But making these capabilities possible—training AI models—and translating them into real business value can take a lot of time and effort.

Fortunately for developers, year after year Intel keeps making advances that render AI more accessible. This year it celebrates the fifth anniversary of the OpenVINO toolkit, as well as the release of OpenVINO 2023.0. And it just released the Intel® Geti solution, specifically designed to make it easier for developers to work with the business side of the equation. Yury Gorbachev, OpenVINO Architect at Intel, and Raymond Lo, AI Software Evangelist at Intel, tell us all about it (Video 1).

Video 1. Intel’s Yury Gorbachev and Raymond Lo discuss the evolution of AI and the role OpenVINO continues to play. (Source: insight.tech)

What recent trends have you seen in the progress of AI?

Yury Gorbachev: This is the mainstream now. Quite a lot of use cases are being solved through AI—customer monitoring, road monitoring, security, checking of patient health—all of those things are already in the main line.

But I think what we are seeing now in the past year is a dramatic change in how AI is perceived and what it is capable of solving. I'm talking about generative AI, and the popularity that we are seeing now with ChatGPT, Stable Diffusion, and all those models. We are seeing image generation. We are seeing video generation. We are seeing video enhancements. We are seeing text generation. All of those things are evolving very rapidly right now. If we look back 10 years or so, there was an explosion in the adoption of deep learning; now the same thing is happening with generative AI.

What can you tell us about developer advancements?

Raymond Lo: To work with developers, I have to be a developer myself. Maybe 10, 12 years ago I built my first neural network with my team. I was trying to figure out how to track a fingertip—just making sure that my camera could understand what I was doing in front of it. It took us three months just to understand how to train the first model. Today, if I give it to Yury, two days later maybe it’s all done. But at that time, building just a very simple neural network took me forever.

Of course, it worked in the end; I learned how it works. But through many years of evolution, the frameworks are available; TensorFlow and PyTorch are so much easier to use. Back then I was doing the computation in my own C++ program. Pretty hardcore, right? Today they have OpenVINO.

Today when I talk to developers in the community, it's OpenML, GPT—everything is in there. You don't have to worry as much, because when you make a mistake, guess what? Ba boom—it will not run anymore, or it'll give you the wrong results. What is valuable today is that I have a set of tools and resources, so that when people ask me, I can give them a quick and validated answer. Today, at Intel, we are giving people this validated tool.

How do you work with developers in building these types of solutions?

Raymond Lo: As I speak with young developers, I listen, right? “What do you need to make something run the way that you need it to?” Let’s say, hypothetically speaking, someone is trying to put a camera setup in a shopping mall. They need to think about privacy; they need to think about heat, if they’re running it on a very power-hungry device and they want to hide it. Some use cases require a very unique system. The users want it to be in a factory and they want it to be on the edge. They don’t want to upload this data; they want to make sure everything happens on-site.

So we think about portfolio, and that’s what Intel has. The more we work with our customers, I think we are trying to collect these kinds of use cases together and create these packages of solutions for them. But I don’t need ultra-expensive supercomputers to do inference.

Yury Gorbachev: I think you’re totally right. The most undervalued platform, I would say, is something that you have on your desk. Most developers actually use laptops, use desktops, that are powered by Intel. And OpenVINO is capable of running on them and delivering quite good AI performance for the scenarios that we are talking about. You don’t need to have a data center to process your video, to perform style transfer, to detect vehicles, to detect people. That’s something we’ve been trying to show to our customers, to developers, for years.

From the business standpoint, the exact same platform runs in the cameras and the video-processing devices and things like that. And it all starts with the very basic laptops that each and every developer has.

“What we are seeing now in the past year is a dramatic change in how #AI is perceived and what it is capable of solving.” – Yury Gorbachev, @intel via @insightdottech

How have you seen OpenVINO advance over the past couple of years?

Yury Gorbachev: Originally we started by developing OpenCV. So we borrowed a lot from OpenCV paradigms, and we borrowed a lot from OpenCV philosophy. With OpenCV we were dealing a lot with computer vision, so that's why initially we were dealing with computer-vision use cases with OpenVINO. And then we started to develop this open-source toolkit to deploy AI models as well.

Then, as years passed, we saw the growth of TensorFlow, we saw the explosive growth of PyTorch. So we had to follow this trend. We've seen the evolution of scenarios like image classification, then object detection, segmentation. We initially made just the runtime; then we started working on the optimization tools, and eventually we added training-time optimization tools.

So, initially we started with computer vision, but then a huge explosion happened in the NLP space, the text-processing space. So we had to change how we processed the inferences in our APIs quite a lot; we changed a lot in our ecosystem to support those use cases. And now we are seeing the evolution of, as I mentioned, generative AI, image generation, video generation. So we adapt to those as well.

We work a lot with the partners; we work a lot across the teams to power those technologies to always have the best-performing framework on Intel. We were looking recently at how regularly we evolved generation over generation, and it wasn’t like 5%, or 10%—sometimes it was two times, three times better than the generations before.

Can you talk about how OpenVINO and Intel® Geti work together?

Raymond Lo: It’s really about having a problem statement that you want to solve. Geti fills in the training gap in between—where you can provide a set of data that you want the algorithm to recognize. It can be a defect, it can be sort of like a classification of a model or of an object. Today we provide that interface; we provide people the tool. And then also the tool has these fine-tuning parameters; you can really figure out how you want to train it.

You can even put it with the data set, so that every time you train it, you can annotate it. We call it an active-learning approach. After you give it enough examples, the AI will figure out the rest of it for you. So that’s what Geti is really about. Now you have ways and ways to tackle this problem—getting a model that is deployable on OpenVINO. 
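The active-learning idea Lo describes can be sketched roughly as the loop below. The helper functions (train, predict_with_confidence, request_annotation) are hypothetical stand-ins rather than the Intel Geti SDK; the point is simply that only low-confidence samples get routed back to a human annotator, while the rest are handled automatically.

# Rough active-learning sketch; every helper here is a hypothetical stand-in, not the Geti SDK.

def train(labeled):                           # stand-in: fit a model on the labeled pool
    return {"trained_on": len(labeled)}

def predict_with_confidence(model, sample):   # stand-in: return (label, confidence)
    confidence = 0.55 if sample % 5 == 0 else 0.95
    return ("defect" if sample % 3 == 0 else "ok"), confidence

def request_annotation(sample):               # stand-in: a human labels the sample
    return "defect" if sample % 3 == 0 else "ok"

labeled = {0: "ok", 3: "defect"}              # small seed set annotated by hand
unlabeled = list(range(4, 20))
CONFIDENCE_THRESHOLD = 0.8

for _ in range(3):                            # a few active-learning rounds
    model = train(labeled)
    uncertain = [s for s in unlabeled
                 if predict_with_confidence(model, s)[1] < CONFIDENCE_THRESHOLD]
    for sample in uncertain:                  # only uncertain samples go to a human
        labeled[sample] = request_annotation(sample)
        unlabeled.remove(sample)

print(f"Hand-annotated: {len(labeled)} samples; handled automatically: {len(unlabeled)}")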

What do you envision for the future of AI?

Yury Gorbachev: It’s hard to really predict what will happen in a year, what potential scenarios will be possible through the AI. But one thing I can say for sure: I think we can be fully confident that all of those scenarios, all of those use cases that we are seeing now with generative AI—the image generation, video, text, chatbots, personal assistants, things like that—those things will all be running on the edge at some point. Mostly because there is a desire to have those things on the edge.

There is a desire to, say, edit documents locally; to have a conversation with your own personal assistant without sending your request to the cloud, to have a little bit of privacy. At the same time, you want to do this fast, and doing things on the edge is usually faster than doing them on the cloud. This is where OpenVINO will play a huge role, because we will be trying to power these things on a regular laptop.

Initially, that performance on the laptops will not be enough. Obviously initially there will be some trade-offs in terms of optimizations versus what performance you will reach. But eventually the desire will be so high that laptops will have to adapt.

Raymond Lo: Like Yury says, it’s very hard to model something today because of the speed of change. But there’s something I can always model: Anytime there’s a successful technology, there’s always an adoption curve, right? It’s called a bound-to-happen trend. “Bound to happen” means everyone will understand what it is. In this 2023 OpenVINO release we hit a million downloads. That is a very important number. It represents that the market is adopting this—rather than something that is great to have, but then no one revisits it.

I can tell you, a year from today we will have better AI. 

What is significant about OpenVINO’s five-year anniversary and the latest release?

Yury Gorbachev: In this release there are continuous improvements in terms of performance. We are working on generative AI—we’re improving generative-AI performance on multiple platforms. But most noticeably we are starting to support dynamic shapes on GPU. We’ve done a lot of work to make it possible to run quite a lot of text-processing scenarios on the GPU, which include integrated GPU and discrete GPU. We’re looking at capabilities like chats, and even they will be running on integrated GPU, I think. There is still some work we need to do in terms of improving performance and things like that. But in general the things that were not entirely possible before now will be possible.

The second major thing—we are streamlining a little bit our quantization and our model-optimization experience. We are making one tool that does everything, and it does this through the Python API, which is friendlier for data scientists. And one feature that I would probably say is a little bit of a preview at this point is that we are starting to support PyTorch models, to convert PyTorch models directly. It's not production ready, but the team is very excited about this.
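For developers curious what that preview looks like, here is a minimal sketch that converts a PyTorch model in memory and compiles it for an integrated GPU. Because the feature is explicitly not production-ready, the entry point shown (convert_model from openvino.tools.mo with an example_input) and its behavior should be treated as an assumption that may change between releases; substitute "CPU" if no GPU plugin is installed.

import torch
import torchvision
from openvino.tools.mo import convert_model   # preview PyTorch front end
from openvino.runtime import Core, serialize

# Any torch.nn.Module works; ResNet-18 is just a convenient example.
pt_model = torchvision.models.resnet18(weights=None).eval()
example_input = torch.randn(1, 3, 224, 224)

# Convert directly from the in-memory PyTorch model, with no intermediate ONNX export.
ov_model = convert_model(pt_model, example_input=example_input)
serialize(ov_model, "resnet18.xml", "resnet18.bin")  # save as OpenVINO IR for reuse

# Compile for the integrated GPU; use "CPU" instead if the GPU plugin is unavailable.
compiled = Core().compile_model(ov_model, device_name="GPU")
print("Compiled model with", len(compiled.inputs), "input(s)")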

Related Content

To learn more about AI development, listen to Accelerating Developers’ AI Success with OpenVINO and read Development Tools Put AI to Work Across Industries. To learn more about the latest release of OpenVINO, visit https://openvino.ai/. For the latest innovations from Intel, follow them on Twitter and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Smart Classroom Solutions Advance Learning Outcomes

Schools everywhere are looking for ways to boost student engagement, enhance teaching quality, and improve learning outcomes. Smart classrooms—suites of networked digital devices, collaboration tools, and learning software—are powerful tools for reaching these goals.

But despite its efficacy, smart classroom technology can be challenging to implement. “For many schools, existing digital learning systems are prohibitively expensive—and their complexity means there’s a steep technological learning curve for teachers,” says Amanda Lin, Marketing Manager at Elitegroup Computer Systems, a hardware designer and manufacturer. “In areas with limited internet connectivity, it’s also difficult to maintain a stable network to operate such solutions.”

But a new wave of smart classroom technology can help overcome these obstacles. Based on performant low-power processors and edge-based design principles, simple, cost-effective smart classroom kits deliver the benefits of digital learning in almost any setting.

Smart Classroom Solutions at Work

The Elitegroup deployment in Kenya is a case in point.

The government of Kenya was launching a digital literacy program that aimed to put information and communications technology (ICT) in every public primary school in the nation. It was a vast undertaking—complicated by the fact that many schools in Kenya are in rural areas with limited network connectivity and nonexistent IT resources.

Elitegroup was selected as a technology partner to help digitize classrooms in a large number of schools—in part because the company’s ECS Smart Classroom kit provides a turnkey solution that can work with or without an internet connection.

At the heart of the Elitegroup solution is a hardware device called a “content management access point” (CMAP), which acts as a wireless access point and file server that stores educational content and connects digital devices in the classroom. With the ability to network up to 50 devices, the CMAP can be used to share a wireless internet connection or to set up a local classroom intranet if a connection is unavailable (Video 1).

Video 1. Smart classroom solutions enable interactive learning experiences and better classroom management. (Source: COE GP)

Working with the Kenyan government and local partners, Elitegroup helped equip 13,500 classrooms with #SmartClassroom #technology, facilitated training for 30,000 teachers, and impacted 695,000 students. @ECS_GlobalHQ via @insightdottech

Working with the Kenyan government and local partners, Elitegroup helped equip 13,500 classrooms with smart classroom technology, facilitated training for 30,000 teachers, and impacted 695,000 students. During the main implementation phase of the project, Elitegroup and its partners set up an average of 500 classrooms per week.

This was clearly a team effort, but the work was also simplified by the kit’s technology stack—designed to enable plug-and-play setup, and stable, no-fuss classroom networks. Lin credits Elitegroup’s technology partnership with Intel as a significant factor in making the kit so user-friendly: “Intel processors and Wi-Fi modules provide cost-effective computing power and stable connectivity. And the Intel CPU in the CMAP provides the high-efficiency, low-power processing that’s needed to get through an entire school day.”

Bridging Digital and Economic Divides

The most readily apparent benefits of low-cost, easy-to-deploy smart classroom kits are the educational ones. Students gain familiarity with technologies they need to master to compete in a 21st-century global economy. Teachers get help improving the quality of their lessons through engaging educational software and digital classroom management tools. Those are important outcomes in any school system. In developing nations they’re especially impactful, as they help to bridge the digital divide between local students and their peers in developed economies.

But the socioeconomic benefits of smart classrooms to a community can also go well beyond the sphere of education. In the Kenya deployment, for example, Elitegroup worked with its technology partners and other stakeholders to set up a local hardware assembly plant as well as a service call center at Kenya's Moi University.

“Smart classroom technology offers opportunities for truly transformational partnerships,” says Lin. “Our goal in Kenya was not only to work closely with teachers and students on technology training—but also to increase business and job opportunities in the local economy.”

Smart Classroom Solutions and the Future of Education

In the future, expect more school systems to take advantage of opportunities provided by all-in-one smart classroom kits—and not just in emerging markets. The same benefits that make this technology such a game changer in remote and underserved regions are equally attractive in San Francisco, London, and Tokyo. Cost-effective, easy-to-manage smart education capabilities—bolstered by a rich ecosystem of educational software—should grab the attention of teachers and administrators concerned with budgets, student engagement, and educator empowerment.

And in the longer term, smart classrooms will become even more potent as they leverage technological advancements to deliver better educational experiences. “We’re already thinking about how to enhance our smart classroom solution by incorporating AI functionality and offering support for hybrid learning,” says Lin. “And the proliferation of AR/VR technology and 5G connectivity will drive new smart learning features in the future.”

For schools, parents, and students, that’s an A+ technology trend, because smart classrooms will help make quality education affordable and accessible for all.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

The Future of Video Analytics Is Virtualized

Cameras are all over modern life—and we’re not just talking selfies. They’re ubiquitous in security, manufacturing, traffic regulation. And with lots of cameras come lots of data: data to be collected, data to be accessed, data to be analyzed as it becomes more and more important for businesses to be able to extract its value. But after it’s been collected and before it’s been analyzed, it has to be stored. And that, it turns out, can be a real problem.

Virtualization might just be the solution to that problem, and systems integrators are rapidly figuring that out, says Darren Giacomini, Director of Business Development at BCD, a video storage solution provider. Here he discusses this transition to virtualization, and the benefits it can bring to both businesses and the general public—from a foil to cybercrime to an empty parking space (Video 1).

Video 1. BCD’s Darren Giacomini discusses why businesses should start making the switch to virtualization. (Source: insight.tech)

What are some trends in the way we benefit from video camera systems?

Video camera systems and analytics are becoming incredibly powerful. Cameras can pull a lot of analytical data and metadata in to be analyzed, and there are a lot of applications we’re seeing for cameras specifically in IoT devices that reside at the edge of networks. People are building in analytics to do things like count objects in manufacturing, but you’re also starting to use cameras to set up IoT devices at the edge in smart cities that can, for example, look at parking spaces and parking lots and determine what’s available. You can now open your smartphone and check for a spot there, instead of endlessly driving around looking for a place to park.

What challenges do businesses face with their camera systems?

You’re seeing a trend where people are starting to expand their retention periods—how long you have to keep the video at full frame rate. In some correctional facilities they want to keep it for two years for its evidentiary value.

When you start talking about holding high-quality video for that time frame, you’re talking about an enormous amount of storage—petabytes and petabytes of storage. When you look at smart cities that may have thousands of cameras throughout their environs, storing all the data from all those cameras all the time can become not only incredibly expensive but difficult to maintain properly.

So a lot of what you’re seeing in the movement to 5G networking and IoT devices at the edge is about people trying to push the decision-making out to the edge in order to determine what’s important for video and what’s not. A little-known fact is that, in most cases, maybe only 5% to 10% of the video that’s recorded is ever used. The rest of it is just being stored.

For instance, you can run a search over a two-year period of data that says: I want all white trucks that went this direction, during this time frame, on any day. And you can pull that video back and see it, based on the analytics. But the idea of doing that at the edge for 5G is that, if you can determine what’s important and what’s not important, then you don’t have to store everything else. I think analytics is going to play a huge, huge part in trying to scale back the massive amount of data and resources that we’re currently seeing.
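Expressed as code, the kind of search Giacomini describes runs against compact metadata records produced by the analytics rather than against the raw video itself. The record fields below are hypothetical, but they show why keeping searchable event metadata is so much cheaper than keeping every frame at full quality.

from datetime import time

# Hypothetical per-event metadata emitted by edge analytics; raw clips stay in tiered storage.
events = [
    {"object": "truck", "color": "white", "direction": "northbound",
     "date": "2023-03-14", "time": time(7, 42), "clip_id": "cam12-000318"},
    {"object": "car", "color": "red", "direction": "southbound",
     "date": "2023-03-14", "time": time(8, 5), "clip_id": "cam12-000321"},
    {"object": "truck", "color": "white", "direction": "southbound",
     "date": "2023-06-02", "time": time(17, 30), "clip_id": "cam07-001944"},
]

def find_clips(events, object_type, color, direction, start, end):
    """Return clip IDs matching the attribute filter, on any date in the retention window."""
    return [e["clip_id"] for e in events
            if e["object"] == object_type
            and e["color"] == color
            and e["direction"] == direction
            and start <= e["time"] <= end]

# "All white trucks that went northbound between 7:00 and 9:00 a.m., on any day."
print(find_clips(events, "truck", "white", "northbound", time(7, 0), time(9, 0)))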

“It’s really about utilizing your resources more efficiently. Also, when you talk about #virtualization you’re talking about the ability to create recovery points and snapshots.” – Darren Giacomini, @BCDVideo via @insightdottech

And I think the whole approach is going to change. Today the idea is: Keep everything for two years. And over the years we’ve seen that people have changed the rules a little bit. Everything has to be kept at 30 or 60 frames per second for maybe six months. And then it drops down to 15 frames. But what we can’t do is drop it below the threshold needed for evidentiary value in municipal courts, so you can actually identify what you’re looking at.

Why is data storage such a problem for businesses in the video space?

Standard production servers more often than not cater to the IT environment. In an IT world you have a lot of data that’s stored in a repository or data center, and the data’s going outward to a few selected individuals who request it at a given time.

But with physical security, for example, you have hundreds and thousands of cameras and IoT devices simultaneously bringing data inward. And what we do at BCD is specialize in redesigning and re-implementing those particular devices to make sure that they’re optimized for that particular type of application.

What’s the role of virtualization in easing some of this storage congestion?

It’s the utilization of resources. In a typical physical security environment, you’re going to have cameras that reside at the edge, or IoT devices or sensors that are bringing data back. You’re going to have a network infrastructure—whether it’s wireless or a hardwired network infrastructure—that’s going to bring it all back to a centralized or decentralized point of recording where it’s stored. In some cases you may also have second-tier storage behind that.

And then you have people who are actually watching the cameras, and who need to see a particular event. You’ve got a car accident; you need to be able to pull up the appropriate video in real time and actually see what’s going on. That requires taking the data and either bringing it directly from the camera or redirecting it through the server out to the workstation. All of that utilizes resources.

But the most important segment in there is where the video is stored. Servers have finite resources: You have CPU, you have memory, you have network resources. When you’re doing a bare-metal server approach and you’re not virtualizing, you may be leaving 40% or 45% of the CPU cycles in the cores allocated to that server unutilized. And it has nothing to do with the server’s capability in itself; it may have to do with the fact that you’re running on Windows Server 2019, or whatever, and you can only load one instance of that software application.

So virtualization allows you to add in an abstraction layer—like VMware ESXi, Hyper-V, or Nutanix—on top of the bare-metal server as an archiver. You virtualize maybe the directory servers or the access control into a flat-file structure that can be stored at a common share point. Then you have the ability to create more than one instance of the application on that machine. So instead of running just one instance of Windows Server 2019 on a server, maybe you run five, and you divide the resources up. Then you can take the CPU and memory that wouldn’t traditionally be utilized and get more production out of what you bought.
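To put rough numbers on that argument, the back-of-the-envelope calculation below compares one application instance per bare-metal server against stacking the same instances as virtual machines on shared hardware. The figures are purely illustrative and are not measurements from any BCD deployment.

# Illustrative numbers only: consolidating single-instance recording servers into VMs.
instances_needed = 5       # e.g., five recording applications, one per site or system
load_per_instance = 15     # percent of one server's CPU each instance actually uses
hypervisor_overhead = 5    # percent of CPU reserved for the abstraction layer

# Bare metal: one OS and one application instance per box, so five mostly idle servers.
servers_bare_metal = instances_needed

# Virtualized: all five instances share one server, plus hypervisor overhead.
total_load = instances_needed * load_per_instance + hypervisor_overhead
servers_virtualized = 1 if total_load <= 100 else 2

print(f"Bare metal: {servers_bare_metal} servers at roughly {load_per_instance}% CPU each")
print(f"Virtualized: {servers_virtualized} server at roughly {total_load}% CPU")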

Naturally you’d think—for a company like BCD, where we sell high-performance servers—that that would be something we wouldn’t want to happen, but it’s going to happen regardless. Virtualization has been in the IT field for a very long time; there’s nothing you can do about it. You have to embrace the fact that people are wanting to do more with less.

How can businesses move to virtualization successfully?

It’s very similar to what’s already happened with the shift from analog to digital or IP. I think there are going to be a lot of the same growing pains, where people who have a particular skill set that they’re used to today are going to have to modify their approach. But if you don’t modify your approach, where you’re really going to feel it is in your pocketbook.

Because the fact of the matter is, if you’re quoting eight servers and I can do the same thing with three, I’m going to outbid you—even if we’re close, or even if I’m a little bit more. When I go back to the people who are making the decisions on the financial side and tell them, “Take a look at the total cost of ownership of running eight servers versus three. This is going to be more efficient; this is going to take up less real estate in your data center; this is going to take less power to keep it cool.” All these things come into play.

And I think there’s going to be a little bit of a struggle, but I don’t think it’s going to be as big as with analog to digital or IP. Everybody’s used to working with Windows now; everybody’s used to working with servers. It’s just that next step of learning that, instead of plugging into the front of a server, you’re going to get into a webpage that’s going to represent that server, and that’s the virtual machine that you’re going to work off of. It will be an educational experience, and people are probably going to have to send their employees out for some training to come up to speed on it.

Do you have any use cases you can share?

There’s a city on the East Coast where the entire municipality is running off of our Revolv platform, a hybrid hyperconverged approach based on virtualization that provides high availability. It has literally cut down on getting people up at 2:00 or 3:00 in the morning to deal with some server that had lost a power supply, because the platform performs automated recovery for you. And we gave them a much, much smaller footprint—4 or 5 servers to run everything in a virtualized environment versus 20 or 25.

What role does Intel technology play in making that happen?

One of the things that sets BCD up as a differentiator is that we don’t just build servers; we really do holistic, end-to-end solution sets. And that also plays a part in this virtualization that we’re all headed towards. That’s where the partnership with Intel comes in. Intel has been really good about providing resources, like these 100 gig NIC cards—which are not cheap—and other resources, that we need to do analytics and things of that nature to help push the envelope.

Beyond the physical aspects, where do you see virtualization going?

It’s really about utilizing your resources more efficiently. Also, when you talk about virtualization you’re talking about the ability to create recovery points and snapshots. We partner with another company called Tiger Technology that is, in my opinion, absolutely outstanding at looking at next-generation hybrid-cloud storage. That means that you’re going to have an on-prem presence with the ability to get hooks into the NTFS or the actual file structure inside of Windows and make it an extension of the platform. So at any given time you can take backups, or you can take multiple instances of backups, and push those out to the cloud for recovery. You really can’t do that type of thing in a bare-metal environment.

If you can take a snapshot and you can create a repository of snapshots, when something goes wrong then… Because what is your disaster-recovery plan? What is your business-continuity plan if you get hit? And the fact of the matter is that everybody is going to get hit. I don’t care how careful you are, there are zero-day instances out there that are going to hit you at some point.

Mostly what you’re seeing today is ransomware. So when you’re in a virtualized environment and taking regular snapshots you can actually say, “This is an acceptable loss. Roll the snapshot back one week. Let’s take the one-week loss rather than paying.”

Is there anything else about this topic you think we should know?

It just really comes down to the fact that if you’re an integrator out there and you deal in the physical security market you can ill afford to ignore the fact that virtualization is coming. I predict that in the next three to five years the large majority of the market is going to hit mainstream virtualization. And you either need to get on board with that, or you’re going to find yourself in a situation where it’s going to be very, very difficult to be competitive in the market.

Related Content

To learn more about virtualization within the video analytics space, listen to The Impact of Virtualization on Video Analytics: With BCD. For the latest innovations from BCD, follow them on Twitter at @BCDvideo and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.