Giving the Green Light to AI Video Analytics

While a few cities in the world are beginning to share their streets with driverless cars, the rest of us are stuck in traffic with all the other humans behind the wheel. And these drivers may or may not pay attention to the rules of the road or look out for pedestrians and bicycles.

Fortunately, smart cities are taking important steps to improve these matters (while we wait to be bumped into the backseat), and AI and video-management solutions play a big role in that transition. The data that cameras collect provides a crucial window into driver and non-driver behavior, and analysis of that data can yield powerful insights and solve real-world problems—like snarled traffic and tragic accidents.

But there are many applications for AI video analytics in contexts beyond traffic jams—from personal protective equipment detection to retail—as Srivikraman Murahari, Vice President of Products and Strategic Alliances at Videonetics, a video management, video analytics, and traffic management provider, explains (Video 1). He also addresses using partnerships to create end-to-end solutions, the privacy and security concerns around collecting all this data, and the power of AI video analytics to affect our everyday lives.

Video 1. Srivikraman Murahari from Videonetics discusses how AI video analytics empowers communities within smart cities. (Source: insight.tech)

What are the challenges that AI video analytics can help address in urban planning?

One of the major challenges that government officials and city planners see is in citizen noncompliance with traffic rules, and this can cause accidents and even fatalities. So that puts pressure on government officials to streamline the traffic situation and smooth the traffic flow.

There are now 100-plus smart cities where our Videonetics intelligent traffic-management solution is deployed, and it has very powerful traffic-analytics capabilities. We also provide smart visualization for government officials, which gives a lot of insights for taking further action. I can confidently say that the traffic flow in those 100-plus smart cities is smoother and more streamlined now, and more awareness has been created among citizens to adhere to the traffic rules.

What are the challenges to implementing AI video analytics in smart cities?

One challenge is field of view—with cameras, the field of view is restricted. We are exploring methods such as sending drones to difficult places to capture the video. So we are looking at many innovative ways for the cameras to reach difficult areas.

How do you implement this technology while balancing citizen privacy?

That’s a good question. I would say that we have to enforce responsible and collaborative AI. And when I say collaborative AI, I mean that the government officials, the independent software vendors like us, and the citizens should all know what is happening, should know how the data is getting used. There should be a very transparent data policy. The second thing I would say is to use minimized, anonymized data. That means not storing so much data, and then the data that is stored should be anonymized.

At Videonetics we have very, very strict security standards. For us everything is objects, and we don’t have any people data. We abide by international security compliance standards, and we follow strict protocols in the way we handle the data. We are transparent and ensure data safety and compliance with those international standards. That’s how we handle it, and those are my suggestions.

“Adopt #data and #technology—including responsible and collaborative technology, and responsible and collaborative #AI” – Srivikraman Murahari, @videonetics via @insightdottech

Can you provide some examples of deploying AI video analytics in a smart city?

As I mentioned, we’ve deployed our platform in 100-plus smart cities, and it has helped smooth and streamline the traffic and ensure the safety of citizens. For smart cities, we are number one in India. I can talk about a case study in one of the premier cities in India.

In that city there are about 400 cameras monitoring the traffic and another 700 cameras in the pipeline—so I’m talking about 1,100 cameras monitoring the city’s traffic, ensuring lane discipline, one-way traveling, etc. It has eased the work of administrators in smoothing traffic flow.

As far as implementation is concerned, we have collaborations with all the leading camera vendors around the world. For each project we decide on the most suitable camera along with the systems integrator and the partner involved in that project. The analytics then happen on the edge. For edge we use the Intel platform extensively—the Intel® Core i5, i7, and i9 series, as well as the latest-generation chipsets, 11th to 13th Gen. And then in certain scenarios we have the cloud for storage.

Coming to the question of how to do it efficiently, our R&D puts continuous effort into that; we have a dedicated effort into optimizing the compute. And I can say we have traveled a long way from the time we started. Now we have, let’s say, 20x or 30x improved computing efficiency. We are looking at how to use far fewer frames of the video to detect an event, instead of processing the entire video. We are also looking at collaborating with partners and using their latest technologies, platforms, and solutions to optimize performance and computing power.
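As a rough illustration of the frame-sampling idea Murahari describes, a pipeline can analyze only every Nth frame and flag large scene changes, rather than running detection on every frame. The sketch below models frames as flat lists of pixel intensities; the stride, threshold, and detection logic are illustrative assumptions, not Videonetics’ implementation.

```python
# Illustrative sketch (not Videonetics' implementation): detect candidate
# events by analyzing only every Nth frame. Frames are modeled as flat
# lists of pixel intensities for simplicity.

def frame_diff(a, b):
    """Mean absolute pixel difference between two frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_events(frames, stride=5, threshold=30.0):
    """Scan sampled frames; return indices where a large change occurs,
    plus how many frames were actually analyzed."""
    events = []
    prev = frames[0]
    analyzed = 1
    for i in range(stride, len(frames), stride):
        analyzed += 1
        if frame_diff(prev, frames[i]) > threshold:
            events.append(i)
        prev = frames[i]
    return events, analyzed

# A static scene for 12 frames, then a sudden change:
static = [[10] * 64 for _ in range(12)]
changed = [[200] * 64 for _ in range(8)]
events, analyzed = detect_events(static + changed, stride=5)
# With a stride of 5, only 4 of the 20 frames are analyzed, yet the
# change is still caught (at sampled frame 15).
```

In a production system the flagged index would trigger dense, per-frame analysis around that point, which is how sampling can cut compute without missing events.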

What are the benefits of partnering with other companies like Intel?

The partnership with Intel has been great, very exciting, because we are focusing more on and traveling more in the direction of analytics on the edge. And that’s a direction that Intel is also promoting—more analytics on the edge, more analytics by CPU. So Intel is our best, our top partner in this direction, a direction that matches both organizations.

Secondly, we have used Intel’s OpenVINO platform—the OpenVINO deep-learning platform. It enhances models using techniques such as post-training optimization and neural-network compression. These things reduce TCO for the customer because computing efficiency is enhanced. Another great thing to mention about Intel is the Intel® DevCloud platform, which is always available for us to benchmark our latest models. As we speak, our models are being benchmarked on the 11th to 13th generation series of Intel chipsets.
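A core piece of the post-training optimization Murahari mentions is quantization: mapping float32 weights to int8 so the model is roughly 4x smaller and can run on fast integer CPU paths. The sketch below shows the basic symmetric-quantization math as a conceptual illustration; it is not the OpenVINO API, and the example weights are made up.

```python
# Conceptual sketch of post-training quantization (the idea behind tools
# like OpenVINO's optimizer and neural-network compression): represent
# each float weight as an int8 value plus a shared scale, w ~= q * scale.
# This is an illustration, not the OpenVINO API.

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# q holds small integers; restored is close to the original weights,
# which is why accuracy loss from post-training quantization is small.
```

Real toolchains add calibration data, per-channel scales, and accuracy checks on top of this basic idea.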

And I’m very happy to announce that we won the 2023 Intel Outstanding Growth ISV Partner Award for outshining the competitors, as well as enabling Intel to onboard more partners. So it has been a very long and successful journey with Intel for us.

What other use cases for AI video analytics can we look forward to?

Outside of smart cities, we are in quite a number of verticals. The biggest space there is aviation and airport security, where we are helping more than 80 airports with analytics, such as being able to quickly detect smoke and fire. And then there are enterprises such as oil and gas and thermal power—and smoke and fire are pretty dangerous there, too. These kinds of video-analytics applications have been quite a hit and create a lot of value for these enterprises.

We have our own deep-learning platform, called Deeper Look, with about 100 video-analytics applications developed on it. They cover a wide range of analytics, including crowds, vehicles, mass transport, women’s safety, and retail. In retail we do a heat map that gives owners insights to help them understand selling patterns in their stores. In the case of mass transport, most of India’s railways use Deeper Look. Another very widely employed use case is PPE detection, which helps with the safety of workers. There is also banking and finance. One other interesting area we support is forensic research, which is very useful for investigation.
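The retail heat map Murahari describes can be built by accumulating person-detection positions into a coarse grid of store zones. The sketch below is a hypothetical illustration of that aggregation step; the coordinates, cell size, and helper names are assumptions, not part of the Deeper Look platform.

```python
# Hypothetical sketch of a retail heat map: accumulate person-detection
# centroids (x, y in meters, as a video-analytics pipeline might emit)
# into grid-cell counts, revealing where shoppers dwell.

from collections import Counter

def build_heatmap(detections, cell=2.0):
    """Map (x, y) centroids to counts per grid cell of size `cell` meters."""
    grid = Counter()
    for x, y in detections:
        grid[(int(x // cell), int(y // cell))] += 1
    return grid

def hottest_cell(grid):
    """Return the grid cell with the most detections."""
    return max(grid, key=grid.get)

# Simulated centroids: most shoppers cluster near one display area.
points = [(5.1, 3.2), (5.4, 3.8), (4.9, 2.9), (1.0, 1.0), (5.5, 3.1)]
heat = build_heatmap(points, cell=2.0)
```

Rendered over a store floor plan, counts like these become the color intensities store owners use to compare product placements.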

Any final thoughts or key takeaways?

My key takeaway would be to adopt data and technology—including responsible and collaborative technology, and responsible and collaborative AI—to increase the vigilance of governance, to increase the operational efficiency of enterprises, to enhance the safety of people, and to go beyond security.

Regarding computing, we have to continuously invest in optimizing computing power; we have to be open with our APIs; and we have to show a lot of openness so that our platforms are easily interoperable with third-party vendors. That is also quite important.

And finally, I repeat: Ensure responsible and collaborative AI, and take the administrators and citizens into confidence. Video and IoT are an excellent combination, and there can be lots of use cases that will enrich the quality of human lives.

Related Content

To learn more about AI video analytics in smart cities, listen to AI Video Analytics Empower Communities: With Videonetics. For the latest innovations from Videonetics, follow it on Twitter and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Partnerships Solve Medical Device Manufacturing Challenges

A medical device revolution is currently in progress, driven by a globally aging population, a surge in chronic illnesses, and an escalating need for diagnostic medical solutions. As a result, solution providers are eager to satisfy this new demand and actively seek avenues to gain a competitive advantage and accelerate development.

This fervent pursuit of progress has led medical device manufacturers to continuously seek superior hardware components. A clear example of this trend is the heightened emphasis on implementing enhanced embedded storage options. To illustrate, many manufacturers are shifting away from outdated hard disk drives (HDD) and consumer-grade removable media cards, pivoting instead toward solid-state drives (SSDs), particularly the compact form factor of single-chip SSDs.

“SSDs offer significantly swifter and more reliable storage capabilities than HDDs or SD and CF cards,” explains Jason Chien, Director of Embedded Product Marketing at Silicon Motion, a developer of NAND flash controllers for SSDs and other solid-state storage devices. “And in medical device manufacturing, single-chip SSDs are preferred due to their compact form factor.”

But the road to sourcing optimal components for medical devices is riddled with substantial challenges. Given the high stakes in the medical sector, stringent criteria for hardware reliability, performance, and data security are paramount. The operational environments can also be harsh, often necessitating customized configurations. Consequently, identifying a fitting SSD solution is rarely as simple as ordering a generic product from a catalog.

In light of these complexities, hardware experts collaborate closely with medical device manufacturers to deliver tailor-made SSD solutions that meet the medical sector’s demands. These strategic partnerships not only expedite time-to-market for advanced medical equipment but also curtail expenses and conquer the most formidable technical hurdles. Importantly, they lay the groundwork for the imminent influx of medical AI applications.

These strategic #partnerships not only expedite time-to-market for advanced #medical equipment but also curtail expenses and conquer the most formidable #technical hurdles. Silicon Motion Inc. via @insightdottech

Delivering SSDs for Advanced Medical Equipment

To appreciate the significance of these collaborations, it’s vital to acknowledge the limitations inherent in off-the-shelf SSDs. One primary catalyst for SSD adoption across industries has been the increasing affordability of NAND flash memory—the vital non-volatile storage component within modern SSDs. This affordability shift enabled numerous sectors to transition from HDDs to SSDs.

Nonetheless, a concern arises as NAND flash providers strive for higher memory cell density to reduce costs, leading to diminished quality and durability of the NAND in SSDs. While this might not pose a substantial issue for consumers or certain industrial applications, it’s a genuine concern within the medical context.

In response, SSD specialists like Silicon Motion have developed innovative solutions, such as the FerriSSD series of single-chip embedded storage. This series incorporates proprietary technologies that vigilantly monitor SSD NAND flash component health, taking corrective measures as necessary. Thus, the operational life of an SSD can extend significantly beyond that of the NAND component, fulfilling the data integrity requisites of medical device manufacturers.
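One common way firmware extends drive life beyond the raw endurance of the NAND is to track correctable-error counts per block and retire worn blocks before they fail outright. The sketch below illustrates that general idea in simplified form; the threshold, class names, and structure are assumptions for illustration, not Silicon Motion’s proprietary method.

```python
# Illustrative sketch (not Silicon Motion's proprietary technology):
# a firmware-style health monitor that counts ECC-corrected read errors
# per NAND block and retires a block once it looks worn, before an
# uncorrectable failure can occur.

RETIRE_THRESHOLD = 3  # correctable errors tolerated before retirement

class BlockHealthMonitor:
    def __init__(self, num_blocks):
        self.errors = [0] * num_blocks
        self.retired = set()

    def report_correctable_error(self, block):
        """Record an ECC-corrected error; retire the block if worn."""
        if block in self.retired:
            return
        self.errors[block] += 1
        if self.errors[block] >= RETIRE_THRESHOLD:
            # Real firmware would relocate the block's data first.
            self.retired.add(block)

    def healthy_blocks(self):
        return [b for b in range(len(self.errors)) if b not in self.retired]

monitor = BlockHealthMonitor(num_blocks=4)
for _ in range(3):
    monitor.report_correctable_error(2)  # block 2 is degrading
monitor.report_correctable_error(0)      # block 0: a single stray error
```

Because data is moved off a block while its errors are still correctable, the drive keeps delivering intact data even as individual NAND cells wear out.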

Beyond safeguarding data integrity, Silicon Motion’s SSD lineup encompasses cybersecurity and data privacy features that are crucial in today’s high-risk threat landscape. Full-disk encryption ensures data confidentiality, adhering to both the TCG Opal 2.0 and AES 256-bit encryption standards. Moreover, digital signature technology fortifies against cyberattacks targeting the SSD’s firmware, ensuring malicious actors can’t tamper with or compromise it.

“Consumer-grade SSDs suffice for specific scenarios,” says Chien. “But medical devices, precision manufacturing, and applications demanding elevated performance, security, and stability necessitate a more sophisticated solution.” (Video 1)

Video 1. Single-chip SSDs are an attractive choice for medical device manufacturing and other industries where reliability is paramount. (Source: Silicon Motion)

Medical Device Manufacturers Overcome Complex Challenges

Medical device manufacturers realize substantial advantages by employing SSD solutions meticulously designed for their unique needs. Equally noteworthy are their collaborations with hardware specialists, which empower them to surmount even the most intricate technical challenges.

For instance, consider Silicon Motion’s experience partnering with a manufacturer of advanced medical equipment emitting electromagnetic radiation (EMR). EMR is prevalent in medical settings due to procedures like CT scans and MRI machines. But the electromagnetic interference (EMI) emanating from this manufacturer’s equipment induced a high frequency of soft errors, jeopardizing the stability of the memory cells within microchips.

Silicon Motion’s engineers ingeniously devised customized hardware and firmware impervious to EMI and equipped to swiftly recover from soft errors. The outcomes were striking—reducing the manufacturer’s soft-error rate by a remarkable 96%, ensuring the essential medical equipment remains operational when needed most.

Partnerships between hardware experts and medical device manufacturers yield the remarkable capability to engineer bespoke solutions, a feat that Chien emphasizes as being pivotal: “We possess the ability to tailor our hardware and firmware to meet unique customer requirements, whether it involves enhancing reliability or accommodating design constraints.”

This customization is bolstered, in part, by Silicon Motion’s technology collaboration with Intel. Chien notes, “All our products are developed based on the Intel platform. It’s widely adopted in advanced medical equipment, minimizing compatibility issues while providing exceptional stability and robustness.”

The Bright Future of Medical Device Manufacturing

Collaboration between medical device manufacturers and hardware specialists already delivers tremendous value. In the years to come, manufacturers stand to gain even more advantages from these symbiotic partnerships.

Anticipating a future where advanced medical equipment continually evolves on-site, Silicon Motion has proactively equipped its SSDs and firmware to support Tesla-style over-the-air (OTA) updates.

Furthermore, Silicon Motion is gearing up for the ascendance of medical AI in the Internet of Things. Chien asserts, “We are actively exploring methods to customize our hardware and firmware to optimally accommodate AI applications. As technologies evolve, so do the demands of medical devices. AI and IoT represent the future of medical applications, and we’re collaboratively shaping this future with our medical equipment partners.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Real-Time Automatic Transcriptions Keep Data at the Edge

With today’s hybrid workplace model, the amount of time spent in meetings has skyrocketed. The reality is that employees and coworkers are in all different locations, which requires more meetings to ensure everyone is on the same page and getting the job done. But while this solves one problem, it also comes with many unintended consequences.

For instance, meeting participants now have to spend more time focusing and taking notes during the meeting—which can be mundane—instead of spending their time on valuable parts of their day-to-day job. And while there are automatic transcription solutions available, this comes with its own set of data privacy, connectivity, and time challenges.

That’s why integrated technology and services provider Cedat85 set out to create a cutting-edge solution that encompasses all the requirements of a contemporary AI-driven solution, but with the maximum security and offline capabilities businesses require today. CABOLO One was developed as a speech-to-text device that offers a range of services, including real-time recording, automated transcription, translation, archiving, and indexing, all performed at the edge.

Automatic Transcription for Confidential Meetings

The idea for CABOLO One emerged after receiving vital feedback from Cedat85’s global customer base. End users expressed a need for enhanced privacy and security while transcribing and translating sensitive meetings—all while operating offline, aligning with the stipulations of the European Union’s General Data Protection Regulation (GDPR).

In response to these imperative considerations, Cedat85’s engineering team meticulously crafted the CABOLO Suite.

The solution leverages the edge to ensure only the owners of the data can access it. Meeting participants, whether in the same room or remote, connect to CABOLO One from a PC, phone, or tablet by joining the device’s WiFi hotspot with a security password.

Keeping data at the edge also ensures that no cloud provider, or even Cedat85, can access it. Automatic transcription downloads are encrypted with AES-256, one of the strongest encryption standards commercially available.

But while the need for confidentiality drove the solution, Cedat85 also saw an opportunity to provide advanced capabilities for inclusion and accessibility. Giving proper access to all users became a priority during the COVID-19 pandemic, when working from home and remote learning received a major boost. Remote connections for work meetings and employee collaborations became the norm, but not everyone could participate.

“When we had the first lockdown in Italy, one of our customers who’d been using our solutions for many years said, ‘We cannot support our hearing-impaired employees. We cannot send a translator or a sign interpreter to their house,’” says Selena Gray, Marketing Director at Cedat85. “Those employees had been left out of meetings, but with this technology, they were able to attend all of their meetings as usual.”

Since CABOLO operates at the edge, it eliminates latency challenges caused by poor internet connections. And because it’s automated, it saves organizations the time and expense of transcribing minutes from meetings.

“This is like a meeting assistant. You just run it in the background, it takes notes word by word, indexes and creates an archive for you while being offline,” says Gray.

While the need for #confidentiality drove the solution, Cedat85 also saw an opportunity to provide advanced capabilities for #inclusion and #accessibility. Giving proper access to all users became a priority. Cedat85 via @insightdottech

Digital Enablement for Various Sectors

CABOLO One’s roster of clients has grown steadily as organizations in various sectors embrace digital enablement as part of their business strategies. It’s in use at banking and finance multinationals, pharmaceutical companies, Italy’s Chamber of Deputies, and several universities.

Beyond recording and transcription services, Cedat85 also ensures CABOLO can transcribe text in more than 30 languages, and translate up to 60 languages.

The use of the core technology can be found in the European Parliament, where the organization uses CABOLO One for real-time transcription and translation across the 24 languages spoken within the EU.

The technology is especially beneficial to students with hearing impairments and those who speak a different first language. In a university, it can send subtitles to a screen as students follow a lecture in real time, whether they are in the classroom or connected remotely. Translation to a student’s first language is also available. In countries such as the U.K., where foreign students account for sizable portions of campus populations, translation capabilities are especially welcome.

“The moment someone is saying any word, it will be already in front of you, transcribed, subtitled, and in some cases, translated,” says Gray.

Bocconi University in Milan deployed the device for inclusion and accessibility by subtitling university-wide events. Initially, the university used the tool only in Italian, but subsequently asked for other languages. They ended up adopting an all-in-one system that allows immediate subtitling in multiple languages.

It can also be used in research facilities like pharmaceutical companies and universities. CABOLO One keeps a record of discussions that participants can reference later to follow up on action items. “They can record all of their sessions, whether it’s brainstorming, talking about technology, or talking about how to develop something together uniquely. They can have every detail that they’ve discussed, and can go back to it or even search for certain terms and words within the session,” Gray says.

Finding the Right Partner to Bring Data to the Edge

From the inception of the CABOLO One project, it was evident that a partner offering global reach, reliability, innovation, and collaboration was imperative for the successful deployment of the technology. After an extensive and meticulous evaluation process, the engineering team pinpointed Intel as the ideal collaborator embodying all these essential attributes. Recognizing Intel as the pivotal partner in this endeavor, the stage was set to bring this vision to fruition and deliver the utmost in technological excellence to customers.

Thanks to Intel technology, the company has been able to offer the solution as a standalone edge device that can keep growing in functionality. Cedat85 has already developed a video version that includes transcription with subtitles, translation, and keyword searchability. And with the use of the Intel® Distribution of OpenVINO Toolkit, Cedat85 can ensure its speech-to-text device is performant, efficient, and accurate.

“Being a partner with Intel means we have a platform where we can continue to build different solutions, approach different needs, and evolve our research and development activities,” says Gray.

The company plans to expand on CABOLO One’s advanced capabilities with AI-driven automated creation of bullet points to summarize events and a synthetic voice that can support meetings in real time.

“This is a technology to help everyone. This device is not going to make anyone redundant at their work, but it’s just a device that can help and support everyone as much as it can,” says Gray.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Video Analytics Empower Communities: With Videonetics

There is a common misconception that AI will become an intrusive part of our everyday lives—but it’s actually quite the opposite. The reality is that AI has the potential to enhance everyday life in many ways that we may not even notice. For example, AI video analytics can be used in smart cities to monitor traffic flow, identify hazards, and help emergency responders quickly respond to incidents.

Of course, there are still concerns around security and privacy. But many companies are committed to implementing AI in a way that prioritizes user security.

In this podcast, we talk about the opportunities AI video analytics provides to communities, real-world uses of AI video analytics in smart cities, and how to successfully deploy these systems in a safe and secure way.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Videonetics

Our guest this episode is Srivikraman Murahari, Vice President of Products and Strategic Alliances at Videonetics, a video management, video analytics, and traffic management provider. In his current role, Srivikraman leads the company’s product strategy and roadmap and manages alliances with technology and ecosystem partners around the world. Prior to joining Videonetics, Srivikraman worked at Huawei for 20 years in various roles, including Head of Consumer Software, Associate Vice President, and Senior Product Manager.

Podcast Topics

Srivikraman answers our questions about:

  • (3:06) Challenges AI video analytics address in smart cities
  • (5:40) Implementing AI solutions that balance citizen privacy and well-being
  • (7:29) Developing and deploying solutions citywide
  • (9:09) Technology infrastructure that goes into successful deployments
  • (11:38) Creating end-to-end solutions with ecosystem partners
  • (14:25) Additional opportunities and use cases for AI video analytics

Related Content

To learn more about what Videonetics is doing in the AI video analytics space, follow them on Twitter and LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about the power of AI video analytics in our everyday lives. Joining us today we have Vikram Murahari from Videonetics. So, before we jump into the conversation let’s get to know our guest more. Vikram, welcome to the IoT Chat. What can you tell us about yourself and Videonetics?

Vikram Murahari: Thank you, thank you, Christina. I’m the Vice President of Products and Strategic Alliances in Videonetics. I also handle the standards body. So, Videonetics is primarily—you know, we are into developing a video-computing platform, primarily focusing on video-management solution. Video analytics is one of our key areas, and we have about 100-plus smart cities running our intelligent traffic-management system, and about 80-plus airports and more than 100-plus enterprises running our video-analytics solution. In fact, about 200K cameras are monitoring through our platform, and also about 20K lanes, I would say.

One of the main, key USPs—we have our own deep-learning platform called Deeper Look, so we have about 100 video-analytics applications developed out of this platform. The analytics applications cover a wide range of analytics, which includes people, crowd, vehicle, transport, women safety, retail, and we cover a lot of verticals, which includes smart city, enterprises, national highway, and retail, finance, and so on. And for smart city we are the number one in India, and also we are compliant to ONVIF standards—that is the open-network video interface standards. So that’s about a short introduction of myself and Videonetics.

Christina Cardoza: Yeah, that sounds great. And that’s exactly what we want to get into today, this idea of smart cities. AI and video analytics, like you mentioned—technology’s only getting smarter, and now they’re being applied in ways that we never would’ve thought of, and sometimes in ways that we don’t even notice, especially in the context of a smart city.

Citizen life is changing every day and I know there’s a lot of efforts to make citizen life improve the communities and improve everyday life with these technologies. So can you talk a little bit about what challenges that government officials or city planners may have seen that are attributed to the use of AI video analytics in a city context?

Vikram Murahari: See, the major challenges the government officials see are in the non-compliance of citizens in traffic rules, okay? So that puts a lot of stress and pressure on the government officials in streamlining the traffic situation and smoothening the traffic flow. It also causes a lot of fatalities and accidents and things like that, you know? So creating issues for the safety of citizens.

So our Videonetics applications, as I said, it’s 100-plus smart cities where our intelligent traffic-management solution is deployed; it has very powerful analytics capabilities on traffic. And then throughout all this we provide a smart visualization for the government officials, which gives a lot of insights for taking further action. So by this experience of 100-plus smart cities deploying this analytics application, I can confidently say that the traffic flow is smoothened and streamlined and, see, more awareness is created among the citizens to adhere to the traffic rules. So this has been our experience, Christina.

Christina Cardoza: Yeah, absolutely. And I think everybody can agree: no one likes to sit in traffic. And so I’ve seen these video analytics just help city planners also be able to install traffic lights in different places or to help with their plans and how to structure roads and everything. Then make it smoother to get where you’re trying to go to ease up some of that congestion that we’re talking about. Also I’ve seen AI being able to detect when people are crossing the roads, alerting vehicles or alerting pedestrians that it’s not safe to cross yet—really, just safeguarding and protecting the wellbeing of citizens all around.

One thing I’m curious about—because sometimes we’re collecting this information and we’re collecting this data without even knowing that it’s happening—I’m wondering, because I know a big concern for the citizens on the side of this is that they may have concerns with their privacy and how that data is being used. So, how can we implement these types of solutions that really focus on the wellbeing of citizens, but also balancing their right to privacy?

Vikram Murahari: Yeah, that’s a good question. I would say we have to enforce responsible and collaborative AI. So when I say “collaborative AI,” the government officials, the independent software vendors like us, and the citizens should know what is happening, should know how the data is getting used. We should have a very transparent data policy. And then the second thing I would say is use minimized, anonymized data. That means, don’t store so much data, undelivered data, and then the data should be anonymized. Everything is objects; we don’t have any people data with us.

So, moving on, we have very, very strict security standards. That means, comply to the international security compliances and have very strict security standards in our protocol and the way how we handle the data. And be transparent and then ensure the data safety and compliance to the international standards. I think these are my suggestions, and that’s how we handle it.

Christina Cardoza: Yeah, absolutely. That sounds like a good setup and best practices when dealing with privacy and using these AI solutions. You mentioned in your introduction that you guys have been deployed and helped a number of different smart cities. I’m wondering if you can expand a little bit on those use cases—what your experience has been developing these solutions citywide and what the results have been, if you have any specific examples you can give us how you guys came in and utilized either existing infrastructure or implemented new infrastructure and technology?

Vikram Murahari: Yeah, sure. As I mentioned, we have deployed our platform in about 100-plus smart cities, and it has helped to smooth and streamline the traffic and ensure the safety of the citizens. I can talk about a case study in one of the premier cities in India.

There are about 400 cameras monitoring the traffic there, with another 700 cameras in the pipeline, so I’m talking about 1,100 cameras monitoring the city traffic, ensuring lane discipline and catching wrong-way and one-way violations. It has eased the administrators’ operations and smoothed the traffic flow. And from a technology perspective, more and more cities are adopting the cloud for computing as well as storage, so we do have several projects running where both the compute and the storage are in the cloud.

Christina Cardoza: Wow, that’s great. You said in one smart city you’re working with about 400 different cameras, so I’m sure there’s a lot of data and analytics coming in, which can be overwhelming for the officials and planners looking at all of it. I’m curious: How were those cameras implemented?

Were these cameras the smart cities already had throughout their streets, which they’re now able to add intelligence on top of to get this data? And what sort of AI algorithms or other technology are you using to gather all of that data, keep performance high, and get information in real time where it matters most? And how do you make sure you’re getting the right information, without a whole bunch of false positives, so you know exactly what the problem is and how to address it?

Vikram Murahari: As far as the cameras are concerned, we have collaborations with all the leading camera vendors around the world. For each project, Videonetics, along with the system integrator partner involved in the project, decides on the best-suited camera. The analytics then happen on the edge. For the edge we extensively use Intel platforms: the Intel® Core™ i5, i7, and i9 series, and the latest-generation chipsets, 11th through 13th generation. So we extensively use Intel platforms on the edge, and then in certain scenarios we use the cloud for storage.

Coming to your question of how to do it efficiently: our R&D is continuously putting in effort. We have a dedicated, continuous effort on how to optimize the compute, and we have traveled a long way from the time we started; we are now at, let’s say, 20x to 30x improved computing efficiency. For example, we are looking at how to use very few frames of the video to detect an event instead of processing the entire video. And we are collaborating with our partners and using their latest technologies, platforms, and solutions to optimize performance and computing power. That’s how it goes.
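
The frame-sampling idea Murahari describes, detecting events from a few frames rather than processing the whole video, can be sketched as follows. This is an illustrative pattern only; the stride, threshold, and function names are assumptions, and a real pipeline would invoke a neural detector where the comment indicates.

```python
def mean_abs_diff(frame_a, frame_b):
    """Cheap per-pixel change score between two grayscale frames
    (each frame represented here as a flat list of pixel values)."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_events(frames, stride=5, threshold=10.0):
    """Run the expensive analysis only on every `stride`-th frame, and
    only when that frame differs enough from the last analyzed one."""
    events, last = [], None
    for i in range(0, len(frames), stride):
        frame = frames[i]
        if last is None or mean_abs_diff(frame, last) > threshold:
            events.append(i)  # stand-in for invoking the real detector here
            last = frame
    return events
```

The savings compound: skipping frames cuts the detector invocations by the stride factor, and the cheap change gate skips static scenes entirely, which is where most traffic-camera footage time is spent.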

Christina Cardoza: Yeah. That’s great to hear that you’re working with so many different partners. Every time I think about implementing these technologies, especially on a large scale like a city, a theme we always come back to is “better together.” This is not something that one organization can do alone; it’s really about leveraging the expertise and technology of others and putting it all together to create a whole end-to-end system. I should mention that the IoT Chat, and insight.tech as a whole, are sponsored and owned by Intel.

But I’m curious about the additional benefits and value there: How did you and Intel come together to create this partnership and form an end-to-end solution that can be used within smart cities, and in some other use cases that I want to get into in a second?

Vikram Murahari: Yeah, the partnership with Intel has been very great, very exciting, because we are focusing more on, or traveling more in the direction of, analytics on the edge. That’s the same direction Intel is also promoting: more analytics on the edge, more analytics on the CPU. So Intel is our best, top partner in this direction, which matches both organizations.

Secondly, we have used Intel’s OpenVINO™ deep-learning toolkit, which can optimize models using techniques such as post-training optimization and neural-network compression. These things reduce the total TCO for the customer because the computing efficiency is enhanced.

And one of the best things to talk about with Intel is their DevCloud platform, which is always available for us to benchmark and test our latest models. As we speak, our models are being benchmarked on 11th- to 13th-generation Intel chipsets. In fact, I’m very happy to announce that we won the Outstanding Growth ISV Partner Award from Intel for 2023, for enabling Intel to win more customers, outshine competitors, and onboard more partners. So it has been a very long, successful journey with Intel for us.

Christina Cardoza: Wow, congratulations. I’m sure that helps the ecosystem as a whole, being able to work with more and different partners. And I can definitely understand why the edge is such a big component when we’re dealing with all the video cameras and data you’re talking about; the edge is really where you’re going to get the fast performance, low latency, and real-time data that you need.

One thing that interests me: We’ve been talking about traffic detection and the underlying AI capabilities that go into it, like object detection, and you mentioned you’re using OpenVINO to optimize some of these algorithms for smart city use cases. But they can be applied to a number of different use cases. Can you talk about what other use cases or AI advantages you see, maybe outside of smart cities, that we can look forward to?

Vikram Murahari: Yeah. Outside of smart cities we are in quite a good number of verticals. The biggest space is aviation and airport security, and then enterprises such as oil and gas, and thermal power. In those enterprises, and at more than 80 airports, we are helping with analytics such as quickly detecting fire and smoke, which is pretty dangerous for all three of the industries I mentioned; or an object moving in an area where it is not supposed to move; or a person falling.

These kinds of video-analytics applications are quite a hit and create a lot of value for the enterprises. Besides that, we support a lot of industries. One of the most widely used use cases is PPE detection for workers, for their safety. In retail we have a heat map, which gives retail owners insights to help them understand their selling patterns. And we are also into mass transport, not just cities: mass transit and railways. Most of the railways in the country are using our Videonetics deep-learning platform.

So we’re into a lot of verticals; I could also talk about banking, finance, and things like that. Another interesting area is forensic search, which we also support and which is very useful for investigations.

Christina Cardoza: Yeah, absolutely. It’s great to see all of these different ways that AI can be used in the smart city, but then also outside of the smart city.

I know we are almost at the end of our time together, but before we go I just want to throw it back to you real quick, Vikram: Are there any final thoughts or key takeaways you want to leave our listeners with today, about moving toward video analytics or the edge, or how they can successfully approach AI video analytics solutions?

Vikram Murahari: My key takeaways would be: Adopt data and technology, which includes responsible and collaborative AI, to increase the vigilance of governance, increase the operational efficiency of enterprises, enhance the safety of people, and go beyond security. Video and IoT are an excellent combination, with lots of use cases that will enrich the quality of human lives.

In traveling this journey there can be challenges. For example, a camera’s field of view is restricted, but we are exploring innovative methods such as sending drones to difficult places to capture video. And in intelligent traffic-management systems, in the city I talked about, all the police have body-worn cameras. So we are looking at innovative ways to reach difficult areas.

And regarding computing, as I already explained, we have to continuously invest in optimizing computing power. We also have to be open in our APIs and show a lot of openness so that our platforms are easily interoperable with third-party vendors. That is also quite important.

And finally, again, I repeat: Ensure responsible and collaborative AI, and take the administrators and citizens into confidence. I think those are my key takeaways.

Christina Cardoza: Yes, some great final thoughts. One thing I love that you said is that it’s not just about the technology; it’s about solving real-world problems. It starts with the problems and the use cases we need technology to solve. Then there’s having the technology, working with third-party operators, and making sure you fit within this ecosystem, which I think is really important, so that when you move toward these types of solutions you’re not locked in, and you have lots of different choices and partners you can rely on.

So, thank you for the conversation, Vikram. It’s been great talking to you, and I look forward to seeing what else Videonetics does in this space. And thank you to our listeners for joining us today. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

OEMs Expand Revenue Models with Low-Code Software

Milking existing capital assets for extra money is the business equivalent of finding quarters among the couch cushions. But using IIoT technologies to deliver revenue-generating services is one better because it offers a much more reliable income stream. And it’s what asset-based businesses of all sizes can access thanks to no-code or low-code IoT-specific software, says Steve VanderSanden, COO at Exosite, provider of IIoT solution components.

The off-the-shelf products that Exosite offers help businesses shorten their time to market with minimal development effort. Exosite essentially enables what VanderSanden calls a “data pipeline.” “All that data flowing through improves business efficiencies, whether you’re using your own analytics or leveraging AI or machine learning tools,” he says. “OEMs can now transition to more of a service delivery model, where they’re charging a recurring fee for equipment maintenance or granting access to data on an ongoing basis,” VanderSanden continues. “It’s a recurring revenue stream instead of a one-time thing.”

Striking such a new gold vein is exactly what Fairbanks Morse Defense (FMD), a principal supplier of leading marine technologies, has accomplished thanks to IIoT data facilitated by the ExoSense Condition Monitoring Solution, a low-code software platform. By using Exosite products, FMD sells add-on services like access to machine data and augmented reality walkthroughs of equipment that clients can use as training modules.

Remote Monitoring and a Breadth of IIoT Solutions

The FMD method of harnessing remote monitoring data from its machines is just one way of using IIoT technologies, which apply to a range of industrial operations, from remote diagnostics to tracking environmental conditions to predictive maintenance. For IIoT tech to be of use, though, data gathered from operations needs to funnel into solutions that can digest the information and present the results for clients to easily visualize and act on.

The remote monitoring solution from Exosite comprises two discrete components:

  • Murano, Exosite’s IoT platform, ingests data from application-specific sensors, gateways, PLCs, and many other data sources. It also handles data storage, integrations, and application hosting.
  • ExoSense is the software that powers the solution, providing a user interface through which businesses can manage assets, devices, and the data that flows from them. A simple rules-based engine can act on specified conditions. An example of a rules-based alert: If the water level reaches a certain height, send a text message to an on-call employee.
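
The rules-based alert described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not ExoSense’s actual API; the threshold, field names, and helpers are assumptions.

```python
# Hypothetical sketch of a condition -> action rules engine, like the
# water-level alert described above. Not ExoSense's actual API.
def make_rule(condition, action):
    return {"condition": condition, "action": action}

def evaluate(rules, reading):
    """Fire every rule whose condition matches the sensor reading."""
    fired = []
    for rule in rules:
        if rule["condition"](reading):
            fired.append(rule["action"](reading))
    return fired

alerts = []  # stand-in for an SMS gateway
water_rule = make_rule(
    condition=lambda r: r["water_level_m"] >= 2.5,  # assumed threshold
    action=lambda r: alerts.append(f"Text on-call: level {r['water_level_m']} m"),
)
```

Because conditions and actions are just functions, end users can compose new alerts without touching the engine, which is what makes a low-code rules engine approachable for non-developers.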

Enterprises can choose where to host ExoSense: as a managed solution within Exosite’s cloud-based infrastructure, in an on-premises model, or in a dedicated cloud. Data sovereignty rules, security concerns, and the cost of data transport to and from the cloud can all dictate where to host the solution, VanderSanden says: “We architected ExoSense in a way so that it’s hardware-agnostic and can be used with compute resources from any cloud services provider.”

ExoSense can add on functionality depending on end-user needs. A digital twin, for example, can represent complex assets and help enterprises calculate what-if scenarios for a range of processes while piping in IIoT data from multiple streams.

#EdgeComputing will likely see even bigger growth as the demand for #AI inference at the edge increases. @exosite via @insightdottech

From Basic Monitoring to New Revenue Streams

Whether companies choose to run with basic monitoring operations or layer on more complex applications, they still can realize operational efficiencies, VanderSanden says.

FMD not only found revenue streams by packaging and selling asset data, it was able to decrease the number of service calls because clients have access to machine behavior data and can act on problems proactively.

Similarly, cellular provider Telecom Argentina wanted to package and sell complete solutions—hardware, software, and connectivity—to key vertical markets. The company uses remote monitoring solutions from Exosite to develop these service-based products. One of the packages Telecom Argentina has built is targeted at independent farmers and focuses on monitoring of grain bins. “From an agricultural perspective, such sensing equipment and connectivity are key,” VanderSanden says. So Telecom Argentina delivers not just connectivity, but also uses IIoT monitoring solutions from Exosite to show its end users how they can harness that connectivity to their net benefit.

And “Intel’s across the board in our solutions,” VanderSanden says. ExoSense uses Intel® Xeon® processors to collect, process, and store data gathered from IIoT sensors. Intel processors also power on-premises servers, industrial PCs, and infrastructure in the cloud, VanderSanden points out.

Exosite leans on system integrator (SI) partners in mutually beneficial relationships: An SI can bring in Exosite when working with a client who needs an IIoT solution. Similarly, Exosite calls in solution aggregators like World Peace Industrial when its customers need installation services or help identifying the right gateways and hardware for various needs.

The Future of IIoT Technology

Expect an increasing demand for localized data processing and analytics with the growth of machine data, VanderSanden says. Transferring large volumes of data to the cloud and back is not only time-consuming but also expensive. Edge computing or on-premises deployments can solve these problems. Edge computing will likely see even bigger growth as the demand for AI inference at the edge increases.

Also, greater numbers of small to medium-size businesses will implement IIoT solutions in the future, VanderSanden predicts. “These organizations are not software companies and don’t have the internal expertise to build out a complete solution on their own. But they see value in being able to provide new service offerings for their customers and charging a premium. When there’s a barrier to entry for such organizations, companies like Exosite can provide services and solutions to help bridge that gap,” VanderSanden says. And even companies with more robust development teams can build vertical solutions on top of Exosite’s software and products rather than building from the ground up each time, which would be very expensive.

Companies can leverage advanced technologies to realize operational efficiencies and move from selling widgets to services, which is an important transition, VanderSanden points out. “It’s a digital business transformation that is enabled by our IIoT software.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Advance AI Data Security and Collaboration in Healthcare

Mary Beth Chalk considers herself lucky.

A stage-one breast cancer survivor, Chalk says she’s lucky that she was perfectly positioned on the mammogram machine and lucky because the radiologist was able to spot a pinhead-sized tumor lodged against the chest wall.

But she doesn’t want to leave such life-altering diagnoses to luck. Instead, Chalk is keen on using AI in healthcare to improve outcomes for all patients. It’s why she co-founded BeeKeeperAI, a startup that enables secure collaboration between algorithm developers and healthcare institutions. BeeKeeperAI resulted from Chalk’s previous work at the University of California, San Francisco (UCSF), where she focused on industry collaborations that required accessing and computing with real-world, protected health information (PHI).

At UCSF the roadblocks to AI development and implementation came into full view. There, Chalk noticed that innovation depended on collaborations between healthcare institutions and algorithm developers. But even when collaboration is possible, it takes an extremely long time because of worries over intellectual property (IP) and the privacy laws safeguarding PHI.

Such bottlenecks are unfortunate, Chalk says, because AI has tremendous potential for innovation in healthcare—algorithms detecting breast cancer at the earliest stages are only a fraction of what’s possible.

#AI has tremendous potential for innovation in #healthcare—algorithms detecting breast cancer at the earliest stages are only a fraction of what’s possible. @BeeKeeperAI via @insightdottech

Confidential Computing Ensures AI Data Security

Chalk, Co-founder & Chief Commercial Officer at BeeKeeperAI, co-launched the company to help reduce roadblocks to data access for AI development by leveraging confidential computing, a hardware-first approach to security.

BeeKeeperAI’s software with embedded confidential computing provides a solution in which both the data and the intellectual property are fully protected at rest, in transit, and during computing. The operating principle behind confidential computing is the creation of a fully attested trusted execution environment (TEE). The TEE isolates the data and algorithm in the processor and memory, and uses hardware-based encryption keys to maintain Total Memory Encryption. Computing happens in these confidential environments, protecting both data and intellectual property.
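
The attestation principle behind a TEE, releasing data only to code whose measurement matches an expected value, can be illustrated with a toy sketch. This is conceptual only: real TEEs such as Intel SGX enforce measurement and isolation in hardware, and nothing below reflects the actual SGX API.

```python
import hashlib

# Toy illustration of attestation: the data owner releases data only to
# code whose measurement (cryptographic hash) matches an agreed value.
ALGORITHM_SOURCE = b"def score(x): return x * 2"  # the IP to be protected
EXPECTED_MEASUREMENT = hashlib.sha256(ALGORITHM_SOURCE).hexdigest()

def attest_and_run(source, expected, data):
    """Release the data to the algorithm only if its measurement matches."""
    if hashlib.sha256(source).hexdigest() != expected:
        raise PermissionError("attestation failed: untrusted code")
    namespace = {}
    exec(source, namespace)  # stand-in for loading code into an enclave
    return [namespace["score"](x) for x in data]
```

The key property being modeled: if either party tampers with the algorithm, the measurement changes and the data is never released, so trust rests on verification rather than on promises.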

Paving the Way to AI Collaboration

EscrowAI is BeeKeeperAI’s zero-trust collaboration platform. It alleviates both pain points the sector routinely faces—processing patient health data securely and preserving intellectual property. EscrowAI allows data holders and algorithm developers to work together with “push-button ease,” Chalk says. Another advantage of the platform is thorough documentation for audit compliance. “Every action that’s taken on the platform is recorded and archived for complete traceability,” Chalk adds.

Such proof of data security is vital to demonstrate compliance with jurisdictional privacy protection regulations and to collect evidence that supports market clearance regulatory filings for medical devices, digital therapies, and pharmaceuticals. Under the hood, the solution integrates policy and cryptographic key management from Fortanix.

Intel® Software Guard Extensions (Intel® SGX) is built directly into Intel® Xeon® Scalable processors and enables the creation of isolated TEEs called enclaves. “We’ve been an Intel SGX user from the very beginning because it ensures the protection of both the algorithms and the data at runtime. And that’s a competitive differentiator for us,” Chalk says. “The enclaves eliminate any access by the virtual machine operating system, or the VM administrator, or even BeeKeeperAI. So that prevents any outside interference.”

Chalk is grateful that Intel provided a grant for the company to conduct a proof of design while the team was still at UCSF. “Intel has been an early and great partner for us,” Chalk says.

Confidential Computing Use Cases

The healthcare industry is very familiar with roadblocks to AI implementations, so solutions have been knocked around for a while. For example, artificially produced synthetic data, which has the characteristics of real-world data without compromising information, has been touted as a workaround for privacy and security challenges.

But, says Chalk, synthetic data is wholly inadequate. For one thing, when you scramble patient data, you introduce noise that’s not consistent with real-world data, she points out. Besides, in critical applications “you want an algorithm that has been validated and tested on real-world data,” Chalk says. “We would not trust a cancer-detecting algorithm based mostly on synthetic data to do its job accurately.”

Chalk is not convinced that we’re going to see large-scale adoption of AI in healthcare without confidential computing. But with it, new avenues open, such as when BeeKeeperAI helped Novartis address challenges related to a rare pediatric disease. The healthcare company had developed an algorithm but needed to validate it on real-world data sets. In addition to the familiar privacy concerns, Novartis faced an additional problem: The data set was limited to only 27 wholly unique patients, so that any level of deidentification would destroy the ability to test the algorithm.

BeeKeeperAI’s EscrowAI solution helped Novartis navigate these challenges and assured that the data would never be seen, and the associated IP would also be protected. Novartis has progressed in its studies in this field. “It was an extremely powerful demonstration of what’s possible,” Chalk says.

Chalk is also excited about the potential for confidential computing to assuage concerns related to HIPAA compliance because the patient information is never exposed, never seen, and is always under the control of the data steward. Such sightless computing might convince lawmakers to modify HIPAA in the future, Chalk hopes.

The Future of Confidential Computing in Healthcare

As for what’s coming down the pike, Chalk expects confidential computing to do its job at the edge, too. “Institutions that aren’t ready to push all their data into the cloud can leverage confidential computing for AI analytics at the edge,” she says. “It also allows algorithm developers to deploy securely into jurisdictions with restrictive data controls.”

Until today, healthcare has had to work with incomplete data. “Our healthcare treatment system has been built on a small percentage of the available information,” Chalk points out. But all that will change as confidential computing enables AI to realize its full potential in the field.

And the cancer survivor could not be happier about the brilliant possibilities, including the era of precision medicine. “The treatment that may be effective for you may not be effective for me. And so rather than all of us being treated as an average in a bell curve, we’re going to be able to be treated as a unique set of one,” Chalk says. “That gives me great comfort and great hope about the future of healthcare.”

And unlike Chalk’s, our healthcare outcomes need not depend mostly on luck.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Predictive Maintenance at the Edge Keeps Devices Running

Gas compressors are a critical component in a multitude of industrial environments. Compressors pressurize and push gas into pipelines, portable tanks, and ships that transport liquefied natural gas (LNG) around the globe for energy production.

“Compressors are quite expensive devices. They cost millions and millions, and are usually at the heart of the production process,” says Alexander Bergner, Director of Product Management at TTTech Industrial, a company that specializes in real-time data collection in industrial workflows. “In LNG ships, when they do not compress, they actually have to burn the gas in order not to have too much pressure in the tanks. In a chemical industry, if they don’t have a compressor running, then systems get clogged and you have to take them all apart to clean and recommission them.”

So keeping gas compressors running at their best is just as critical as the essential functions they perform. That’s why predictive maintenance, which uses data collection and analytics to track the condition of compressor components, is increasingly common in production processes that rely on compression.

Not All Predictive Maintenance Solutions Are Built the Same

Predictive maintenance not only helps prevent wear and tear of compressors to extend equipment life, but more importantly, it allows operators to better plan when to replace parts, especially in situations where gas compressors are out at sea for long periods of time and parts can’t be easily replaced.

For instance, with LNG ships, the logistics of predictive maintenance can get complex. As ships sail from one port to another, it’s critical that components do not fail midway through. Predictive maintenance enables operators to dispatch technicians and parts to the next port where maintenance needs to occur, so everything is ready when the ship puts in. Conversely, predictive maintenance prevents replacing parts too early, which can drive up costs.

“Ship compressors are especially sensitive to well-planned maintenance,” Bergner says. “The spare parts need to be there at the right time and they need to be the right spare parts.”

To do this, the predictive maintenance solution needs to be built with performance and real-time monitoring in mind, which is easier said than done.

#PredictiveMaintenance not only helps prevent wear and tear of compressors to extend equipment life, but more importantly, it allows operators to better plan when to replace parts. TTTech Industrial via @insightdottech

HOERBIGER, a leading supplier of gas compressor components, learned that the hard way when it was looking for a better way to track the condition of its compression components. It wanted to provide a predictive maintenance solution to its customers in the oil, gas, automotive, and process industries, which rely on its cylinders, pistons, heads, and piston rings for compressors that mostly operate at edge sites.

The company built an in-house predictive maintenance solution with custom-designed hardware. However, it needed a next-generation system that could provide the computational power and flexibility to adapt to upcoming needs, Bergner explains.

That’s why HOERBIGER turned to TTTech Industrial, a subsidiary of TTTech Group, which went to work on a prototype to address the company’s specific needs. “They presented their technical challenges, and we sketched the solutions. We even went so far as to sketch the workflows,” says Bergner.

HOERBIGER needed an IoT solution with edge capabilities since, in many settings, gas compressors operate 24/7 with or without cloud connectivity. TTTech Industrial based the solution on its Nerve edge computing platform, which enabled it to develop a proof of concept in about 100 hours with fewer than 150 lines of code.

HOERBIGER quickly approved the design and retained TTTech Industrial for installation and integration. “We at TTTech Industrial were responsible for providing the data ingestion framework and the storage and visualization framework specific to their needs. Their software engineers focused on developing the algorithms, which actually do the predictive maintenance,” Bergner says.

A Real-Time Edge Platform for Predictive Maintenance

Nerve is an open, secure, and modular edge platform that provides the foundation for myriad use cases, such as maintenance of cold forging tools, implementation of digital twins in manufacturing processes, and remote management of industrial production software.

For the HOERBIGER case, TTTech Industrial provided a Nerve Integration Services Package. The package delivered the architectural underpinnings and edge management software on top of which HOERBIGER built its predictive maintenance application.

The Nerve platform was installed on an industrial PC from MOXA with an Intel® Core™ i7 processor. The use of Intel processors and hardware was essential in the HOERBIGER solution because they carry the necessary certifications to operate in hazardous environments.

The platform’s Soft PLC module also enabled the high-speed data acquisition required to calculate the wear of components such as piston rings and valves. This is done by measuring cylinder pressure in relation to crank-position values at sample rates of 50 kHz; as many as 600,000 samples per second must be processed.
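
The two figures above imply the scale of acquisition. A quick sketch of the arithmetic, where the channel count and sample width are inferences rather than facts stated in the article:

```python
# Figures stated in the article:
SAMPLE_RATE_HZ = 50_000           # per-channel acquisition rate (50 kHz)
TOTAL_SAMPLES_PER_SEC = 600_000   # aggregate processing load

# Implied channel count (an inference, not stated in the article):
channels = TOTAL_SAMPLES_PER_SEC // SAMPLE_RATE_HZ  # 12 channels

def buffer_bytes_per_second(total_samples, bytes_per_sample=2):
    """Raw acquisition bandwidth, assuming 16-bit samples (an assumption)."""
    return total_samples * bytes_per_sample
```

Even under these assumptions the raw stream is over a megabyte per second per compressor, which is exactly why the wear calculation happens on the edge device rather than being shipped to the cloud first.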

Nerve’s Data Services module processes the data, leveraging Nerve’s gateway application, which sends it to the Timescale time-series database for post-processing to estimate compressor wear. Data visualization is then handled by the Grafana system integrated into Nerve.

Another significant benefit of using Nerve, whether for HOERBIGER or other customers, is that the platform runs in cloud-connected systems as well as air-gapped edge environments. In some environments, air-gapping is necessary, according to Bergner.

“Imagine you run a fleet of machines. Part of that fleet is air-gapped because it is in critical infrastructure with no easy or legal possibility to bridge that air gap,” says Bergner. “You still want to have a homogeneous way of dealing with all the machines out there, so your solutions have to be capable of operating online, offline, or air-gapped.”

Nerve’s edge functionality makes it possible to securely collect and analyze data without a connection. But customers can access preprocessed edge data through a web portal linked to a central management system running on-premises or in the cloud.
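
The offline-capable behavior described here, collecting and analyzing at the edge without a connection and forwarding results when one returns, is commonly implemented as a store-and-forward queue. The sketch below is illustrative only; the class, limits, and callback shape are assumptions, not Nerve’s actual API.

```python
from collections import deque

class StoreAndForward:
    """Buffer edge results while offline; flush when connectivity returns.
    Illustrative sketch only, not Nerve's actual API."""

    def __init__(self, max_items=10_000):
        # Bounded buffer: when full, the oldest results are dropped first.
        self.buffer = deque(maxlen=max_items)

    def record(self, result):
        """Store a preprocessed result produced at the edge."""
        self.buffer.append(result)

    def flush(self, upload):
        """Send buffered results via `upload`; keep whatever fails to send."""
        sent = 0
        while self.buffer:
            item = self.buffer[0]
            if not upload(item):
                break  # link dropped again; retry on the next flush
            self.buffer.popleft()
            sent += 1
        return sent
```

Note that only preprocessed results are buffered, not raw sensor streams, which keeps the backlog small enough to survive long offline periods on modest edge hardware.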

Predictive Maintenance as a Service

Bergner estimates the HOERBIGER predictive maintenance solution will eventually reach thousands of locations, depending on how many customers sign up for it. Customers can buy the predictive maintenance as a service or they can use it internally with their own maintenance technicians, he explains.

Predictive maintenance is key to both HOERBIGER and its customers, enabling the company to deliver critical gas compressor parts at precisely the right time. “It allows companies to plan the logistics for replacement correctly,” Bergner says. “These are very critical parts, and you do not want the compressors to fail.”

Going forward, Bergner foresees more predictive maintenance use cases built on Nerve for different industries. Because of its edge capabilities, Nerve will enable companies to deliver cybersecurity updates and add functionality to their edge devices as needed. This will help future-proof operations so they can keep adapting as the technology evolves, Bergner explains.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

The COM-HPC Standard for Embedded Devices at the Rugged Edge

There’s good reason why we don’t use a fax machine to send business messages anymore. Sure, it can do the job, but it does not hold up to the breakneck speed of today’s commerce.

A similar argument holds true for today’s computing landscape, argues Christian Eder, Director of Marketing and one of the cofounders of congatec, a vendor of embedded computer boards and modules, and Chairman of PICMG COM-HPC.

With data being generated at an unprecedented rate, the ability to transmit and store data is becoming a major bottleneck in our ability to make use of it. Typically, data would be transmitted to a cloud or data center for analysis, but that type of infrastructure cannot always keep up with the amount of data at our fingertips today.

The Growth of Edge Computing and Embedded Devices

To overcome latency and the costs of routing large amounts of data to and from the cloud, AI inferencing is increasingly happening at the edge. And as a result, the computing muscle that was once the sole purview of servers in temperature-controlled data centers is now expected in much smaller, rugged form factors capable of tolerating high data throughput at a wider range of temperatures.

Up until recently, these embedded edge devices leveraged the COM Express computer-on-module standard to deliver mid-range edge processing and networking capabilities in fields like retail, transportation, and robotics. But with recent advancements in technology, this standard is starting to buckle under the demands of increased compute and volume of data throughput at the edge.

“With COM Express, we reached the top edge; it cannot grow anymore,” Eder says. “The demand for higher data throughput, higher bandwidth, low latency, and higher computing power at the rugged edge has led us to define a new standard without worrying about backward compatibility. We needed new connectors and new sizes to provide a new level of performance and functionality.”

The COM-HPC Standard for the Rugged Edge

To address the need for a new standard, the PCI Industrial Computer Manufacturers Group (PICMG) brought major players from the industrial embedded market together to create the COM-HPC standard. The newer standard is designed to withstand the greater demands on today’s embedded systems and can take the heat (quite literally) to deliver lightning-fast computing, Eder explains. “COM-HPC is the stable standard for the next 10-15 years as the technology advances,” he adds.

In addition, a newer iteration of COM-HPC, its Mini equivalent, is making room for the smallest high-performance standardized module currently possible, nearly the size of a credit card. Useful for Small Form Factor (SFF) designs, the standard can accommodate power and space constraints while still delivering IO and computing power.

“The beauty of a #modular concept is that you can upgrade your existing #applications to a different power envelope without throwing away complete solutions” — Christian Eder, @congatecAG via @insightdottech

congatec recently introduced its first COM-HPC Mini modules to give customers a high-performance boost for their space-constrained solutions. “The intention here was to make it the right kind of platform to use the low-power range of the 13th Gen Intel® Core processors,” Eder says. “It saves costs and saves real estate but is limited to low-power CPUs, yet is extremely powerful when it comes to computing.”

A Rugged Edge Partnership

congatec’s partnerships with companies like Intel have been critical in developing solutions with the latest computer-on-module standards, giving them better insight into what’s around the technology corner.

“Participating in early-access programs enables us to have products already engineered by the time Intel is announcing the technology, which allows our customers quick access to the latest technology and reduces time to market,” Eder says.

Knowing what is on the horizon allows congatec to develop modules with standards that are more rugged. And with 13th Gen Intel Core processors, congatec’s COM-HPC modules gain performance improvements, AI inferencing abilities, improved GPUs, and the ability to withstand harsh temperatures from -40°C to 85°C. The processors also meet the high graphics requirements of video streaming and analytics applications.

Migrating to COM-HPC

Given the many advantages of the COM-HPC standard, Eder thinks it’s time to migrate performance-hungry systems from COM Express to COM-HPC. “The beauty of a modular concept is that you can upgrade your existing applications to a different power envelope without throwing away complete solutions,” he says. “Instead you simply swap out modules.” When switching gears from COM Express to COM-HPC, the carrier board has to be modified. But upgrading to future CPU technologies alone can be as simple as changing the compute module only.

A modular concept helps prevent waste and is more environmentally responsible than replacing a complete system, Eder points out. Heat spreaders and cooling interfaces are also part of the module ecosystem standards, adding a further layer of sustainability.

While the actual swap-out might not be too heavy a lift, the other components may need to be adapted to make full use of the new performance and faster interfaces.

congatec facilitates adoption by providing reference carrier boards “for an easy start and to check out all dedicated functionalities required for the individual applications,” Eder says. congatec also hosts an academy that teaches carrier board developers about best practices for design with the COM-HPC standards and within its ecosystem. The training focuses on standards-compliant carrier board design, which is essential for building interoperable, scalable, and durable custom embedded computing platforms, according to Eder.

Additionally, congatec works with partner networks, especially those that might have specialized knowledge of standards implementation requirements in different fields like transportation, communications, and healthcare. “Obtaining precertifications in certain industries like railways enables faster adoption of the products. Given that congatec’s computer-on-module products can withstand shocks and vibrations, selected SKUs have even received certification for use on train systems,” Eder says. Precertifications are especially useful to systems integrators who look for compatible and comprehensive tech stack solutions.

As the future of computing moves to the edge, embedded edge devices must be able to manage crushing workloads in harsh environments. But COM-HPC stands ready to tackle the challenges. After all, Eder points out, “it’s created by embedded specialists to simplify the use of latest embedded technologies.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Telehealth Innovations Transform Remote Care

In the realm of cutting-edge technology, breakthroughs are often fueled by a powerful vision. For one company, it’s building the components of a smart city, one that promotes the well-being of individuals, a more livable community, and a sustainable environment.

Through its “Unified Communication of Things” platform, beamLive pursues these goals by harnessing the latest in edge computing and AI technology to create telehealth innovations that deliver personalized, remote care. Spearheading this transformative effort is beamLive CEO Mehrdad Negahban, who envisions a future where healthcare is enhanced by digital solutions.

At the heart of beamLive’s mobile IoT communications approach is the beamRx – Shahin 101 solution, which aims to enhance the out-of-hospital telemedicine experience for patients and healthcare providers alike. By integrating edge computing, AI, and machine learning, the system provides doctors with real-time biometric data during virtual consultations. This enables healthcare practitioners to respond rapidly and accurately to changes in a patient’s health.

By integrating #EdgeComputing, #AI, and #MachineLearning, the system provides doctors with real-time biometric #data during virtual consultations. beamLIVE via @insightdottech

The solution’s brand itself carries a moving story, named after a colleague. “One of our partners, Shahin Arefzadeh, who was of great help and inspiration, had passed away,” says Negahban. “So, we are dedicating this product line in his honor.”

Digital Healthcare Solutions Transform Nursing Home and Community Care

One notable customer with an upcoming use case that exemplifies the importance and potential of edge computing in healthcare involves a nursing home in New York City. With more than 200 residents, ensuring high-quality, timely care for every individual represents a complex task. Beam’s innovative telemedicine platform can make a measurable difference in the future of the facility’s healthcare delivery.

By deploying the Shahin 101 solution, the nursing home’s care providers will gain crucial access to real-time biometric data and patient profiles. The AI-driven solution combines vital data such as heartbeat, temperature, respiratory rate, and blood pressure from medical IoT sensors with comprehensive health histories. This wealth of information, analyzed in real time, will give doctors and nurses a complete view of each resident’s health status. The solution efficiently processes a vast amount of data at the edge, reducing latency and ensuring smooth communication between devices and healthcare providers. The result is timely interventions, earlier detection, and more personalized care.

Negahban explains, “We seamlessly connect all of these sensors, allowing medical practitioners to view real-time data while communicating with patients through video. Our platform bridges the gap between virtual and in-person visits, providing an experience akin to an in-person consultation.”

Beam will use and correlate an individual patient’s information with a broad set of public medical data, such as that from the National Institutes of Health (NIH), to construct comprehensive profiles. By correlating historical and publicly available data with an individual’s unique biometrics, health history, and activities, healthcare professionals gain a holistic view that allows them to make more-informed decisions during virtual patient visits.

With this comprehensive view, providers gain a thorough historical understanding of this individual’s activities over the past 24 hours, week, or month.

“You combine them with the massive amount of NIH data and match that with bios of a person—their age, gender, height, and weight, for example,” says Negahban. “We use real-time or archive data along with the person’s bio and their physical activities.”

Telehealth Innovations from the IoT Edge to the Cloud

The beamRX architecture combines a set of hardware, software, and cloud services. Medical IoT sensors measure patient stats: heartbeat, temperature, respiratory rate, oxygen level, blood pressure, blood sugar level, and EKG. The collected data is processed at the edge on Intel small form-factor PCs, where pertinent messages and alerts are extracted before being transmitted to the cloud via broadband, Wi-Fi, or LTE. These real-time alerts are a key factor in a patient’s care, color-coded with green for wellness, amber for concern, and red for critical—which can launch alerts directly to designated doctors or a 911 center.
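The color-coded triage logic described above can be sketched in a few lines. The thresholds and function name below are illustrative placeholders, not beamLive's actual clinical rules:

```python
def triage_vitals(heart_rate, spo2, temp_c):
    """Map raw vitals to the green/amber/red levels described above.
    Thresholds are invented for illustration, not clinical guidance."""
    # Red: critical readings that could trigger an alert to
    # designated doctors or a 911 center.
    if heart_rate > 130 or spo2 < 90 or temp_c > 39.5:
        return "red"
    # Amber: readings that warrant concern and follow-up.
    if heart_rate > 100 or spo2 < 94 or temp_c > 38.0:
        return "amber"
    # Green: vitals within the wellness range.
    return "green"

status = triage_vitals(heart_rate=72, spo2=98, temp_c=36.8)
```

The design point is that this classification runs at the edge, so only compact, pre-triaged alerts need to cross the network rather than the full sensor stream.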

In fact, Intel technology is essential to the beamRX solution. The Intel® OpenVINO toolkit and Intel® Movidius VPU accelerator are the backbone of its AI platform development and performance. Once the data is processed, it’s mirrored in the cloud, under the control of a HIPAA-compliant client. Generally, these clients are either hospitals or doctors.

Shaping the Future with Edge Computing and AI

Beyond telemedicine, beamLive’s technology extends its impact to the needs of smart cities. Edge computing and AI technologies make innovation in urban infrastructure, transportation, and citizen services possible. “The overall umbrella for what we’re doing covers a range of sectors, spanning from public safety to logistics, smart transportation, and healthcare,” says Negahban. “These segments are also dependent on real-time information from multiple sources—anywhere people may be, on all of their devices, dynamically updated—to enable a better community experience.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Autonomous Mobile Robots Emerge from the Factory Floor

What once seemed like science fiction is now becoming a reality. Autonomous mobile robots—AMRs—are gaining real traction in the manufacturing space these days. But they are also poised to explode into any number of other contexts—from hospitality to health care—getting more intelligent and more independent all the time. The idea is to reduce the burden on human workers who would otherwise be doing certain repetitive or hazardous tasks themselves, as well as to work alongside those humans.

Unsurprisingly, there’s a lot that goes into getting these robot systems to sense their environments, conduct operations, and implement orders. It requires high-intensity computing from the technology, and flexibility and scalability from the designers. Claire Liu, Product Marketing Manager at congatec, an embedded computer modules supplier; and Timo Kuehn, Systems Architect and Product Manager at Real-Time Systems, a provider of embedded and real-time solutions, explain this fast-changing industrial trend for us (Video 1).

Video 1. congatec’s Claire Liu and Real-Time Systems’ Timo Kuehn discuss the key components necessary for successful autonomous mobile robot development and deployment. (Source: insight.tech)

What actually are autonomous mobile robots?

Claire Liu: Autonomous mobile robots are systems capable of operating independently, without direct human intervention. They are equipped with an array of sensors, artificial intelligence algorithms, and a sophisticated control system that enables them to navigate autonomously, to perceive their environment, and to make decisions.

Autonomous mobile robots rely on a combination of technologies, such as various sensors—for example, LiDARs or 2D or 3D cameras—to perceive their environment. That sensor data is processed in real time by the computing platform to provide information about the environment. The robot can then use this information to create a map to localize and navigate itself within the environment.
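The mapping step Liu describes is often implemented as an occupancy grid: sensor hits mark cells of the map as obstacles, and the robot plans paths through the remaining free cells. The toy sketch below (grid size and helper names are invented for illustration) shows the core idea:

```python
def mark_obstacles(grid, hits):
    """Record sensor-detected obstacles in an occupancy grid.
    `hits` are (x, y) cell coordinates where a range sensor
    reported an obstacle; cells out of bounds are ignored.
    Toy model for illustration, not a full SLAM pipeline."""
    for x, y in hits:
        if 0 <= y < len(grid) and 0 <= x < len(grid[0]):
            grid[y][x] = 1  # 1 = occupied, 0 = free/unknown
    return grid

# 5x5 map with two obstacles detected by the sensors
grid = [[0] * 5 for _ in range(5)]
mark_obstacles(grid, [(4, 2), (2, 0)])
```

Real systems refine this with probabilistic updates (a cell becomes more or less likely to be occupied as evidence accumulates), but the grid representation itself is what path planning and localization operate on.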

The manufacturing industry is increasingly interested in autonomous mobile robots because they can do things like material-handling tasks—picking up and delivering raw material and in-process products on the production line. These are repetitive tasks that used to be executed manually and can pose a risk to workers’ health and safety. Now workers don’t have to waste their production time to do that manual work; they can focus on highly skilled and more value-added tasks instead.

Using autonomous mobile robots in the manufacturing environment streamlines the manufacturing process and can increase productivity, improve operational efficiency, and enhance workers’ safety.

Talk about the software architecture that goes into AMRs.

Timo Kuehn: A lot of software goes into AMRs, of course. There are various functionalities, like perception, as Claire mentioned. The robot has to perceive its environment in order to know what’s going on; it has to find out where it’s situated at any one moment; it needs to find out where to move to. The movement itself, the motion control, is very important: There’s obstacle avoidance, of course; there’s also interaction with humans, depending on the type of robot and the diagnostics.

Those software functions have to be mapped by corresponding software modules, and often they have very high requirements in terms of timing and resource usage—even competing requirements. For example, if one software module needs a lot of performance, while a different software module needs a deterministic response in a timely manner, you cannot just throw everything in and expect it to work. It is quite complex.

For motion control, especially, it can be quite challenging. It needs determinism: It needs to react to sensor signals within a predefined time frame. And the time frame depends on various factors like: Does it have wheels? Does it have axes? How many axes have to be controlled? What is the speed of the AMR? What precision is needed? Is the device moving in two dimensions or three dimensions? Is the load dynamically added or unloaded?

Typically, real-time operating systems are used in order to have priority-based scheduling and to make sure that deadlines are never missed. Critical tasks, like perception or motion control, get higher priority so that they aren’t interrupted by lower-priority tasks. This resource allocation and optimization is provided by the operating system or the software architecture.

“Using #autonomous mobile #robots in the manufacturing environment streamlines the #manufacturing process and can increase productivity, improve operational efficiency, and enhance workers’ safety” — Claire Liu, @congatecAG via @insightdottech

Tell us more about taking a modular approach.

Claire Liu: The congatec computer modules seamlessly leverage the scalability of Intel processor technology—from low power to high computing performance—enabling developers to build robots that work longer and smarter and perform complex tasks with greater proficiency and efficiency.

The 13th Gen Intel® Core processors are an ideal solution with congatec’s computer modules because they combine power, efficiency, flexibility, and performance. AMRs can now benefit from these latest Intel processors to run more applications simultaneously and to support more workloads and more connected devices.

Developers can quickly and easily adapt to the latest Intel processor technologies through a simple module change, and they can add intelligence to their autonomous mobile robots even after years of operation. Additionally, there’s the Intel OpenVINO toolkit, which provides optimized AI inference models and comprehensive support for developers.

What other tools and technologies go into developing autonomous mobile robots?

Timo Kuehn: The development of AMRs requires a combination of hardware, software, and connectivity. In terms of hardware, there’s the computing platform, the chassis, the motor, the sensor power system, and, of course, whichever sensors are being used depending on the requirements of the application. The software side deals with perception, localization, path planning, motion control, and obstacle avoidance. Diagnostics and interaction with humans also play very important roles. So integrating and managing all of those functions can get quite complex.

AMRs are battery powered, so adding a lot of controllers doesn’t make sense. Those controllers need to be connected, which adds weight and increases the size, costs, and complexity. So multiple functions have to be consolidated on fewer processors.

And here is where an embedded real-time hypervisor can help a lot, integrating multiple workloads on a single processor. There are many advantages to that functionality—isolation and security, for example. So, say, perception and motion control can run securely separated from each other in their own virtual machines, making sure that when one VM needs a lot of load or creates a lot of load, the other one is not affected and can still meet its deadlines.

And this is really crucial. Imagine there’s a signal from a sensor and the reaction from the AMR or from the controller comes too late. This can lead to a crash—even to injuries, when humans are involved. It also helps with performance optimization and load balancing; every VM can get dedicated resources to meet timing and performance requirements.

What are some of the use cases you’re seeing with AMRs?

Claire Liu: Autonomous mobile robots have proven to be versatile in various industries. There’s the material handling in the manufacturing environment that I mentioned earlier, and even collaborative assembling as well. There’s logistics and fulfillment for e-commerce. During the pandemic, autonomous mobile robots were utilized for delivering medical supplies and medication and for assisting with patient care. And there are more and more applications in other areas like agriculture, hospitality, and retail. New use cases are consistently emerging.

Timo Kuehn: Environmental monitoring is a good use case for AMRs, in order to collect data on air quality, water quality, or soil conditions. Or in hazardous environments—for example, for inspection of power plants—which reduces the risk for human workers. They can be used in public places to provide real-time video feeds. Or in large facilities they can be used in last-mile delivery to transport packages. They can assist in material transportation, also in construction projects. There are really a lot of different use cases, and I agree with Claire that there will be even more in the future.

Where can we expect this field to go over the next couple of years?

Claire Liu: There will be new and exciting possibilities in the field of AMRs in the near future. Technological development will evolve rapidly in the robotics area, with a modular approach to the software-architecture design. Autonomous mobile robot companies will adapt to the fast-changing environment and bring this cutting-edge solution to life with great scalability.

Timo Kuehn: Of course, it’s hard to predict, but I’m sure there will be many advancements in the near future, especially in regard to Intel processors with integrated AI accelerators. This will lead to enhanced perception and object recognition, more intelligent path planning and optimization, and adaptive-learning capabilities. What we can also imagine is improved collaboration between humans and robots—things like the capability to make complex decisions in real time in order to assess situations and execute complicated tasks with only a little bit of human intervention.

To summarize: The combination of virtualization technology, real-time capabilities, and integrated AI accelerators has a high potential for completely new types of autonomous mobile robots. They will become more intelligent, adaptable, and capable of performing complex tasks with high precision and efficiency.

Related Content

To learn more about autonomous mobile robots, listen to Inside the Development of Autonomous Mobile Robots, and read IoT Virtualization Jump-Starts Collaborative Robots. For the latest innovations from congatec and Real-Time Systems, follow them on Twitter at @congatecAG and LinkedIn at congatec and Real-Time Systems GmbH.

 

This article was edited by Erin Noble, copy editor.