Mobile Edge Servers Empower First Responders

First responders face a uniquely challenging edge environment. The nature of their work—such as fighting fires or handling medical emergencies—requires equipment with unquestionable reliability. But the unpredictability of their environments makes robust connectivity anything but guaranteed. Emergencies can arise anywhere, and each location has its own mix of communication technologies.

“They may have to stitch a variety of networks together, usually radio networks,” explains Chris Ericksen, Chief Revenue Officer at Klas, a provider of rugged technology. “They can’t assume that they’re going to have radio spectrum, Wi-Fi, or any other particular capabilities.”

This unreliable network environment impedes efforts of firefighters, paramedics, and other first responders to create coordinated, data-driven responses for society’s needs. “A top priority in any emergency is understanding the state of play,” says Ericksen. “First responders need a real-time map of data assets in the area, video feeds from those assets, and an analysis of that video overlaid on old records to see the differences. If they’re in a space where data might not be in the same language, they need to translate that data on the fly.”

To accomplish these tasks without a reliable high-speed backhaul, first responders need enterprise-grade computing at the mobile edge. To meet these demands, Klas has brought its experience in defense applications to the first-responder sector.

“First responders are becoming more like defense operations,” notes Ericksen. “They need to make decisions with the best data possible, even with limited connectivity.”

Defense-Grade Communications at the Mobile Edge

The company’s latest offering, the VoyagerVM 4.0, is a portable server designed to provide enterprise-grade computational power, data throughput, and storage at the edge.

“Historically, there was a clear distinction between the headquarters and the edge. The farther first responders got from headquarters, the more capabilities they would lose,” says Ericksen. “Our goal is ubiquitous infrastructure, with the same capabilities everywhere,” including machine learning and artificial intelligence.

To achieve this ambitious aim, VoyagerVM 4.0 brings together the latest Intel® Xeon® D processor, up to 100 Gbps in aggregate throughput, and up to four NVMe devices for high-speed storage. These technologies combine to create a platform that delivers impressive levels of computing and networking in edge environments.

“#FirstResponders are becoming more like #defense operations. They need to make decisions with the best data possible, even with limited #connectivity” – Chris Ericksen, @klasgroup via @insightdottech

Although the capabilities of the VoyagerVM 4.0 resemble those of a traditional server, its physical design is remarkably different. Compared to a 19” rack server, the VoyagerVM 4.0 is significantly smaller, measuring just 7.4” (188 mm) wide. The server is also exceptionally sturdy. It is designed to military standards, enabling it to withstand drop, shock, and vibration with no degradation in operational performance.

This compact, rugged design makes the server well-suited to first-responder use cases. It can fit into ambulances, mobile command centers, and other cramped vehicles. It can handle bumpy rides and temperature fluctuations from -26 °F to 122 °F (-32 °C to 50 °C). And it keeps the power load on the vehicle to a minimum, drawing only 110 W.

These advantages flow directly from the capabilities of the Intel Xeon D processor, which features a low-power, heat-tolerant design optimized for the rugged edge.

Security at the Forefront

Digital security is also paramount to keeping first responders safe. “Intel continues to excel from a security perspective,” Ericksen explains, noting that Klas leverages Intel technologies for secure boot, data encryption, and secure software enclaves.

Klas has also developed its own technologies to enhance security. For example, the company developed its own version of OpenBMC, an open-source project for hardware management. This specialized firmware monitors the physical state of a computer and communicates with the system administrator through an independent connection. And since Klas created its own version, the company can rapidly address any vulnerabilities associated with it.

These same technologies help streamline system deployment and management. The company has developed a software platform for automation named Blackrock, which leverages the security features of Intel Xeon D processors and is closely integrated with OpenBMC. Among other benefits, Blackrock enables quick and efficient provisioning of servers for deployment in the field.

The end goal is to help first responders focus on people, not tech. “First responders are not IT people, they just use tech to do their job,” observes Ericksen. “So we need to make it as easy as possible.”

Data-Driven Decisions at the Mobile Edge

With technology like VoyagerVM 4.0, first responders gain critical insights for better decision-making in emergencies.

The design of the solution underscores what matters most: helping responders concentrate on their primary goal of saving lives and safeguarding communities. AI and other applications newly enabled by rugged edge servers represent not just a technological leap but a pivotal change in how first responders perform their duties, allowing them to work more efficiently and safely and to better serve communities in times of need, Ericksen explains.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Machine Vision Software Optimizes Refinery Operations

The refinery industry knows that when profit margins are razor-thin, even the smallest efficiencies add up quickly. The sector is squeezed by a host of challenges that range from labor shortages to outdated infrastructure. Companies also race to meet rigorous global sustainability standards. To tackle these issues, the industry leans on automation and other advanced technologies that can mine granular efficiencies in practically every aspect of everyday operations.

Against this backdrop, at the Impala Platinum Base Metal Refinery in South Africa, computer vision and edge AI technology ensure efficient operations. A process optimization project with Britehouse Mobility, a subsidiary of NTT, shows how machine vision software can help improve the complex production of ammonium sulfate—an inorganic salt with a number of commercial uses.

#MachineVision #software can help improve the complex production of ammonium sulfate—an inorganic salt with a number of commercial uses. @GlobalNTT via @insightdottech

The Challenges of Producing Ammonium Sulfate

Used as an agricultural fertilizer and in the manufacturing of vanadium, ammonium sulfate is a byproduct of the refining process. The product must meet specific standards mandated by the Department of Agriculture, Forestry, and Fisheries. A key criterion for the ammonium sulfate produced at the plant is its nickel content, which must not exceed 200 parts per million (ppm). But nickel particles attach to the smaller ammonium sulfate crystals, so an effective way to keep the nickel from reporting to the final product is to screen it with a vibrating screen.

Vibrating screens can generate a lot of dust, especially when very fine particles are present, so the screens are completely enclosed. That makes it difficult to see when the screening panels are blinded; blinding typically can be detected only by opening the screen while it is offline. Blinded screen panels allow the fine, nickel-bearing particles to report to the final product and, if detected too late, will compromise the quality of the ammonium sulfate.

Opening the screen periodically is also time-consuming. An online monitoring system instead enables the production team to react quickly to any pegging or blinding in the screen. An additional advantage is that the cameras can easily detect issues with upstream processes, because the vibrating screen is the final step in the process.

Machine Vision and 3D Cameras in Action

The Britehouse Mobility solution uses volumetric cameras to detect the problematic pegging and blinding of vibrating mesh screens that can degrade the quality of the final product. In conjunction with the cameras, a machine vision algorithm studies the output and recognizes when the screen is getting blocked.

Anyone who has taken a picture with a smartphone will appreciate the challenges involved in getting a non-blurry picture when there’s so much shaking involved. The challenge in developing a solution was to find a camera that could perform under such conditions.

“It’s not just a camera…it’s something that is robust enough to survive the brutal forces that come with that environment,” says Donovan Bell, Senior Solutions Architect at Britehouse Mobility. In addition, the camera had to move with the screen, to carry the same resonance, so the pictures are not blurry.

With the camera placed under adequate lighting conditions, the team trained a machine learning model to recognize what functioning and blocked screens look like. The results receive a rating so operators can judge the severity of the problem before intervening. Volumetric measurements deliver information not just about the extent of pegging and blinding but also about the weight and dimensions of the undesirable nickel. Data is processed immediately at the edge, and a cloud-based extension, the Britehouse Mobility Atajo OnEdge platform, delivers visualizations and alerts operators as necessary.
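A minimal sketch of what such a detection loop could look like is shown below: an Intel RealSense color stream feeds an OpenVINO-compiled image classifier, and any non-clear rating triggers an operator alert. The model file, input size, and class labels are hypothetical stand-ins, not Britehouse Mobility’s production code.

```python
# Hedged sketch of a screen-blockage detection loop. The model file
# ("screen_condition.xml"), the 224x224 input size, and the class labels
# are assumptions for illustration only.
import cv2
import numpy as np
import pyrealsense2 as rs
import openvino as ov

LABELS = ["clear", "partially_blinded", "blinded"]  # assumed classes

core = ov.Core()
model = core.compile_model("screen_condition.xml", "CPU")  # placeholder IR model
output_layer = model.output(0)

# Stream color frames from the RealSense camera mounted on the enclosed screen.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        frame = np.asanyarray(frames.get_color_frame().get_data())

        # Resize and normalize to the model's assumed NCHW input layout.
        blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
        blob = blob.transpose(2, 0, 1)[np.newaxis, ...]

        scores = model([blob])[output_layer][0]
        rating = LABELS[int(np.argmax(scores))]
        if rating != "clear":
            print(f"Alert operators: screen condition rated '{rating}'")
finally:
    pipeline.stop()
```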

“The solution improves safety and efficiency because it reduces the number of times people have to interrupt operations, take lids off, and have a look under the vibrating screen,” says Marco Capazario, head of the IoT division at Britehouse Mobility.

In addition, workers can now respond to real conditions rather than performing routine checks. It’s about more than just monitoring blockage, Capazario says. “It’s giving them insights into failures upstream and the ability to dig deeper in terms of root cause analysis,” he adds. Among the many questions the Britehouse Mobility/NTT solution can answer: Why are we experiencing efficiency losses at certain times, and what’s happening upstream of the plant that’s causing this? Can we identify this through data insights?

Collaboration Demonstrates Refinery Innovation

With Intel’s guidance, Britehouse Mobility/NTT worked with Framos, which installed Intel® RealSense cameras in industrial-grade enclosures. The collaboration helped Britehouse Mobility zero in on a camera faster. The RealSense camera fits the bill because of its volumetric capabilities and depth perception.

Because the object-detection and volumetric-sizing application is not too complex, the solution does not need massive amounts of compute. “We are not trying to analyze real live video at 60 frames per second. But if we do, we can send that information up to the cloud and utilize Intel toolsets that can render those results for us,” Bell says.

The cloud-based Atajo OnEdge platform ingests and stores data from the gateways for historical analysis. Through dashboards and reports, the platform enables users to track long- and short-term trends.

Endless Applications for Computer Vision

While the Britehouse Mobility/NTT solution is specific to the Impala use case, the machine vision technology at its heart can apply far more broadly. For Impala, the team is already working on another application that monitors the safety and site compliance of mobile cranes.

“We have industrialized the hardware and have a software layer that’s highly configurable so we can bolt on modules and deploy specific applications rapidly,” Capazario says. “There are big opportunities for us to improve a whole host of processes.” And implementations do not have to be restricted to the refining industry alone. Manufacturing and other sectors are also ripe for the picking. “It’s almost endless the applications in which it can be used,” Bell says.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Holographic Displays Bring Physical Retail to Life

3D holographic displays reinvent in-store digital display technology. In a world where convenience and personalization are no longer just desirable but expected, holographic technology provides new and interactive ways for businesses to strengthen customer relationships while enhancing brand awareness. And thanks to partnerships between companies like Proto, a holographic communications platform OEM, and Wachter, Inc., a leading national IT solutions integrator, deploying these advanced solutions is now within reach.

“There’s a mindset that 3D technology is either far off in the distance or too costly and too cumbersome to deploy into a scaled retail environment. With Proto and Wachter, we’re able to bring that fully immersive, engaged experience directly to the customer at an affordable cost—without any army of production folks to put it all together,” says Matt Tyler, Director of Strategic Innovation and Business Development at Wachter.

Proto makes it possible with its holographic communications platform that empowers retailers to deliver fully immersive shopping experiences. This innovative technology allows shoppers to touch, explore, and interact with products in ways previously unimaginable. Moreover, it can live-beam in shopping consultants or celebrities, enabling customers to engage in real-time conversations and gain deeper insights into products.

At the core of this transformative experience is Proto’s partnership with Wachter. Leveraging Wachter’s team of digital experts, retailers and businesses can design tailored solutions that address their specific needs and space constraints. With a deep understanding of digital challenges, Wachter empowers clients to unlock the full potential of Proto’s technology, ensuring seamless integration and optimal performance.

Shopping Redefined: The Retail Omnichannel Experience of Your Dreams

These 3D holographic displays are part of a much bigger movement in the retail space called omnichannel experiences. Harnessing built-in AI capabilities, the displays give shoppers a whole new level of interaction within the store environment. Shoppers can virtually view products, engage in real-time chats, and receive personalized recommendations tailored to their individual preferences and inquiries. This not only enhances the experience but also enables shoppers to make more-informed purchasing decisions.

Harnessing built-in #AI capabilities, shoppers can now experience a whole new level of #interaction within the store environment. @WachterInc via @insightdottech

Additionally, for retailers, integrating AI mitigates the risks associated with carrying higher-cost inventory. By analyzing virtual product views and the uptake of tailored recommendations, retailers can optimize inventory management strategies while effectively meeting customer demand.

“We’ve been thinking a lot about how we can streamline the retail process to be truly omnichannel—specifically in situations where businesses want to test new, high-value products where the inventory cost might not be justified in every store,” says Tyler. “How do you display that inventory without having to show it on the floor, and how do you educate your customer about the product?”

Proto recently announced the latest iteration of its holographic communications platform, the Proto M, a 21-inch display designed to be more accessible and affordable for retailers to implement (Figure 1). Retailers can place the Proto M on cosmetic and jewelry counters, for example, or even on shelves. Target, a general merchandise retailer, recently put the power of Proto M technology on display, live-beaming in a celebrity during the holiday season to entertain customers and encourage them to purchase specific products.

Proto M display with picture of a pair of women's boots.
Figure 1. The Proto M features versatile mounting options to give businesses the perfect view for their content. (Source: Proto Hologram)

For retailers and brands looking to deliver life-size experiences, Proto also offers Proto Epic, a standalone seven-foot unit powered by Intel® processors for performance and Intel® RealSense cameras for computer vision capabilities. The cameras can capture high-fidelity analytics about shopping behavior to inform business decision-making.

Empowering 3D Holograms Through Technology Partnerships

Proto’s patent-pending technology eliminates the need for 3D capture, making content creation easier and more cost-efficient with a single 4K SLR camera. The partnership with Wachter is crucial, providing the expertise needed to bring these solutions to life.

Wachter delivers a full-lifecycle, end-to-end solution, including proper analysis, design, engineering, installation, and maintenance for even the most difficult scenarios. The company can walk customers through the entire process—from creation to management, delivery, and playback of interactive hologram content. To ensure the success of deployments, Wachter verifies the power, network, and infrastructure—whether it’s just one deployment or a fleet of deployments across various locations.

“To take a product like Proto to market at scale, you need a partner that understands the entire managed-service ecosystem surrounding the product. Wachter has the depth and reach to go out and physically deploy the products, make sure all the infrastructure is in place so the entire system can operate properly,” says Tyler. “It just ends up being a winning team.”

Beyond Products: Holographic Experiences Reshape the Future

Expanding beyond retail, Wachter and Proto bring 3D hologram experiences across many industries. For example, in higher education, the technology can be used to beam in guest lecturers or beam out professors to multiple campuses. In stadiums, it can provide the ultimate fan experience. Even in manufacturing, this technology can bring in overseas experts to help perform maintenance of systems. But no matter the solution, businesses can trust and rely on Wachter to bring their ideas into reality.

“We’re taking digital displays beyond a screen to a truly unique, one-of-a-kind omnichannel experience that blends the physical with the digital,” says Tyler.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Self-Service Technology Speeds Post Office Operations

At today’s quick-serve restaurants, you can order, pick up, and pay for food without help. At grocery stores, you can scan goods at the self-checkout station and be on your way. But if you want to drop off a parcel at the post office, it’s a different story. After waiting in a long line, you have to answer a series of questions: Regular or express? Do you want insurance? Tracking? Does your package contain any of the following items?

Postal queues are a pain point not only for customers but for post offices, which are chronically short-staffed and operate under intense deadline pressure. In the 21st century, why can’t there be a more efficient way to mail packages?

In fact, there is. Modern edge AI and computer vision systems can automatically process packages and take payments without requiring help from busy postal employees. Delivery service companies can also use the systems, which saves time for everyone and improves the accuracy of package measurements.

Improving Package Processing with Self-Service Technology

People prefer self-service not only for efficiency but because it gives them a sense of control. Research has shown that even when cashiers are available, grocery customers often head for self-checkout, says Ann Snitko, Chief Product Development Officer at international self-service solutions company Omnic.

“#SelfService also makes a big difference to postal carriers, since the “first mile” of delivery—the drop-off and processing phase—is the most expensive.” – Ann Snitko, OMNIC via @insightdottech

Self-service also makes a big difference to postal carriers, since the “first mile” of delivery—the drop-off and processing phase—is the most expensive, Snitko says.

That’s because preparing packages for delivery is time-consuming, and a worldwide shortage of service workers makes the problem worse. Customers have no choice but to stand in lines that may extend outside the building during busy seasons. Processing time also varies considerably by location. At logistics companies in Dubai, for example, it may take up to two hours to reach a clerk, Snitko says.

To optimize first-mile service, Omnic created the Intel-powered OMNI Drop Off solution, an AI- and computer vision-based device that lets customers process their own packages for delivery in under two minutes instead of waiting in line (Figure 1).

photo of the OMNI Drop Off self-service postal solution
Figure 1. The OMNI Drop Off self-service postal solution speeds parcel processing for customers and postal operators. (Source: Omnic)

After entering their information on a computer screen—or through a voice assistant—customers place their package on a built-in scale. Using an Intel® RealSense computer vision camera, the Omnic software measures the package’s dimensions and scans the contents for prohibited items, notifying a representative if there’s a problem.
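As a rough illustration of how a depth camera can yield those dimensions, the sketch below estimates a parcel’s footprint and height from a single top-down RealSense depth frame. The counter distance, masking threshold, and bounding-box logic are illustrative assumptions rather than Omnic’s production algorithm, and prohibited-item scanning would require a separate detection model.

```python
# Hedged sketch: estimate a parcel's length, width, and height from a top-down
# depth frame. Camera-to-counter distance and thresholds are assumed values.
import numpy as np
import pyrealsense2 as rs

COUNTER_DISTANCE_M = 0.80   # assumed distance from camera to empty counter
MIN_HEIGHT_M = 0.01         # ignore anything shallower than 1 cm

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
depth_m = np.asanyarray(depth_frame.get_data()) * depth_scale

# Height above the counter for every pixel; mask out the empty counter surface.
height_map = COUNTER_DISTANCE_M - depth_m
parcel_mask = height_map > MIN_HEIGHT_M

rows, cols = np.where(parcel_mask)
if rows.size:
    # Deproject the bounding-box corners at counter depth to approximate the
    # parcel footprint in meters (ignores perspective error for this sketch).
    intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()
    top_left = rs.rs2_deproject_pixel_to_point(
        intrin, [int(cols.min()), int(rows.min())], COUNTER_DISTANCE_M)
    bottom_right = rs.rs2_deproject_pixel_to_point(
        intrin, [int(cols.max()), int(rows.max())], COUNTER_DISTANCE_M)

    length = abs(bottom_right[0] - top_left[0])
    width = abs(bottom_right[1] - top_left[1])
    height = float(height_map[parcel_mask].max())
    print(f"Estimated parcel size: {length:.2f} x {width:.2f} x {height:.2f} m")

pipeline.stop()
```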

“Prohibited contents are a big concern, especially for international deliveries, because carriers are responsible for what they ship,” Snitko says.

A printer creates a label, and the system calculates the shipping price. Omnic’s software can be customized to add local taxes and fees, and customers can pay with a credit card, an app like Apple Pay or Google Pay, or a QR code. They can also print out a tracking number and choose to receive text notifications about delivery. Personal information does not need to be stored in the system, and Omnic follows all local privacy laws, Snitko says.

When a package is deemed ready to go, the door to a secure storage locker beneath the counter pops open and the customer deposits it for pickup. At post offices that don’t want lockers, customers leave their packages at a designated spot.

In the UAE, Switzerland, and Canada, where the OMNI Drop Off has been deployed, total processing time for customers averages 60 to 90 seconds.

Postal operators and delivery company workers can also use the OMNI Drop Off, and when there’s a customer account on file, they can be done in just 15 to 30 seconds. In addition to helping workers avoid manual errors and label reprinting, the system records and stores the data it receives and can generate reports to help managers track operations and speed.

“According to our calculation, the OMNI Drop Off reduces staffing needs by four full-time employees, eliminating the need to hire additional staff,” Snitko says. Post offices and delivery companies can customize the OMNI hardware and add their own branding. Omnic also offers a packing station with supplies, providing customers with an all-in-one packaging and mailing solution.

Extending Computer Vision Solutions

Similar Omnic technology can be used for other types of deliveries, and the company now offers 25 different retail solutions, ranging from temperature-controlled food lockers for restaurant deliveries to clothes storage for laundry services. The company continues to optimize its computer vision models by using the Intel® OpenVINO toolkit, gaining an advantage with Intel products, technologies, and tools to further improve the self-service experience.

One of its latest innovations is parcel lockers for homes and apartment complexes. While some carriers already provide parcel lockers, they are brand-exclusive. Omnic parcel lockers are designed to be vendor-neutral, allowing customers to receive packages from all delivery services in one place.

The company is also working on biometric authentication, which could make it easier for locker owners to retrieve their goods. It is also developing predictive analytics to improve last-mile logistics. “Markets everywhere are evolving towards self-service,” Snitko says. “With AI and computer vision—running on Intel-powered systems—businesses can optimize operations, enhance security, and improve the experience of customers and employees.”

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Optimizing Surgical Teams: AI’s Role in the OR

When you or a loved one faces surgery, you naturally want to make sure that you have the most skilled surgeon. Not to mention that the procedure takes place in the most up-to-date facility with the most sophisticated technology to ensure the best possible outcome. After all, the stakes can be incredibly high. So it’s disconcerting to think that, until recently, surgeons lacked decision-support resources in the OR.

While medical use of technology has evolved rapidly over the past few years, the surgical field has been a little slower to adopt these advances than some other industries. Surgeons are used to relying on the skill of their hands and the knowledge gained through experience—with good reason. But medical technology isn’t all about robot arms and AI-guided surgery; there’s a lot to be gained just by freeing healthcare data from its traditional silos and giving surgeons access to that information where and when they need it most—in the OR.

We talk more about this with Dennis Kogan, Founder and CEO of digital surgery platform provider Caresyntax, as well as the dynamic challenges of the operating room, the importance of a good partner ecosystem, and how AI-assisted surgery can improve patient outcomes (Video 1).

Video 1. Dennis Kogan, CEO of Caresyntax, discusses the integration of real-time AI-driven data into surgical procedures, emphasizing its critical timing and impact on surgeons. (Source: insight.tech)

How are technological advancements in the OR changing healthcare expectations?

My dad is a surgeon, and years ago when I was in college, I was talking to him about how much decision support athletes get around things like performance management, situational awareness, and analytics. And he told me, “We have nothing like this in surgery. We have very interesting and important medical devices, and we’re continuously getting clinical innovation into our hands, but there isn’t really a lot of data-usage and decision-making support.”

And up until a few years ago that hadn’t changed much. There was a ton of innovation around medical devices, but at the end of the day that was still helping only how surgeons operated with their hands. The advancements that we see now enable surgical teams to have better decision-support mechanisms as well.

I think there is more and more expectation that surgeons cannot just be thinking about the risks of the procedure by themselves. And they do want support; they do want additional information to stratify risks more. And doing it all in their heads is probably no longer acceptable.

What are some of the challenges integrating new technologies into the OR?

Relative to other types of therapies, patients are probably less aware of what’s happening in the OR. Naturally—they’re under anesthesia. What they want is to understand how likely they are to have a good outcome. And I think they would probably be surprised that not as much integrated decision-making support is available to their surgical teams as they would expect.

The challenge to innovating the surgical field with technology is that surgery is a real-time intervention, and you have to integrate the AI and the software so that it runs in that setting. There should be almost no lag time in the OR. And that by itself is a higher hurdle than for a lot of other information technology used in healthcare. Of course, there is also a pretty high threshold for quality and operational effectiveness.

The surgical environment is also extremely dynamic. So how does a surgeon adapt to a changing clinical picture during the procedure? And it’s not only quantifiable activities and techniques; there’s also communication and teamwork. Surgery is actually a team sport. Part of the outcome depends on how well a surgeon does a certain maneuver, but another part of it is how well they communicate with the nursing staff and anesthesiologist. It’s so complex that it’s almost impossible to foresee how it could be replaced by artificial intelligence in the foreseeable future.

But AI does have a lot to give in terms of bringing the right information and options to the fingertips of physicians, just because of that dynamism. In one day a surgical team may be operating on very different types of patients: a healthy 25-year-old female and then a very sick 85-year-old male. The team has to be able to adjust a lot of inputs and make a lot of decisions.

That cognitive overload can cause suboptimal decisions or mistakes. Probably one out of seven cases has some sort of significant complication—over 15%. And so what we’re talking about here is proactive risk management through situational awareness—through automation. It’s about reducing and removing unwarranted variability driven by cognitive overload and a changing clinical picture. The best use cases we see right now for AI are in showcasing specific information about a given patient and a given procedure to be able to guide the entire pathway for that procedure and have the outcome be better than it would have been without that support.

“The challenge to innovating the surgical field with #technology is that surgery is a real-time intervention, and you have to integrate the #AI and the #software so that it runs in that setting” – Dennis Kogan, @caresyntax via @insightdottech

What is the benefit of combining AI with patient data?

First and foremost, truly integrated surgical-decision support touches on all points of the peri-operative cycle. Because everything that happens before and after a surgery is also extremely important, the best-integrated platforms allow for connectivity between the operating room and the pre- and postoperative spaces, times, and activities.

There are decisions made right before the patient enters the operating room—preparing the right tools, the right medications, having the right people at the table. It also includes the electronic medical record, because that has a trove of data about the patient and his or her predispositions. Then there’s the situation inside the OR, where medical devices and video cameras can be connected. And then afterward: knowing what level of risk that patient is exiting the OR with may change the protocol of how they are going to be taken care of. Maybe they can go home; maybe they need to be in the ICU; maybe they need an extra dose of antibiotics.

So to get the best, smartest insights you have to have a full peri-operative clinical and operational record, but the crown jewel is the intra-operative space—because that of course is the most mission-critical piece, where things can really go wrong. And because of that, and because the OR is real time, it requires an additional level of sophistication. And it’s not, in technical terms, a cloud-friendly territory. It’s all on the edge, because you cannot rely on two-second upload and download from a cloud. So edge computing and the Internet of Things technology toolkit are extremely important here.

At the same time, this technology solution has to be very robust and attractive from the perspective of deployment and cost. Because at the end of the day, anything that is overly expensive or unwieldy—another huge machine being rolled into an already very packed operating room—is just not going to work.

It took us at Caresyntax—with the help of a few technology partners—years to develop this platform in a way that achieves all these parameters. But I do know that it’s possible. Things are still sort of at the beginning, but I think the next decade will probably see every OR being equipped with these kinds of systems. And in 10 years physicians will be wondering how they were working without it.

How can hospitals future-proof this kind of investment?

Every industry goes through a cycle of having a few vendors create kind of a walled garden at first, and then gradually users expect more and more flexibility to add value and to add new applications. I think surgery and healthcare will need to undergo the same change.

The medical-device world has a lot of proprietary intellectual property, for some good reasons. Historically that’s been a dominant mindset for physicians, too—thinking of the operating room through the prism of a device and a vendor, to a certain degree. So the first investment that needs to be made is in reinventing and recalibrating that mindset. The operating room should be seen not as an extension of a leading device platform but as belonging to that horizontal process of achieving the best outcome.

Do you have any use cases or customer examples you can share?

So we’ve been able to show that using these advanced platforms in the OR can lift performance level, and not only for surgeons but also for other physicians and clinical collaborators as well. For example, nurses. After the pandemic a lot of folks entered the nursing workforce without maybe as much training as they would have had before. And then there’s a lot of surgical volume right now because so many surgeries were bumped. So there are a lot of newer nurses who need to come up to speed very quickly. We’re increasingly deploying something like an interactive, step-by-step navigation guide in the OR. Getting step-by-step support in the right moment of the procedure can be extremely helpful to someone who may still be lacking confidence or experience in that setting.

How does Caresyntax work with partners to bring these platforms into ORs?

Being surgery specialists, we have a very good view of what the end applications and use cases should be, but we don’t have as much experience building the infrastructure. We don’t have the benchmarks and comparables from other use cases that may be similar in terms of the rigor and the actual architecture. And an integrated smart-surgery platform that is plug-and-play, that is very smart but not very heavy in terms of hardware content, something that is able to generate information but also has the capability and bandwidth to receive algorithms, produce AI, and showcase it in real time—that’s a pretty sophisticated set of requirements.

Intel has been one of the partners that has really plugged in with us, almost inside our team, to make this happen. Designing the architecture, finding the right components, utilizing some of their components—such as OpenVINO that allows for this AI penetration and usage—all of these things are very important. Without a partner like Intel we would have been, at the very least, much slower, looking for every piece ourselves and probably making more mistakes.

Alongside Intel, of course, we also work with cloud-solution providers—AWS and Google Cloud. Because there has to be an edge-to-cloud transition. As I mentioned before, it’s a preoperative, intra-operative, and postoperative space, so you have to continuously go to the edge and back to the cloud and make the information interchangeable. And actually they all collaborate in between themselves—Intel and Google, Intel and AWS—which has been very rewarding as well.

Of course, the pandemic was an impediment to innovation, but that has subsided lately. I think everybody’s really looking at surgery and saying, “It’s not as safe as flying; it’s not as safe as even some other medical procedures. It’s time to improve it.” And it takes an ecosystem of players to achieve that.

What’s your most important takeaway about the use of AI in surgery?

I very often see that folks think of surgery as something that’s been figured out, something that’s reached maturity and doesn’t require innovation. It doesn’t give me any pleasure to say that this is not the case. But there is the opportunity to get surgery to the same place as, say, aviation. I don’t think you and I would accept getting on a plane with a 15% chance of something going wrong in that flight.

It’s a huge problem that has not only clinical implications but cost implications. Next to pharmaceutical therapies, surgical therapies are the second-most-used way of correcting a disease. It’s probably 20% to 30% of all healthcare spend in the US.

So if we’re going into a surgery, I think we should have the feeling that everything is going to be okay. And that should be backed by real statistics. We really can make surgery safer and smarter. It will have broad impact on patient health for millions of people, and a broad impact on cost as well. There’s ample room for improvement as long as the mindset for innovation is there.

Related Content

To learn more about AI-assisted surgical technology, listen to Staffing AI in the OR: With Caresyntax and follow Caresyntax at @caresyntax and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Groundbreaking Technology Powers Smart Building Solutions

Smart buildings are the future—and they are here today thanks to state-of-the-art technology that can pay for itself through more sustainable operations. Smart building solutions collect meaningful data from IoT devices such as access control sensors, plant sensors, physical security systems, conference rooms, lighting, and fire alarms—displaying it in one 360-degree view.

Why is this important? Because a holistic view and real-time data analytics enable businesses to use power more efficiently, reduce a building’s environmental footprint, and lower overall operational costs. And with today’s commercial buildings consuming 35% of electricity in the U.S. while wasting about a third of the energy they use, sustainability is a top priority for almost every organization.

Employees also benefit from smart building technology through a safer, more comfortable environment, efficient use of conference rooms, better general upkeep, and a pleasant place to work.

“We are at the helm in making this future a reality—integrating and deploying innovative solutions that offer uplifting environments that intertwine nature, integrate technology throughout, and are fully immersive,” says Matt Tyler, Vice President of Strategic Innovation and Business Development at national solution and service provider Wachter, Inc.

Discovering the Wonder of Wachter

To showcase these revolutionary solutions in action, Wachter turned its 34,000-square-foot Mount Laurel, New Jersey headquarters into a Customer Experience Center—a live model for ideation and discovery. “This is a space where we bring new technologies to life so our customers can view and experience game-changing capabilities before deploying them in their businesses,” says Tyler.

For example, the building’s biophilic lighting system, provided by Signify—a leader in energy-efficient lighting solutions—uses Power over Ethernet (PoE) technology to monitor and control power delivery over the network, reducing drain on the power grid and lowering operating costs.

“This is a space where we bring new #technologies to life so our customers can view and experience game-changing capabilities before deploying them in their businesses” – Matt Tyler, @WachterInc via @insightdottech

The PoE-connected lighting also allows people to reestablish their vital biological connection to the cycles of natural light. For example, the system measures sunlight coming in through windows, changing the intensity and hue of overhead lights to keep pace with its rhythms throughout the day. Studies have shown that maintaining this connection with nature enhances productivity and well-being. And if too much light and heat come in, the blinds are automatically lowered. If it’s still too hot, the software directs the HVAC system to turn on the air conditioning.
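The control logic behind that behavior can be pictured as a simple loop. The sketch below is a conceptual illustration only: the setpoints, sensor values, and actuator commands are hypothetical placeholders, and a real deployment would issue them through the lighting and building-management vendor APIs.

```python
# Conceptual sketch of the daylight-following control loop described above.
# Setpoints and actuator names are hypothetical; they do not reflect a
# specific Signify or Wachter API.
TARGET_LUX = 500           # assumed desired illuminance at desk level
MAX_SOLAR_GAIN_LUX = 2000  # assumed threshold before lowering blinds
MAX_ROOM_TEMP_C = 25.0

def control_step(daylight_lux: float, room_temp_c: float) -> dict:
    """Return actuator commands for one control cycle."""
    actions = {}

    # Dim the PoE luminaires so daylight plus electric light meets the target.
    electric_share = max(0.0, TARGET_LUX - daylight_lux) / TARGET_LUX
    actions["dim_level"] = round(min(1.0, electric_share), 2)

    # Too much sun: lower the blinds first, then fall back to HVAC cooling.
    actions["blinds_down"] = daylight_lux > MAX_SOLAR_GAIN_LUX
    actions["cooling_on"] = actions["blinds_down"] and room_temp_c > MAX_ROOM_TEMP_C

    return actions

# Example cycle: bright afternoon sun and a warm room.
print(control_step(daylight_lux=2400, room_temp_c=26.5))
# -> {'dim_level': 0.0, 'blinds_down': True, 'cooling_on': True}
```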

Human-centered energy systems don’t end with lighting. Body heat sensors reveal occupancy, switching off lights and heat when the office is empty and adjusting temperature settings as people enter and leave a room. No one has to get up to change the thermostat.

The Signify solution deployed by Wachter not only creates a pleasant workplace but also a more sustainable one, giving organizations an unprecedented opportunity to increase sustainability while lowering operational costs.

Autonomous Decisions with Real-Time Analytics

Wachter’s technology—also deployed in its Customer Experience Center—streamlines maintenance, using a platform developed by AI IoT company Scenera. The solution collects and analyzes sensor data throughout the building with real-time analytics about energy savings, alarm safety, and surveillance in a single dashboard for seamless monitoring. 

With the Scenera platform, occupancy data determines cleaning times based on daily use, reducing the need for help in a time of labor shortages. Data from water and pH sensors—embedded in oxygen-producing green plants in the facility—is sent to a service company when the plants need attention, rather than on a set schedule, lowering costs.

Wachter uses the Scenera solution as part of its CEC to showcase how data in an office setting can be collected and used to make autonomous decisions. A great example is tracking the number of people entering the building, combined with outside weather data showing rain or snow, and alerting facilities management that the front entrance should be inspected for slipping hazards.

“The building is intelligent enough to say that there is something wrong and here’s what to do about it,” Tyler says. “Video cameras alert security if objects block emergency exits. Leak detectors direct maintenance crews to fix faucets or toilets before employees start to complain. HVAC data is crunched to spot and diagnose problems, summoning technicians while avoiding the need for disruptive troubleshooting.”

Like Signify, Scenera uses advanced Intel IoT technology, which enables AI-powered data analytics from edge to cloud. “Intel is the backbone in making it all happen. It’s proven, reliable technology that works,” Tyler says. “It’s our standard and it’s predominantly our customers’ standards as well.”

Lower Costs with Smart Building Solutions

Today’s smart building solutions aren’t just for offices. In a retail setting, they can add value in unexpected ways. For example, in a U.S. grocery chain, Wachter deployed Signify technology called Visible Light Communications (VLC). This groundbreaking solution leverages “smart” lightbulbs to send customers navigation instructions. LED bulbs modulate their light in a way that is imperceptible to humans but captured by cellphone cameras. If a customer creates a shopping list on the grocer’s app and enters the store, the navigation system swings into action, displaying the most efficient path for collecting desired items.
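The navigation piece can be pictured as a shortest-path search over a map of the store. The sketch below is a simplified illustration: the aisle graph, item locations, and distances are invented placeholders, not the grocer’s or Signify’s actual data, and the VLC positioning is assumed to already provide the shopper’s location.

```python
# Hedged sketch of in-store route planning over a hypothetical aisle graph.
# Aisles, item locations, and distances are made up for illustration.
import networkx as nx

store = nx.Graph()
store.add_weighted_edges_from([
    ("entrance", "aisle1", 5), ("aisle1", "aisle2", 8),
    ("aisle2", "aisle3", 8), ("aisle1", "dairy", 12),
    ("aisle3", "checkout", 6), ("dairy", "aisle3", 10),
])
item_locations = {"pasta": "aisle2", "sauce": "aisle2", "milk": "dairy"}

def plan_route(shopping_list, start="entrance", end="checkout"):
    """Greedy nearest-next-stop routing; adequate for a short shopping list."""
    stops = {item_locations[item] for item in shopping_list}
    route, here = [start], start
    while stops:
        nxt = min(stops, key=lambda s: nx.shortest_path_length(
            store, here, s, weight="weight"))
        route += nx.shortest_path(store, here, nxt, weight="weight")[1:]
        stops.remove(nxt)
        here = nxt
    route += nx.shortest_path(store, here, end, weight="weight")[1:]
    return route

print(plan_route(["pasta", "sauce", "milk"]))
# -> ['entrance', 'aisle1', 'aisle2', 'aisle3', 'dairy', 'aisle3', 'checkout']
```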

The app also uses AI to scan customers’ shopping lists and make upselling suggestions. “If you’re buying pasta and sauce, it might say, ‘This red wine would pair very well with the dinner you’re planning—you can pick it up here for a 10% discount’,” Tyler says.

Grocers can also use VLC navigation to assist the pickers who assemble customer orders for delivery services. Finding people who know the store layout well enough to pick efficiently can be challenging, Tyler says: “With this solution, you can hire someone who has never been in the store, and they can pick groceries as efficiently as a 10-year veteran.”

VLC illustrates how flexible IoT applications have become. Store managers can deploy it through a single LED bulb or swap out all their light fixtures for LEDs. They can connect to power through existing wiring or use a newer method that delivers it through Ethernet cables. PoE lighting can be faster to install, and in many jurisdictions, there’s no need for a licensed electrician to do it.

Either way, LED lights save energy, consuming just 10 to 12 Watts to emit the same amount of light as a 150-Watt incandescent bulb.

A More Efficient Future

The more connected that smart buildings become, the less time people need to spend managing them. “Buildings will eventually be fully aware of their surroundings and inform everyone of what needs to be done to make things right. They are going to be so connected and intelligent that you won’t need a facilities manager,” Tyler says.

By helping businesses understand and deploy smart building innovations, Wachter is at the forefront of this revolution, helping to create a future in which both energy and labor are deployed with maximum efficiency, freeing people to use their highest skills while feeling more comfortable in their surroundings.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Generative AI Composes New Opportunities in Audio Creation

Despite what many may think, generative AI extends beyond generating text and voice responses. Among the growing fields is audio-based generative AI, which harnesses AI models to create and compose fresh, original audio content. This opens a world of new possibilities for developers and business solutions.

In this podcast, we discuss the opportunities presented by audio-based generative AI and provide insights into how developers can start building these types of applications. Additionally, we explore the various tools and technologies making audio-based generative AI applications possible.

Listen Here


Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Intel

Our guest this episode is Ria Cheruvu, AI Software Architect and Generative AI Evangelist for Intel. Ria has been with Intel for more than five years in various roles, including AI Ethics Lead Architect, AI Deep Learning Research, and AI Research Engineer.

Podcast Topics

Ria answers our questions about:

  • (1:52) Generative AI landscape overview
  • (4:01) Generative AI for audio business opportunities
  • (6:29) Developing generative AI audio applications
  • (8:24) Available generative AI technology stack
  • (11:45) Developer resources for generative AI development
  • (14:36) What else we can expect from this space

Related Content

To learn more about generative AI, read Generative AI Solutions: From Hype to Reality. For the latest innovations from Intel, follow them on X at @IntelAI and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore not only the latest developments in the Internet of Things, but AI, computer vision, 5G, and more. Today we’re going to be talking about generative AI, but a very interesting area of generative AI, which is the audio space, with a familiar face and friend of the podcast, Ria Cheruvu from Intel. Thanks for joining us again, Ria.

Ria Cheruvu: Thank you, Christina, excited to be here.

Christina Cardoza: So, not only are you AI Software Evangelist for Intel, but you’re also a Generative AI Evangelist. So, what can you tell us about what that means and what you’re doing at Intel these days?

Ria Cheruvu: Definitely. Generative AI is one of those transformational spaces in the AI industry that’s impacting so many different sectors, from retail to healthcare, aerospace, and so many different areas. I’d say that as part of being an AI evangelist it’s our role to keep up to date and help educate and evangelize these materials regarding AI.

But with generative AI that’s especially the case. The field moves so rapidly, and it can be challenging to keep up to date with what’s going on. So that’s one of the things that really excites me about being an evangelist in the generative AI space around what are some of the newer domains and sectors that we can innovate in.

Christina Cardoza: Absolutely. And not only is there always so much going on, but take generative AI for example: there’s so many different areas of generative AI. I almost think of it—like, artificial intelligence is an umbrella term for so many of these different technologies—generative AI is also sort of an umbrella for so many different things that you can do. I think a lot of people consider ChatGPT as generative AI, and they don’t realize that it really goes beyond that.

So that’s sort of where I wanted to start the conversation today. If you could tell us a little bit more about the generative AI landscape: where things are moving towards, and the different areas of development that we have, such as the text based or audio areas.

Ria Cheruvu: Sure. I think the generative AI space is definitely developing in terms of the types of generative AI. And, exactly as you mentioned, ChatGPT is not the only type of generative AI out there, although it does represent a very important form of text generation. We also have image generation, where we’re able to generate cool art and prototypes and different types of images using models like Stable Diffusion. And then of course there’s the audio domain, which is bringing in some really unique use cases where we can start to generate music: we can start to generate audio for synthetic avatars, and so many other different types of use cases.

So I know that you mentioned the tech stack, and I think that that’s especially critical when it comes to being able to understand what are the technologies powering generative AI. So, especially with generative AI there’s a couple of common pain points. One of them is a fast run time. You want these models that are super powerful and taking up a lot of compute to be able to generate outputs really quickly and also with high quality. That pertains to both text, to image, and then all the way to audio too.

For the audio domain it’s especially important, because you have these synthetic audio elements that are being generated, or music being generated, and it’s one of those elements that we pay a lot of attention to, similar to images and text. So I’d say that the tech stack around optimizing generative AI models is definitely crucial and what I investigate as part of my day-to-day role.

Christina Cardoza: I’m looking forward to getting a little bit deeper into that tech stack that you just mentioned. I just want to call out generative AI for audio. You mentioned the music-generation portion of this, and I just want to call that out because we’ve had conversations around voice AI and conversational AI, and this is sort of separate from that area. It’s probably adjacent to it, but we’re not exactly talking about those AI avatars or chatbots that you’re communicating with and that you can have conversations with.

But, like you said, the music composition of this, the audio composition of this—so I’m curious, what are the business opportunities for generative AI for audio? Just so that we can get an understanding of the type of use cases that we’re looking at before we dive deeper a little bit into that tech stack and development.

Ria Cheruvu: Yeah, and I think that you brought up a great point in terms of conversational voice agents and how does this actually relate. And I think it’s really interesting to think about how we use AI for reading in and processing audio, which is what we do with a voice agent—like a voice assistant on our phones compared to generative AI for audio, where we’re actually creating this content.

And I know you mentioned, for example, being able to generate these synthetic avatars or this voice for being able to communicate and call and talk to. And I think, definitely, the business applications for those, the first ones that we think about are call centers, or, again, metaverse applications where we have simulated environments and we have parties or actors that are operating using this audio. There’s also additional use cases for interaction in those elements.

And then we go into the creative domain, and that’s where we start to see some of the generative AI for music-related applications. And this, to me, is incredibly exciting. Because we’re able to start to look at how generative AI can complement artists’ workflows, whether you’re creating a composition and using generative AI to figure out and sample some beats and tunes in a certain space, and also dig deeper into existing composition. So, to me, that’s also a very interesting cultural element of how musicians and music producers can connect and leverage generative AI as part of their content-creation workflows.

So, while that’s not a traditional business use case—like what we would see in call centers, interactive kiosks that can use audio for retail, and other use cases—I also believe that generative AI for music has some great applications in the content creation, artistic domain. And eventually, that could also come into other types of domains as well where we need to generate certain sound bites, for example, training synthetic data for AI systems to get even better at this.

Christina Cardoza: Yeah, it’s such an exciting space. I love how you mentioned the artistic side of this. Because we see generative AI with the image creation, like you mentioned, creating all of these different types of pictures for people and paintings—things like that. So it’s interesting to see this other form that people can take and express their artistic capabilities with generative AI.

Because we talked about how generative AI—you can use it for text or image generation—I’m curious what the development process for generative AI for audio is. Are there similarities that developers can take from text or image generation? Or is this a standalone development process?

Ria Cheruvu: That’s a great question. I think that there’s a couple of different ways to approach it as it stands in the generative AI domain. One of the approaches is definitely adapting the model architectures that are already out there for audio and music generation, and also leveraging the architectures for other types of generative AI models. For example, Riffusion is a popular early model in the generative AI-for-audio space, though it’s still relatively new; with the advancements in generative AI, there are more and more models being created every day.

This particular Riffusion model is based on the architecture of Stable Diffusion, the image-generation model, in the sense that it lets us generate waveforms instead of images. Similar variants are popping up, as well as newer ones that ask, “How do we optimize the architecture that we’re leveraging for generative AI and structure it in a way that you can generate audio sound bites or audio tokens that are customized for the audio space?”

I was talking to someone who is doing research in the music domain, and one of the things that we were talking about is the diversity and variety of input data that you can give these models in the audio domain—whether that’s notes, as part of a piano composition, all the way to raw waveforms, or specialized input formats for different use cases, like MIDI. There’s a lot of diversity in the types of input data and outputs that we expect from these models.
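To make the Riffusion-style approach Cheruvu describes more concrete, the sketch below generates a spectrogram image with a Stable Diffusion pipeline and converts it back to audio with Griffin-Lim phase reconstruction. The model ID is assumed to be loadable as a standard diffusers pipeline, and the pixel-to-decibel scaling and mel parameters are rough approximations rather than the exact Riffusion implementation.

```python
# Hedged sketch of spectrogram-based audio generation in the style of Riffusion.
# Scaling constants and mel parameters are approximations for illustration.
import numpy as np
import librosa
import soundfile as sf
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("riffusion/riffusion-model-v1")
image = pipe("funk bassline with a jazzy saxophone solo").images[0]

# Treat pixel intensity as a log-scaled mel spectrogram (low frequencies at bottom).
spec_img = np.asarray(image.convert("L"), dtype=np.float32)[::-1, :]
mel_power = librosa.db_to_power((spec_img / 255.0) * 80.0 - 80.0)  # map 0..255 to -80..0 dB

# Invert the mel spectrogram to a waveform with Griffin-Lim phase reconstruction.
audio = librosa.feature.inverse.mel_to_audio(
    mel_power, sr=22050, n_fft=2048, hop_length=512)
sf.write("generated_clip.wav", audio, 22050)
```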

Christina Cardoza: And I assume with these models, in order to optimize them and to get them to perform well and to deploy them, there is a lot of hardware and software that’s going to go into this. We mentioned a little bit of that tech stack in the beginning. So, what types of technologies make these happen, or train these models and deploy these models, especially in the Intel space? How can developers partner with Intel to start working towards some of these generative AI audio use cases and leverage the technologies that the company has available?

Ria Cheruvu: As part of the Intel® OpenVINO toolkit, we’ve been investigating a lot of interesting generative AI workloads, but audio is definitely something that keeps coming back again and again as a very useful and interesting use case, and as a way to prompt and test generative AI capabilities. I’d say that as part of the OpenVINO Notebooks repository we are incorporating a lot of key examples when it comes to audio generation—whether it’s the Riffusion model, which we had a really fun time using with other teams across Intel to generate pop beats similar to something that Taylor Swift would make, or more advanced models, like generating audio to match something that someone is speaking. So there’s a lot of different use cases and complexity.

With OpenVINO we are really targeting that optimization part, which is based on the fundamental recognition that generative AI models are big in terms of their size and their memory footprint. And naturally, within the foundations for all of these models—be it audio, image generation, text generation—there are certain elements that are just very large and that can be optimized further. So by using compression and quantization-related techniques to halve the model footprint, we’re able to achieve a lot of reduction in terms of the model size, while also ensuring that the performance is very similar.
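
As a rough illustration of that footprint reduction (a sketch under the assumption of an already-exported OpenVINO IR file, not Intel’s exact workflow), 8-bit weight compression with OpenVINO and NNCF might look like this; the file names are placeholders.

```python
# Illustrative sketch only. Assumes the "openvino" and "nncf" packages; the model
# files are hypothetical placeholders for an exported audio-generation model.
import openvino as ov
import nncf

core = ov.Core()
model = core.read_model("audio_generator.xml")

# 8-bit weight compression shrinks the on-disk and in-memory footprint of large
# generative models while keeping outputs close to the original.
compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT8_ASYM)
ov.save_model(compressed, "audio_generator_int8.xml")
```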

Then all of this is motivated by a very interesting notion of local development, where you’re starting to see music creators or audio creators looking to move towards their PCs in terms of creating content, as well as working on the cloud. So with that flexibility you’re able to essentially do what you need to do on the cloud in terms of some of your intensive work—like annotating audio data, gathering it, recording it, collaborating with different experts to create a data set that you need. And then you’re able to do some of your workloads on your PC or on your system, where you’re saying, “Okay, now let me generate some interesting pop beats locally on my system and then prototype it in a room.” Right?

So there’s a lot of different use cases for local versus cloud computing. And one of the things that I see with OpenVINO is optimizing these architectures, especially the bigger elements when it comes to memory and model size, but also being able to enable that flexibility between traversing the edge and the cloud and the client.
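
One way to picture that flexibility, assuming the same OpenVINO IR file is shipped to both environments, is OpenVINO’s device selection: the hypothetical model below runs on whatever local accelerator is available, or on a cloud CPU, without code changes.

```python
# Illustrative sketch only; the model file is a placeholder carried over from above.
import openvino as ov

core = ov.Core()
model = core.read_model("audio_generator_int8.xml")

# "AUTO" lets OpenVINO pick the best available device on this machine (CPU, integrated
# GPU, NPU), so the same file can run on a laptop in the room or on a cloud instance.
compiled = core.compile_model(model, device_name="AUTO")
print("Running on:", compiled.get_property("EXECUTION_DEVICES"))
```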

Christina Cardoza: I always love hearing about these different tools and resources. Because generative AI—this space—it can be intimidating, and developers don’t know exactly where to start or how they can optimize their model. So I think it’s great that they can partner with Intel or use these technologies, and it really simplifies the process and makes it easy for them so they can focus on the use case, and they don’t have to worry about any of the other complications that they may come across.

And I love that you mentioned the OpenVINO Notebooks. We love the OpenVINO Notebooks repository, because you guys provide a wealth of different tutorials, sample code, and information for all of these different things we talk about on the podcast—how developers can get started, experiment with it, and then really create their own real-world business use cases. Where else do you think developers can learn about generative AI for audio, and learn how to develop it and build it?

Ria Cheruvu: Yeah. And definitely, Christina, I think we’re very excited about being able to advance a lot of the development, but also the short prototypes that you can do to actually take this forward and partner with developers in this space, and to be able to take it further with additional, deeper engagements and efforts.

I think to answer your question about a deeper tech stack, one of the examples that I really love to talk about—and I was able to witness firsthand as part of working through and creating this—is how do you exactly take some of the tutorials and the workloads that we’re showing in the OpenVINO Notebooks repo and turn them into a reality for your use cases?

So, at Intel we partner with Audacity, an open-source tool for audio editing and creation, among a couple of other different efforts. It’s really this one-stop, Photoshop-like tool for audio editing. And one of the things that we’ve done is integrate OpenVINO through a plugin that we provide with that platform. So, as part of that, what our engineering team did is they took the code in the OpenVINO Notebooks repo from Python, converted it to C++, and then were able to deploy it as part of Audacity.

So now you’re getting even more of that performance and memory improvement, but you’re also having it integrated directly into the same workflow that many different people who are looking to edit and play around with audio are leveraging. So that means that you just highlight a sound bite and then you say “Generate” with OpenVINO, and then it’ll generate the rest of it, and you’re able to compare and contrast.

So, to me, that’s an example of workflow integration, which can eventually, again, be used for artist workflows, all the way to creating synthetic audio for voice production as part of the movie industry; or, again, interactive kiosks as part of the retail industry for being able to communicate back and forth; or patient-practitioner conversations as part of healthcare. So I’d say that that seamless integration into workflows is the next step that Intel is very excited to drive and help collaborate on.

Christina Cardoza: Yeah, that’s a great point. Because beginner developers can leverage some of these notebooks or at least samples to get started and start playing around with generative AI, especially generative AI for audio. But then when they’re ready to take it to the next level, ready to scale, Intel is still there with them, making sure that everything is running smoothly and as easily as possible, so they can continue on their generative AI journey.

I know in the beginning we mentioned how a lot of people consider, like, ChatGPT or text-based AI as generative AI, when it really encompasses all of these other forms as well. So I think it’s probably still early days in this space, and I’m looking forward to the additional opportunities that are going to come. I’m curious, from your perspective, where do you think this space is going in the next year or so? And what is the future of generative AI, especially generative AI for audio? And how do you envision Intel is going to play a role in that future?

Ria Cheruvu: Sure. And I completely agree. I think it’s blink-and-you-may-miss-it when it comes to generative AI for audio, even with the growth of the OpenVINO Notebooks repository. As an observer and a contributor, it’s just amazing to see how many generative AI workloads around audio have continued to be added, and some of the interesting ways that we can implement and optimize them.

But I’d say, just looking into the near future, maybe the end of this year or next year, some of the developments that are popping up are definitely those workflows that we think about. Now we have these models and these technologies, and we’re seeing a lot of companies in the industry creating platforms and toolboxes, as they call them, for audio editing and audio generation using generative AI. So I would say that identifying where exactly you want to run these workloads—is it on your local system, or is it on the cloud, or some sort of mix of it?—is definitely something that really interests me, as I mentioned earlier.

And with Intel, some of the things that we are trying are around audio generation on the AI PC with Intel® Core Ultra and similar types of platforms: what can you achieve locally when you’re sitting in a room, prototyping with a bunch of fellow artists for music, just playing around and trying some things? And ideally you’re not having to access the cloud for that, but you’re actually able to do it locally, export it to the cloud, and move your workloads back and forth.

So I’d say that really is the key of it: what exactly is going to happen with generative AI for audio? How do we incorporate our stakeholders as part of that process—whether we’re, again, generating audio for these avatars—and how do we create that, instantiate that, and then maintain it over time? I think these are a lot of the questions that are going to be coming up in the next year. And I’m excited to be collaborating with our teams at Intel and across the industry to see what we’re going to achieve.

Christina Cardoza: Great. And I love that you mentioned maintaining it over time. Because we want to make sure that anything that we do today is still going to make sense tomorrow. How can we future-proof the developments that we’re doing? And Intel is always leading the way to make sure that developers can plug and play or add new capabilities, make their solutions more intelligent without having to rewrite their entire application. Intel has always been great at partnering with developers and letting them take advantage of the latest innovations and technologies. So I can’t wait to see where else the company takes this.

We are running a little bit out of time. So, before we go, I just want to ask you one last time, Ria, if there’s anything else about this space that we should know, or there’s any takeaways that you want to leave our listeners with today.

Ria Cheruvu: I think one takeaway is exactly what you said: there are a lot of steps toward being able to enable and deploy generative AI. It’s kind of a flashy space right now, but almost everyone sees the value that we can extract out of it if we have that future-proof strategy and mindset. I couldn’t have phrased it better in terms of the value we want to provide to the industry, which is really being able to hold the hands of developers, show you what you can do with the technology and the foundations, and help you every step of the way to achieve what you want.

But I’d say, based on everything that we’ve gathered up until now, as I mentioned earlier, generative AI for audio specifically, and generative AI in general, is just moving so fast. So keeping an eye on the workloads, evaluating, testing, and prototyping is definitely key as we move forward into this new era of audio generation, synthetic generation, and so many more of these exciting domains.

Christina Cardoza: Of course we’ll also be keeping an eye on the work that you’re doing at Intel. I know you often write and publish a lot of blogs on the Intel or OpenVINO media channels and different areas. There are different edge reference kits that are published online every day. So we’ll continue to keep an eye on this space and the work that you guys are doing.

So, just want to thank you again for joining us on the podcast and for the insightful conversation. And thank you to our listeners for tuning into this episode. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Computer Vision Improves Galvanizing Plant Efficiency

What can computer vision and edge AI bring to a centuries-old industrial process? As galvanizing plant managers are discovering, these innovative technologies help deliver greater operational efficiency, cost savings, improved worker safety, and powerful sustainability benefits.

Galvanization, the process of coating steel and iron with zinc to prevent corrosion, first came into widespread use in the mid-nineteenth century—though the underlying chemistry was described as early as the 1740s. Now, advancements in AI are helping galvanizing businesses transform this old but still vital industrial process.

“Many galvanizing plants today still lack a unified tracking mechanism to monitor productivity,” says Harishankar Durairaj, Head of Technology and Product at SeeWise.AI, a computer vision specialist that offers digital transformation solutions to the galvanizing industry. “Using computer vision helps them to overcome that weakness—and also provides rich data insights that can be used to improve operational efficiency and worker safety.”

One Galvanizing Plant’s Real-World Results

SeeWise.AI’s deployment with a large galvanizing business in India is a case in point. The galvanizing plant in question is a large-scale operation, but despite its success, the business faced many challenges. It had no unified solution to monitor factory operations in real time. Production status updates had to be obtained by supervisors going out to the floor to check on progress manually. In a large facility comprising 20 separate tanks for dipping, rinsing, and other stages in the galvanizing process, as well as materials storage and shipping areas, this proved extremely cumbersome. The lack of a centralized monitoring capability also made it hard to identify downtime issues and safety violations.

Working with the company’s management and operations teams, SeeWise.AI developed a comprehensive solution based on its True AI Powered Smart Factory platform to address these gaps. They installed a network of CCTV cameras to acquire a visual data stream from all areas of the plant. Using computer vision at the edge, the system analyzes this data to obtain real-time production data, detect machine downtime, and spot unsafe behavior by employees. The edge deployment helped reduce latency and enhance data security, and SeeWise engineers also took care to mask the biometric data of factory workers to protect their privacy.

The solution is designed to send real-time #automated alerts to supervisors when problems are discovered, such as an idle #machine that might point to a downtime issue. @SeeWiseAI via @insightdottech

The solution is designed to send real-time automated alerts to supervisors when problems are discovered, such as an idle machine that might point to a downtime issue. Alerts are handled through direct integration with operational equipment (such as buzzers or alarms on the factory floor) or via a centralized dashboard or mobile app.

The system also gives the galvanizing business greater insight into its operations. “We trained the AI model to monitor different steps in the production process, such as when a metal beam was immersed in a chemical tank, how long it was there, when it left the tank, and how long it sat waiting for someone to transport it to another area of the facility,” explains Durairaj. “This allowed management to identify bottlenecks and inefficiencies in the production process and improve on them.”

The implementation of the AI-powered solution, running on Intel-powered industrial computers, has led to rapid improvements. For one thing, it can detect and correct downtime issues in real time and ensure worker compliance with safety protocols. In addition, plant managers have used their newfound visibility into the production process to significantly improve operational efficiency. The plant’s overall equipment effectiveness (OEE) increased by 11% within the first three months. By month four, the business had achieved full ROI on the solution.

Making Computer Vision at the Edge More Cost-Effective

The software developed by SeeWise.AI is engineered to be agnostic to the input data source. Put another way, the system simply processes whatever video data it’s sent—without caring where that data stream came from or what brand of camera was used. That’s a significant advantage, because many galvanizing plants already have CCTV coverage for general security and monitoring purposes. Those existing video feeds can easily be repurposed to enable AI-powered productivity management and analytics, greatly reducing the initial capital expenditure costs of implementing a computer vision solution.
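
A minimal sketch of what that source-agnostic design can look like in practice (illustrative, not SeeWise.AI’s code): OpenCV reads whatever stream it is pointed at, whether an RTSP URL from an existing CCTV camera, a USB device index, or a recorded file, and hands frames to a hypothetical OpenVINO detection model.

```python
# Illustrative sketch, not SeeWise.AI's code. Assumes OpenCV ("opencv-python"),
# OpenVINO, and a hypothetical detection model "zone_monitor.xml"; the RTSP URL
# is a placeholder for any existing CCTV feed.
import cv2
import openvino as ov

SOURCE = "rtsp://camera-host/stream1"   # could also be a USB camera index or a video file
core = ov.Core()
model = core.compile_model("zone_monitor.xml", device_name="AUTO")

cap = cv2.VideoCapture(SOURCE)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the (assumed) model input and run inference; the pipeline never needs
    # to know which camera brand or vendor produced the frame.
    blob = cv2.resize(frame, (640, 640)).transpose(2, 0, 1)[None].astype("float32")
    detections = model(blob)[model.output(0)]
    # ...post-process detections here: flag idle machines, safety violations, etc....
cap.release()
```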

In addition, solutions like the Intel® OpenVINO toolkit help software developers build their AI models more efficiently. This offers further cost savings when first deploying a solution, because optimized, accurate AI models require fewer computing resources.

“OpenVINO has helped us tremendously with AI model efficiency and quantization,” says Durairaj. “The hardware we used to need to support five cameras can now support as many as 10 cameras. That’s a direct result of the benefits offered by OpenVINO.”

Computer Vision in Galvanizing, Industry, and Beyond

The next decade will see a wave of digital transformation in the manufacturing sector—and computer vision technology will play a big role, both in galvanizing and other industries.

The upside is enormous, both in terms of operational efficiency and profitability. And just as importantly, computer vision also has the potential to deliver crucial sustainability benefits.

“If you’re manufacturing a galvanization product, but your process is inefficient, or you’re experiencing a high rate of defects, what happens? You contribute to increasing the global carbon footprint,” says Durairaj. “Our mission is to make the planet more efficient, and computer vision is a powerful tool to make this happen.”

Computer vision, then, isn’t just helping to bring the technologies of the past into the present. It’s doing so in a way that will help create a more profitable and sustainable future.

Edited by Georganne Benesch, Editorial Director for insight.tech.

Hotel Robots Transform the Guest Experience

Guests arrive at a hotel filled with positive expectations. Awaiting them is a freshly made bed—and maybe a nice meal or a visit to a local attraction. But one experience that no one enjoys is maneuvering luggage into an elevator and hauling it down narrow corridors to their room.

At one forward-thinking hotel in Taiwan—which could be a harbinger of the future—guests don’t need to lift a finger to transport their luggage. A robot does it for them.

In an era when hotels are severely short-staffed and eager to distinguish themselves amid fierce competition, having autonomous mobile robots (AMRs) that haul, store, and fetch guest luggage on command just might be a ticket to success. If so, they could catch on at other venues, transferring suitcases across airports and train stations, or delivering medicine to hospital rooms.

“Created to sort goods in warehouses and #factories, #AMRs increase efficiency and improve #safety by eliminating the need for humans to perform rote manual tasks” – Hoe Seng Ooi, NexAIoT via @insightdottech

Autonomous Mobile Robots Rolled into Hotels

Created to sort goods in warehouses and factories, AMRs increase efficiency and improve safety by eliminating the need for humans to perform rote manual tasks, says Hoe Seng Ooi, Chief Technology Officer at NEXCOM International subsidiary NexAIoT, a developer of IoT solutions. After nearly a decade of providing factories with industrial automation and robotics solutions, NexAIoT turned its attention to the hotel industry, which was struggling to attract and retain labor just as post-pandemic tourism surged. Tweaking and experimenting with its technology, the company hit upon a way to solve the problem of luggage storage and delivery with its NexMOV Smart Hotel Autonomous Mobile Robot.

“By using NexMOV, hotels can streamline operations, optimize staff efficiency, and deliver an unforgettable personalized experience for their guests,” Ooi says. The system swings into action as soon as guests check in, and they can use it at any time during their stay.

At the Taiwan hotel where NexMOV is deployed, no employees work in the lobby. Guests check in at a kiosk, where they receive a QR code for a container the robot will use for transporting their luggage. The container locks automatically after a guest deposits a suitcase, which can then be delivered to their room by a robot or lifted by a robotic arm into a storage area.

Because the hotel is near a tourist-hotspot market that’s open at night, many guests choose the storage option. Being able to set off immediately for the market—or to a restaurant or a night on the town—and receive their bags later is a convenience guests greatly appreciate, Ooi says. They can also use the system to dash to the market after checking out. And guests who arrive early have a secure place to safely store their bags before their room is ready.

Once guests arrive in their room, they use a screen and a voice-based virtual assistant, similar to Amazon’s Alexa, to request their luggage. A NexMOV robot then retrieves the container holding the guest’s suitcase from storage, travels to an elevator bank, and calls for a car. If the car arrives with passengers aboard, the robot says, “Please come out and let NexMOV use the lift.” When the car is empty, the NexMOV slides into the close-fitting space and electronically chooses a floor. While a robot is inside, the elevator won’t stop to board other passengers.

After exiting, the NexMOV robot makes its way along hotel corridors to the correct room, where the virtual assistant notifies the guest it has arrived and provides a code for unlocking the container. Its mission completed, the NexMOV retrieves the empty container and navigates to a charging station, where it plugs itself in to be ready for its next job.

Coordinating with Hotel Automation

While NexMOV is simple for guests to use, its underlying technology is complex. Within each Intel® processor-based robot, edge AI and computer vision software serve as its “brains,” maintaining a seamless connection with hotel infrastructure, including the check-in system, the storage area, and the elevators. The Intel® Distribution of OpenVINO toolkit streamlines system development—enabling NexAIoT to bring the hotel AMR to market more quickly.

Robots are pre-programmed with a map of the hotel’s interior and use Lidar and ultrasound to navigate, avoiding people and obstacles along the way. An Intel® RealSense computer vision camera mounted on their “head” enables them to detect people inside the elevator. NexAIoT’s software also monitors robot movements to ensure there are no glitches.
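
As a simplified illustration of that detection step (not NexAIoT’s production software), the sketch below grabs a color frame from a RealSense camera with the pyrealsense2 package and runs it through a hypothetical OpenVINO person-detection model to decide whether the elevator car is empty; the model file, input handling, and output layout are assumptions.

```python
# Illustrative sketch, not NexAIoT's production code. Assumes the "pyrealsense2"
# package, OpenVINO, and a hypothetical person-detection model whose output rows
# end in a confidence value.
import numpy as np
import pyrealsense2 as rs
import openvino as ov

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

detector = ov.Core().compile_model("person-detection.xml", device_name="AUTO")

frames = pipeline.wait_for_frames()
color = np.asanyarray(frames.get_color_frame().get_data())    # HWC BGR frame
# Resizing to the model's exact input shape is omitted for brevity.
blob = color.transpose(2, 0, 1)[None].astype("float32")       # to NCHW
result = detector(blob)[detector.output(0)]
elevator_empty = not any(det[-1] > 0.5 for det in result[0])   # assumed confidence field
pipeline.stop()

print("Elevator empty:", elevator_empty)
```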

Hospitality and Beyond

AMRs don’t just move luggage—they can also entertain people. At the Taiwan hotel, if a guest arrives on their birthday, a NexMOV robot might play a Happy Birthday song or dance to music in the lobby. Robots are also decorated with colorful, cartoon-like graphics, making them a hit with kids.

Advertising the NexMOV has helped the hotel attract business, especially from families. “Competition in the industry is rising, and this allows them to offer something unique,” Ooi says.

While NexAIoT created the birthday feature as part of an all-inclusive package for the hotel, systems integrators could add different customized functions for other properties. “We partner with systems integrators around the world,” Ooi says.

As AI capabilities improve and robots learn from their experience, they will be able to tackle more advanced tasks. Ooi expects to see them vacuuming rooms and cleaning toilets. “Because hotels are so short-staffed, it will happen quickly,” he says.

In the future, robots could be programmed to transport luggage across airports. Train and bus systems could use them to roam through vehicles and spot burnt-out lightbulbs and other maintenance problems. At hospitals, they could distribute equipment and medicines whenever and wherever they’re needed.

“Tasks like these require a lot of human effort, but a robot can do them quite easily,” Ooi says. “We will definitely see more demand for autonomous mobile robots.”


Edited by Georganne Benesch, Editorial Director for insight.tech.

Medical Panel PCs Fulfill Hospitals’ Rigorous Requirements

From MRIs and CT scanners to electrocardiograms, oximeters, and blood pressure monitors, today’s medical machines collect an enormous amount of data about patient health. But getting that information into the hands of hospital doctors and nurses who need it is often a struggle.

One problem is that medical machines and devices are made by a wide range of manufacturers. As a result, the information they produce is scattered in incompatible formats across multiple databases and IT systems, making it hard for doctors and nurses to assemble a coherent picture of patient health.

Another issue is hospitals’ restrictive hardware requirements. Nursing stations, operating rooms, and intensive care units—as well as the labs and pharmacies they connect with—must use computers that meet rigorous hygienic standards. And to successfully transmit vital patient data, the computers must also be very fast, reliable, easy to use, and secure.

Modern medical Panel PCs are designed to meet these challenges. They support data integration from a diverse range of medical equipment via an all-in-one, compact computer that meets hospital sanitary, usability, security, and reliability requirements. And they have the computing power to run the sophisticated medical software that provides caregivers the data they need not only to respond to emergencies but to consistently measure patient progress.

Unifying Patient Monitoring Solutions

Because healthcare technology continuously advances, most hospitals contain a wide range of medical equipment makes and models.

“Hospitals modernize their technology in phases, so they have a very heterogeneous hardware and software environment. Communication protocols are a big pain point,” says Guenter Deisenhofer, Product Manager at Kontron AG, a manufacturer of IoT equipment.

Machines that don’t intercommunicate erode procedural efficiency and make treatment decisions difficult—a problem Deisenhofer recently experienced firsthand after taking his son to a local hospital. First, an intake worker measured the boy’s heart rate, blood pressure, and oxygen level, recording the information on a slip of paper. His son was later seen by a doctor, who took his vital measurements again, writing them on a different paper before sending him off for X-rays—where the process was repeated yet again.

“At the end of the day, there were probably five pieces of paper. They had not continuously monitored his condition and nobody had an overview of it,” Deisenhofer says.

Fortunately, the situation turned out not to be serious. But with vital information arriving from different devices at different times—whether it’s recorded on paper or encoded in incompatible software programs—doctors can never be sure of what they might miss.

Medical edge computers, such as Kontron’s MediClient Panel PC, close the information gap, using a standard protocol to integrate data from machines, wearables, and patient health records. The Panel PC satisfies hospitals’ strict sanitary regulations and is readily accessible to caregivers, even if they’re wearing gloves. It conveys information from patient monitoring machines through wired or wireless connections to hospital communication hubs, such as nursing stations. High-performance Intel® processors enable the monitoring machines’ software to run near-real-time analytics on incoming results, helping doctors and nurses see patterns and spot anomalies that may suggest a diagnosis or point to the need for specific tests.

“With continuous monitoring data available, doctors aren’t just reacting to an emergency. They can see, for example, if the heart rate is dropping and recovering over time. It helps them make better diagnostic and treatment decisions,” Deisenhofer says.

Monitoring can continue after patients are released from the hospital, with wearable devices seamlessly transmitting their information to the MediClient, where it can be integrated with patients’ previous records.

With hospitals increasingly relying on advanced #medical equipment, machine #manufacturers must be vigilant in keeping them updated, both to enable new #IoT functions and to guard against the latest cyberthreats. @Kontron via @insightdottech

Improving Machine Life Cycle Management with Medical Panel PCs

With hospitals increasingly relying on advanced medical equipment, machine manufacturers must be vigilant in keeping them updated, both to enable new IoT functions and to guard against the latest cyberthreats. Making these changes often involves not only software but firmware and sometimes hardware. That means devices like the MediClient PC must also be updated to continue providing hospitals the machines’ vital data.

As technology innovation accelerates, machines that were built to last 10 or 15 years often require several major updates. “Machine life cycle is getting more and more difficult to manage,” Deisenhofer says.

Kontron works closely with medical equipment manufacturers to keep up with planned changes and is often included in early product planning cycles. Because every hardware change requires extensive testing and recertification, close communication saves time. Manufacturers can get their products recertified in one go, instead of having to do so again after making modifications. Kontron also does third-party software installation and electrical testing of manufacturers’ equipment to help them resolve potential problems before release.

Collaboration with Kontron allows manufacturers to deliver upgrades to hospitals sooner—and hospitals can integrate the new capabilities into their medical systems as soon as they are available.

Bringing New Capabilities to the Edge

Working together, OEMs and Panel PC makers can extend the value of monitoring machines as technology advances. The more data the machines collect, the more system builders can improve their AI software, reducing false alarms and pinpointing problems. “For example, you could use AI to create warning scores for recognizing conditions well in advance of a critical situation,” Deisenhofer says.
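
As a purely illustrative example of that warning-score idea (not Kontron’s or any OEM’s algorithm, and not clinical guidance), a simple rule-based score computed at the edge from continuously monitored vitals might look like the following; the thresholds are placeholders.

```python
# Illustrative sketch only: a rule-based warning score computed from streamed vitals.
# Thresholds are placeholders for demonstration, not clinical guidance.
def warning_score(heart_rate: float, spo2: float, systolic_bp: float) -> int:
    score = 0
    if heart_rate > 110 or heart_rate < 50:
        score += 2
    if spo2 < 94:
        score += 2
    if systolic_bp < 100:
        score += 1
    return score

# A trend of rising scores across successive readings is what lets caregivers act early.
readings = [(78, 98, 122), (96, 95, 114), (118, 92, 98)]
scores = [warning_score(*r) for r in readings]
print(scores)  # [0, 0, 5] -> escalate to the nursing station
```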

And the processing speed of edge Panel PCs will improve, helping caregivers respond to identified health threats sooner.

Benefits like these suggest a bright future for patient monitoring machines and the medical Panel PCs that connect their data to doctors and nurses. As Deisenhofer says, “Our device cannot make decisions. But by bringing all the data together at the edge, it can help doctors make the right decisions.”

Edited by Georganne Benesch, Editorial Director for insight.tech.