AI-Powered Medical Imaging Solutions Advance Healthcare

The use of edge AI in medical imaging offers the possibility of enormous benefits to stakeholders throughout the healthcare sector.

On the provider side, edge AI imaging can improve diagnostic accuracy, boost physician efficiency, speed case processing timelines, and reduce the burden on overstretched medical personnel. Patients benefit from shorter wait times for their diagnostic test results and a better overall quality of care.

But it can be challenging to develop the AI-powered solutions needed to make this promise a reality. The computing requirements to implement edge AI in medicine are high, which has historically made adequate computing resources both difficult and expensive to obtain. It can also be hard to customize the underlying hardware components well enough to suit medical imaging use cases.

It’s a frustrating situation for anyone wanting to offer innovative AI-enabled imaging solutions to the medical sector—because while the market demand certainly exists, it’s not easy to build products that are effective, efficient, and profitable all at the same time.

But now independent software vendors (ISVs), original equipment manufacturers (OEMs), and system integrators (SIs) are better positioned to innovate edge AI-enabled medical imaging solutions. The prevalence of rich edge-capable hardware options and the increasing availability of flexible AI solution reference designs make this possible.

AI Bone Density Detection: A Case Study

The AI Reasoning Solution from HY Medical, a developer of computer vision medical imaging systems, is a case in point. The company wanted to offer clinicians an AI-enabled tool to proactively screen patients for possible bone density problems so that timely preventive steps could be taken.

They needed an edge AI deployment that would put the computational work of AI inferencing closer to the imaging devices, thereby reducing network latency and bandwidth usage while ensuring better patient data privacy and system security. But there were challenges.

The edge computing power requirements for a medical imaging application are high due to the complexity of the AI models, the need for fast processing times, and the sheer amount of visual data to be processed.

In addition, developing an AI solution for use in medical settings posed special challenges: an unusually high demand for stability, the need for waterproof and antimicrobial design elements, and the requirement that medical professionals approve the solution before use.

The solution can automatically measure and analyze a patient’s bone density and tissue composition based on the #CT scan data, making it a valuable screening tool for #physicians. HY Medical (Huiyihuiying) via @insightdottech

HY Medical leveraged Intel’s medical imaging AI reference design and Intel® Arc graphics cards to develop a solution that takes image data from CT scans and then processes it using computer vision algorithms. The solution can automatically measure and analyze a patient’s bone density and tissue composition based on the CT scan data, making it a valuable screening tool for physicians.

The solution also meets the stringent performance requirements of the medical sector. In testing, HY Medical found that their system had an average AI inference calculation time of under 10 seconds.

Intel processors offer a powerful platform for medical edge computing, which allows the company to meet its performance goals with ease. Intel technology also provides tremendous flexibility and stability, enabling the wide-scale application of this technology in bone density screening scenarios.

Reference Designs Speed AI Solution Development

HY Medical’s experience with developing their bone density screening solution is a promising story—and one that will likely become more common thanks to the availability of AI reference designs. These reference architectures make it possible for ISVs, OEMs, and SIs to develop medical imaging solutions for a hungry market both quickly and efficiently.

Intel’s edge AI inferencing reference design for medical imaging applications supports this goal in several ways:

Tight integration with high-performance edge hardware: Ensures that solutions built with the reference design will be optimized for computer vision workloads at the edge. The result is improved real-world performance, better AI model optimization for the underlying hardware, and increased energy efficiency.

Flexible approach to AI algorithms: Because different software developers work with different tools, multiple AI model frameworks are supported. Models written in PyTorch, TensorFlow, ONNX, PaddlePaddle, and other frameworks can all be used without sacrificing compatibility or performance.

AI inferencing optimization: The Intel® OpenVINO toolkit makes it possible to optimize edge AI models for faster and more efficient inferencing performance (see the sketch following this list).

Customized hardware support: The reference design also factors in the special needs of the medical sector that require customized hardware configurations—for example, heat-dissipating architectures, low-noise hardware, and rich I/O ports to enable connection with other devices in clinical settings.
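To make the framework flexibility and OpenVINO optimization concrete, here is a minimal sketch of what that workflow might look like for a solution builder. The model file, target device, and input handling are illustrative assumptions, not part of the reference design itself.

```python
# Minimal sketch: load a model exported from PyTorch/TensorFlow/ONNX/PaddlePaddle
# and compile it for edge inference with OpenVINO. The file name and target
# device are illustrative assumptions.
import openvino as ov

core = ov.Core()

# ONNX and other supported formats can be read directly; ov.convert_model()
# also accepts in-memory PyTorch or TensorFlow models.
model = core.read_model("imaging_model.onnx")  # hypothetical exported model

# Compile for the edge hardware available on the device, e.g. an Intel GPU.
compiled = core.compile_model(model, device_name="GPU")

# Inference on a preprocessed image batch (NCHW float32 array):
# result = compiled([input_batch])[compiled.output(0)]
```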

Reference architectures such as this one shorten time-to-market and reduce the inherent risks of the product development phase, giving innovators a clear path to rapid, performant, and profitable solution development. That’s a win for everyone involved—from solutions developers and hospital administrators to frontline medical professionals and their patients.

The Future of AI in Medical Imaging

The ability to develop innovative, tailored solutions quickly and cost-effectively makes it likely that far more AI-enabled medical imaging solutions will emerge in the coming years. The potential impact is huge, because medical imaging covers a lot of territory—from routine screenings, preventive care, and diagnosis to support for physicians treating diseases or involved in medical research.

Hospitals will be able to use this technology to improve their medical image reading capabilities significantly while reducing the burden on doctors and other medical staff. The application of edge AI to medical imaging represents a major step forward for the digital transformation of healthcare.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

5G Private Networks Close the Connectivity Gap

In almost every sector, there is a push for digital transformation—from manufacturing to healthcare, smart cities, and beyond. Fast, reliable data transfer at the network edge is essential to these efforts. But high-capacity networking is challenging in large, widespread environments or remote operating areas, where traditional wired and wireless network solutions fall short.

“Wi-Fi networks leave coverage gaps and cause latency issues,” says Raymond Pao, Senior VP of Business Solutions at HTC, a provider of connected technology, virtual reality, and 5G networking solutions. “And while commercial 5G networks undeniably offer excellent speed and bandwidth, in many cases they aren’t a viable alternative due to the need for dedicated connections, locality, or security concerns.”

The good news is that private 5G networks deliver high-bandwidth, low-latency connectivity in such scenarios. They offer dedicated, customizable, secure, and performant networks that enable a wide range of digital transformation applications in challenging edge environments. And now, private 5G solutions based on open software and networking standards can help companies deploy applications faster.

Private #5G solutions based on open #software and #networking standards can help companies deploy applications faster. @htc via @insightdottech

Private 5G Enables Factory AGV Solution

HTC’s deployment at a factory in Taiwan is a case in point. A maker of high-end digital displays wanted to implement autonomous guided vehicles (AGVs) in their manufacturing facility. But the proposed solution required seamless network connectivity over a large working area.

The company explored the possibility of using multiple Wi-Fi routers to build a network large enough to cover the entire factory floor. But this approach was ruled out because latency issues would often cause AGVs to stop during handoff between access points. In addition, the Wi-Fi network was not always reliable, leading to concerns over downtime.

Working with the manufacturer, HTC set up a dedicated 5G network to deliver the high-capacity, high-performance connectivity needed to run the AGV solution. Post-deployment, the manufacturer found that the network more than met their needs—and led to significant cost savings as well.

“The integration of AGVs and private 5G networking provides the real-time data needed to improve decision-making and streamline the flow of materials within the factory,” says Pao. “Because of this, our client improved its operational efficiency and has substantially cut down on labor expenses.”

All-in-One Hardware and a Collaborative Approach

It would be wrong to imply that setting up a 5G network is ever easy—but all-in-one hardware offerings and the collaborative approach of providers like HTC help to simplify the process.

For example, HTC’s Reign Core series, a portable networking system that the company describes as “5G in a box,” provides all the necessary physical infrastructure to implement a private 5G network in a compact, 20kg hand-carry case.

The company also offers extensive support to systems integrators (SIs) and enterprises looking to develop custom 5G-enabled applications. This includes an initial needs assessment, help with building and testing a proof-of-concept system and software applications, and support for optimizing the solution as deployments scale.

HTC’s 5G Reign Core solution is also compliant with 3rd Generation Partnership Project (3GPP) mobile broadband and O-RAN ALLIANCE standards. This facilitates the incorporation of components from other vendors that build to the same standards, allowing for more flexible solution development and greater customization. For those developing virtual reality (VR) applications based on HTC’s VIVE VR headsets, the company also grants access to their proprietary VIVE Business Streaming (VBS) protocol for optimized data transfer.

The combination of flexible, self-contained infrastructure, extensive engineering support, open standards, and access to proprietary protocols enables businesses and SIs to create a wide range of 5G-powered use cases—from ICT in manufacturing to VR applications in training, design, and entertainment (Video 1).

Video 1. 5G private networks enable VR for manufacturing, training, design, and other use cases. (Source: HTC)

Partner Ecosystem Drives 5G Transformation

Private 5G solutions enable digital transformation across many industries. In large part, this is due to the mature ecosystem of technology partnerships that support them.

HTC’s partnership with Intel is a good example of this. “We use the Intel® FlexRAN reference implementation to handle processing in our baseband unit (BBU),” says Pao. “FlexRAN efficiently implements wireless access workloads powered by Intel® Xeon® Scalable processors, giving us flexible and programmable control of our wireless infrastructure.”

By building within the FlexRAN partner ecosystem, HTC also gains access to a wide network of potential hardware providers, including server and radio unit vendors. This makes it straightforward for the company’s engineers to develop customized solutions when working with SIs, regardless of the vertical they’re selling to.

This is one reason the company foresees potential 5G networking applications in sectors such as logistics, defense, and aerospace—and a far more connected world in the years ahead.

“Digitization is happening in every sector, so wireless communication will become much more important in the future,” says Pao. “For customized use cases that demand secure, high-bandwidth, low-latency connectivity, private 5G is going to be a powerful force for digital transformation.”

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Semi-Industrial PCs Power Outdoor Digital Signage Solutions

Demand for outdoor digital signage solutions is growing as businesses and brands look for innovative ways to boost visibility and impact.

Outdoor digital signage offers new opportunities to engage and communicate with customers in high foot-traffic areas—such as stadiums, golf courses, ski resorts, and even car parks and zoos—regardless of a business’s operating hours. These outdoor displays offer continuous access to product or brand information, point-of-sale services, and customer assistance around the clock, throughout the year.

But despite these many benefits, the substantial computing resources required to implement outdoor digital signage make successful deployment inherently difficult.

“Outdoor computing can be a challenge due to environmental factors such as extreme temperature ranges, unpredictable weather, and the risk of vandalism and damage,” says Kenny Liu, Senior Product Manager at Giada, an AIoT and edge computing specialist that provides digital signage and embedded computing products to enterprises. “There are also special power, connectivity, and space requirements to consider.”

Due to these challenges, outdoor digital signage solutions cannot operate on traditional embedded PCs. They require the use of semi-industrial PCs (IPCs) specifically designed to perform reliably at the edge and in harsh environmental conditions. These powerful, ruggedized computing platforms enable outdoor digital signage and digital kiosks across many different industries—opening up new business opportunities for solutions developers (SDs) and systems integrators (SIs) alike.

Semi-Industrial PCs Offer Rugged Design and Performant Computing

The success of semi-industrial PCs as a platform for outdoor digital signage solutions comes from the way that they combine ruggedization with high-performance computing.

These powerful, ruggedized computing platforms enable outdoor #DigitalSignage and digital #kiosks across many different industries—opening up new business opportunities. @Giadatech via @insightdottech

Giada’s AE613 semi-industrial PC, for example, offers several design features to support outdoor use cases:

  • Wide operating temperature range makes the computer suitable for almost any geography.
  • Flexible power input voltage helps to ensure a constant power flow.
  • Rugged, fanless design offers maximum durability, reliability, and space efficiency.

But “under the hood,” the AE613 also provides a high-performance computing platform for outdoor digital implementations. Powered by 13th Gen Intel® Core processors, the semi-IPC supports 8K resolution for high-quality visuals and multimedia applications. It can also handle the heavier processing workloads required to offer users an interactive experience (Video 1).

Video 1. Rugged embedded PCs enable high-quality displays and interactive digital solutions in challenging outdoor settings. (Source: Giada)

Unleashing Computer Vision at the Edge

Semi-industrial PCs are meant to support an extensive variety of solutions and applications, so they are built for easy integration with other components and peripherals. This adaptability, combined with high-performance processing capabilities, can be used to implement advanced computer vision use cases at the edge.

Giada’s computing platform, for instance, comes with multiple I/O options to allow for external device connections, including high-definition cameras. The semi-IPC can run specialized computer vision algorithms and software to process and analyze visual data captured by cameras and respond accordingly based on user behavior and characteristics.

This opens the door to solutions that offer real-time analytics, biometrics, behavioral detection, and more. For example, an SD could use biometrics to securely authenticate users of a smart kiosk—or show them personalized content and advertisements. An outdoor digital display could leverage computer vision to detect customer behavior and respond to it in real time, providing an intelligent, interactive, and personalized signage system.
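As a rough illustration of how such a pipeline might be wired together on a semi-IPC, the sketch below captures frames from a connected camera and checks a person-detection model’s output to decide whether anyone is in front of the display. The model file, its output layout, and the confidence threshold are assumptions for illustration; this is not Giada’s implementation.

```python
# Illustrative sketch: adapt signage content based on audience presence.
# The detection model, its SSD-style output layout, and the threshold are
# assumptions; this is not Giada's implementation.
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
detector = core.compile_model("person-detection.xml", "GPU")  # hypothetical IR model
out_layer = detector.output(0)

cap = cv2.VideoCapture(0)  # HD camera attached to one of the semi-IPC's I/O ports
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and reorder the frame to the model's assumed NCHW 320x320 input.
    blob = cv2.resize(frame, (320, 320)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = detector([blob])[out_layer]
    audience_present = bool((detections[..., 2] > 0.5).any())  # assumed [..., conf, box] layout
    # A real solution would switch content here: interactive prompts when
    # audience_present is True, ambient brand content otherwise.
cap.release()
```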

Giada’s partnership with Intel is essential to supporting these sophisticated edge use cases. “Intel® processors deliver superior processing at the edge while also minimizing energy consumption, enabling users to handle demanding edge tasks and applications efficiently,” says Linda Liu, Vice President of Giada. “Our partnership allows us to leverage Intel’s industry-leading expertise in processor technology and innovation—and meet or exceed the expectations of our customers for reliability, power efficiency, and overall performance.”

Future Opportunities for Outdoor Digital Signage Solutions

The adaptability of semi-industrial PCs means they will find a wide range of use cases. This is especially good news for SIs and SDs, who will have opportunities to sell into many sectors that need outdoor digital signage and kiosks: hospitality, food and beverage, retail, entertainment, transportation, smart cities, and more.

Giada expects demand for outdoor digital solutions to increase in the years ahead, and is already preparing for what’s to come.

“We’re planning to release even more embedded computing products for outdoor digital signage and outdoor digital kiosks,” says Linda Liu. “Our engineers will support customers as they choose the best embedded computers for outdoor applications, and in some cases help them build custom solutions to meet their unique needs.”

Digital transformation is reshaping nearly every industry. The integration of semi-industrial PCs, edge computing, and computer vision technology will help ensure that the benefits of innovation are accessible everywhere.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Reimagining Supply Chain Management from Edge to Cloud

Today’s manufacturers have embraced digital supply chain management solutions like enterprise resource planning (ERP) software, manufacturing execution systems (MES), and warehouse management systems (WMS). These tools have increased efficiency and saved manufacturers time and money. But some serious challenges remain.

For one thing, the digital supply chain management technologies used by manufacturers are often difficult to integrate, resulting in fragmented solutions. Moving critical data from one system to another—and then turning that information into manufacturing plans and schedules—often relies on inefficient, time-consuming manual processes.

It’s also hard to obtain real-time data from production facilities—a crucial piece of the puzzle for monitoring and optimizing the supply chain. “Gathering data from the factory floor is notoriously difficult due to the high computing requirements,” says Kun Huang, CEO at Shanghai Bugong Software Co., Ltd., a software company offering a manufacturing supply chain management solution.

Recent advances in edge computing help companies like Bugong deliver comprehensive edge-to-cloud supply chain management solutions for manufacturers—and early results have been extremely promising.

The key to integrated digital #SupplyChain management is an edge-to-cloud #architecture. Shanghai Bugong Software Co., LTD via @insightdottech

Supply Chain Management from Edge to Cloud

The key to integrated digital supply chain management is an edge-to-cloud architecture. This requires both computing capacity at the edge and data pipelines to move information around—either between internal systems or to the cloud for further processing.

Bugong’s solution is a good example of how this works in practice. At the edge, industrial computers gather real-time production data and deploy intelligent production scheduling systems. These devices help manufacturers forecast capacity, optimize processes, implement lean management, and respond to unforeseen order changes or production issues immediately.

Industrial data systems like ERP, MES, and WMS are then linked together through a kind of digital pipeline—and are joined to the cloud as well. This facilitates free flow of data, both between internal systems and to powerful cloud processing software. In addition, it eliminates cumbersome manual data transfer processes.

In the cloud, a dashboard unifies supply chain data for visibility, analytics, automation, and decision-making. This enables round-the-clock monitoring, automated alerting when an unexpected event occurs, and rapid responses to emergencies or production changes.
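To give a sense of what one leg of such a pipeline could look like, here is a hedged sketch of an edge node publishing production telemetry to a cloud broker over MQTT. Bugong’s actual protocols and data model are not described here; the broker address, topic, and payload fields are assumptions for illustration.

```python
# Hedged sketch of one edge-to-cloud leg of a data pipeline using MQTT.
# The broker address, topic names, and payload fields are assumptions;
# they are not Bugong's actual data model.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("cloud-broker.example.com", 1883)  # hypothetical cloud broker
client.loop_start()

while True:
    # In practice this reading would come from MES/ERP/WMS connectors or shop-floor sensors.
    reading = {
        "line_id": "line-07",
        "machine_id": "cnc-112",
        "cycle_time_s": 42.5,
        "status": "running",
        "timestamp": time.time(),
    }
    client.publish("factory/line-07/telemetry", json.dumps(reading), qos=1)
    time.sleep(5)
```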

A system this complex, especially when it is deployed across multiple sites within a manufacturing business, will naturally require significant edge processing power as well as flexible implementation options. Intel technology was crucial in bringing Bugong’s solution to market. “Intel processors provide a reliable, versatile, and high-performance platform for edge computing,” says Kun Huang. “We found them an ideal foundation upon which to build a supply chain management solution.”

Supply Chain Management Solutions Deliver Real-World Results

The architectural details of edge-to-cloud supply chain management solutions may seem a bit abstract. But their integration, automation, and real-time response capabilities confer a wide range of practical benefits.

On the level of day-to-day operations, managers can calculate supply chain capacity and customer demand—and plan accordingly. That allows factories to commit to rational delivery times and make data-driven decisions about expedited delivery requests or on-the-fly change orders. They can also constantly monitor production status for potential problems or delays. The result is improved on-time order metrics, fewer missed opportunities, and happier customers.

For medium-term planning, comprehensive supply chain management solutions offer valuable data analytics and decision-making support. Plant managers and logistics teams can formulate optimized logistics and purchasing plans to streamline shipping and improve procurement of materials—reducing transportation costs and helping to avoid supply interruptions and inventory backlogs.

And as a long-term planning aid, these systems can be used by business decision-makers to gauge overall supply capacity, determine when production capacity needs to be scaled up, and decide if measures such as outsourcing are warranted.

Bugong has implemented its solution in several manufacturing enterprises, and the results have been impressive. Company officials estimate that the integration of different data systems alone can reduce interdepartmental communication costs by up to 50%. The technology also appears to scale well in real-world scenarios. In one large-scale deployment, Bugong set up a collaborative supply chain planning solution for a production line with more than 300 machines and 1,000 complex production processes involving more than 100,000 component parts. The system was robust enough to cope with the attendant heavy computational and data transfer requirements. Bugong reports that it was capable of processing around 5,000 orders—and all of their associated data—in less than 10 seconds.

A Blueprint for Industrywide Success

Software-based systems like the one developed by Bugong are inherently adaptable and flexible. Unlike so-called “lighthouse” smart factories, which are built primarily for demonstration purposes or to suit narrow use cases, these solutions are created to be copied and implemented by others. This opens the door for OEMs, solutions providers, and systems integrators to develop custom solutions that meet their customers’ needs—and to get new products and service offerings to market faster.

The prevalence of ERP, MES, and similar systems demonstrates that manufacturers value the efficiency enhancements that digital solutions can provide. Integrated, edge-to-cloud systems offer a whole new level of advanced management capabilities that will prove attractive to business decision-makers in the industry.

“Comprehensive digital supply chain management is a huge win for everyone in the industry,” says Kun Huang. “These solutions will help enterprises complete the digital transformation of their production processes, create new business opportunities for OEMs and SIs, and improve the efficiency and profitability of the manufacturing sector as a whole.”

 

This article was edited by Teresa Meek, Editorial Director for insight.tech.

AI-Powered Spaces That Work for Your Business: With Q-SYS

Struggling to keep your hybrid workforce engaged and productive? Enter high-impact spaces, driven by the transformative power of AI and changing the way we work and interact in both physical and digital spaces.

In this episode we dive into the exciting possibilities of high-impact spaces, exploring their potential alongside the technology, tools, and partnerships making them a reality.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guest: Q-SYS

Our guest this episode is Christopher Jaynes, Senior Vice President of Software Technologies at Q-SYS, a division of the audio, video, and control platform provider QSC. At Q-SYS, Christopher leads the company’s software engineering as well as advanced research and technologies in the AI, ML, cloud, and data space.

Podcast Topics

Christopher answers our questions about:

  • 2:19 – High-impact spaces versus traditional spaces
  • 4:34 – How AI transforms hybrid environments
  • 10:02 – Various business opportunities
  • 12:59 – Considering the human element
  • 16:23 – Necessary technology and infrastructure
  • 19:24 – Leveraging different partnerships
  • 21:10 – Future evolutions of high-impact spaces

Related Content

To learn more about high-impact spaces, read High-Impact Spaces Say “Hello!” to the Hybrid Workforce. For the latest innovations from Q-SYS, follow them on X/Twitter at @QSYS_AVC and LinkedIn at QSC.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re diving into the world of high-impact AI-powered spaces with Christopher Jaynes from Q-SYS.

But before we get started, Chris, what can you tell our listeners about yourself and what you do at Q-SYS?

Christopher Jaynes: Yeah, well, thanks for having me here. This is exciting. I can’t wait to talk about some of these topics, so it’ll be good. I’m the Senior Vice President for Software and Technology at Q-SYS. So, we’re a company that enables video and audio and control systems for your physical spaces. But in reality I’m kind of a classical computer scientist. I was trained as an undergrad in computer science, got interested in robotics, followed a path into AI fairly quickly, did my PhD at University of Massachusetts.

And I got really interested in how AI and the specialized applications that were starting to emerge around the year 2000 on our desktops could move into our physical world. So I went on, I left academics and founded a technology company called Mersive in the AV space. Once I sold that company, I started to think about how AI and some of the real massive leaps around LLMs and things were starting to impact us.

And that’s when I started having conversations with QSC, got really interested in where they sit—the intersection between the physical world and the computing world—which I think is really, really exciting. And then joined the company as their Senior Vice President. So that’s my background. It’s a circuitous path through a couple different industries, but I’m now here at QSC.

Christina Cardoza: Great. Yeah, can’t wait to learn a little bit more about that. And Q-SYS is a division of QSC, just for our listeners. So I think it’s interesting—you say in the 2000s you were really interested in this, and it’s just interesting to see how much technology has advanced, how long AI has been around, but how it’s hitting mainstream or more adoption today after a couple of decades. So I can’t wait to dig into that a little bit more.

But before we get into that, I wanted to start off the conversation. Let’s just define what we mean by high-impact spaces: what are traditional spaces, and then what do we mean by high-impact spaces?

Christopher Jaynes: Yeah. I mean, fundamentally, I find that term really interesting. I think it’s great. It’s focused on the outcome of the space, right? In the old days we’d talk about a huddle room or a large conference room or a small conference room. Those are physical descriptions of a space—not so exciting. I think what’s way more interesting is what’s the intended impact of that space? What are you designing for? And high-impact spaces, obviously, call out the goal. Let’s have real impact on what we want to do around this human-centered design thing.

So, typically what you find in the modern workplace and in your physical environments now after 2020 is a real laser focus on collaboration, on the support of hybrid work styles, deep teamwork, engagement—all these outcomes are the goal. And then you bring technology and software and design together in that space to enable certain work styles really quickly—quick and interesting.

I’ll tell you one thing that’s really, really compelling for me is that things have changed dramatically. And it’s an easy thing to understand. I am at home in my home office today, right? But I often go into the office. I don’t go into the office to replicate my home office. So, high-impact spaces have gotten a lot of attention from the industry because the intent of a user or somebody to go into their space is to find something that they can’t get at home, which is this more interesting, higher-impact, technology-enabled—things you can do there together with your colleagues like bridge a really exciting collaborative meeting with remote users in a seamless way. I can’t do that here, right?

Christina Cardoza: I think that’s a really important point, especially as people start going back to the office more, or businesses have initiatives to get some more people more back in the office or really increase that hybrid workspace. Employees may be thinking, “Well, why do I have to go into the office when I can just do everything at home?” But it’s a different environment, like you said, a different collaboration that you get. And of course, we’ve had Zoom, and we have whiteboards in the office that give us that collaboration. But it’s how is AI bringing it to the next level or really enhancing what we have today?

Christopher Jaynes: Well, let me first say I think the timing couldn’t be better for some of the breakthroughs we’ve had in AI. I’ve been part of the AI field since 1998, I think, and watching what’s happened—it’s just been super exciting. I mean, I know most of the people here at QSC are just super jazzed about where this all goes—both because it can transform your own company, but what does it do about how we all work together, how we even bring products to market? It’s super, super timely.

If you look at some of the bad news around 2020, there’s some outcomes in learning and employee engagement that we’re all now aware of, right? There’s some studies that have come out that showed: hey, that was not a good time. However, if you look back at the history of the world, whenever something bad like this happens, the outcome typically means we figure it out and improve our workplace. That happened in the cholera epidemic and some of the things that happened way back in the early days.

What’s great now is AI can be brought to bear to solve some of these, what I’d call grand challenges of your space. These are things like: how would I take a remote user and put them on equal footing, literally equal footing from an engagement perspective, from an understanding and learning perspective, from an enablement perspective—how could I put them on an equal footing with people that are together in the room working together on a whiteboard, like you mentioned, or brainstorming around a 3D architectural model. How does all of that get packaged up in a way that I can consume it as a remote user? I want it to be cinematic and engaging and cool.

So if you think about AI in that space, you have to start to think about classes of AI that take—they leverage generative models, like these large language models. But they go a little bit past that into other areas of AI that are also starting to undergo their own transformations. So, these are things like computer vision; real-time audio processing and understanding; control and actuation; so, kinematics and robotics. So what happens, for example, when you take a space and you equip it with as many sensors, vision sensors, as you want? Like 10, 15 cameras—could you then take those cameras and automatically track users that walk into the space, track the user that is the most important at the moment? Like where would a participant’s eyes likely to be tracking a user if they’re in the room versus people out? How do you crop those faces and create an egalitarian view for remote users?

So that’s some work we’re already doing now that was part of what we’re doing with Vision Suite, the Q-SYS Vision Suite. It’s all driven by some very sophisticated template and face tracking, kinesthetic understanding of the person’s pose—all this fun stuff so that we can basically give you the effect of a multi-camera director experience. Somebody is auto-directing that—the AI is doing it—but when you’re remote you can now see it in exciting ways.

Audio AI—so it’s really three pillars, right? Vision AI, audio AI, and control or closed-loop control and understanding. Audio AI obviously can tag speakers and auto-transcribe in a high-impact space—that’s already something that’s here. If you start to dream a little further you can say, what would happen if all those cameras could automatically classify the meeting state? Well, why would I want to do that? Is it a collaborative or brainstorming session? Is it a presentation-oriented meeting?

Well, it turns out maybe I change the audio parameters when it’s a presentation of one to many, versus a collaborative environment for both remote and local users, right? Change the speaker volumes, make sure that people in the back of the room during the presentation can see the text on the screen. So I autoscale, or I choose to turn on the confidence monitors at the back of that space and turn them off when no one’s there to save energy.

Those are things that people used to shy away from in the AV industry because they’re complicated and they take a lot of programming and specialized behaviors and control. You basically take a space that could have cost you $100K and drive it up to $500,000, $600,000 deployments. Not repeatable, not scalable.

We can democratize all that through AI control systems, generative models that summarize your meeting. What would happen, for example, if you walked in, Christina, you walked into a room and you were late, but the AI knew you were going to be late and auto-welcomed you at the door and said, “The meeting’s been going for 10 minutes. There’s six seats at the back of the room. It’s okay, I’ve already emailed you a summary of what’s happened so that you can get back in and be more engaged.” That’s awesome. We should all have that kind of stuff, right? And that’s where I get really excited. It’s that AI not on your desktop, not for personal productivity, but where it interacts with you in the physical world, with you in that physical space.

Christina Cardoza: Yeah, I think we’re already seeing some small examples of that in everyday life. I have an Alexa device that when I ask in the morning to play music or what the weather is, it says, “Oh, good morning, Christina.” And it shows me things on the screen that I would like more personalized than anybody else in my home. So it’s really interesting to see some of these happening already in everyday life.

We’ve been predominantly talking about the business and collaboration in office spaces. I think you started to get into a couple of different environments, because I can imagine this being used in classrooms or lecture halls, stores—other things like that. So can you talk about the other opportunities or different types of businesses that can leverage high-impact spaces outside of that business or office space? If you have any customer examples you want to highlight or use cases you can provide.

Christopher Jaynes: We really operate—I just think about it in the general sense of what your physical and experience will look like. What’s that multi-person user experience going to be when you walk into a hotel lobby? How do you know what’s on the screens? What are the lighting conditions? If you have an impaired speaker at a theme park, how do you know automatically to up the audio levels? Or if somebody’s complaining in a space that says, “This sounds too echoy in here,” how do you use AI audio processing to do echo cancellation on demand?

So that stuff can happen in entertainment venues; it can happen in hospitality venues. I tend to think more about the educational spaces partly because of my background. But also the enterprise space as well, just because we spend so much time focusing on that and we spend a lot of time in those spaces, right?

So, I want to make one point though: when we think about the use cases, transparency of the technology is always something I’ve been interested in. How transparent can you make the technology for the user? And it’s kind of a design principle that we try to follow. If I walk into a classroom or I walk into a theme park, in both of those spaces if the technology becomes the thing I’m thinking about, it kind of ruins this experience, right?

Like if you think about a classroom where I’m a student and I’m having to ask questions about: “Where’s the link for the slides again?” or, “I can’t see what’s on monitor two because there’s a pillar in the way. Can you go back? I’m confused.” Same thing if I go to a theme park and I want to be entertained and immersed in some amazing new approach to—I’m learning about space, or I’m going through a journey somewhere, and I’m instead thinking about the control system seems slow here, right?

So you need to basically set the bar so high, which I think is fun and interesting. You set the technology bar so high that you can make it transparent and seamless. I mean, when was the last time you watched a sci-fi movie? It was kind of like sci-fi movies now have figured that out, right? All the technology seems almost ghostly and ephemeral. In the 60s it was lots of video people pushing buttons and talking and interacting with their tech because it was cool. That’s not where we want to be. It should be about the human being in those spaces using technology; it makes that experience totally seamless.

Christina Cardoza: Yeah, I absolutely agree. You can have all the greatest technology in the world, but if people can’t use it or if it’s too complicated, it almost becomes useless. And so that was one of my next questions I was going to ask, is when businesses are approaching AI how are they considering the human element in all of this? How are humans going to interact with it, and how do they make sure that it is as non-intrusive as possible?

Christopher Jaynes: Yeah. And the word “intrusive” is awesome, because that does speak to the transparency requirement. But then that does put pressure on companies thinking through their AI policy, because you want to reveal the fact that your experience in the workplace, the theme park, or the hotel are all being enabled by AI. But that should be the end of it. So you’ve got to think through carefully about setting up a clear policy; I think that’s really, really key. Not just about privacy, but also the advantages and value to the end users. So, a statement that says, “This is why we think this is valuable to you.”

So if you’re a large bank or something, and you’re rolling out AI-enabled spaces, you’ve got to be able to articulate why it is valuable to the user. A privacy statement that aligns with your culture, of course, is really key. And then also allow, like I mentioned, allowing users to know when AI is supporting them.

In my experience, though, the one thing I think that’s really interesting is users will go off the rails and get worried—and also they should be, when a company doesn’t clearly link those two things together. And I mean also the vendors. So when we build systems, we should be building systems that support the user from where the data is being collected, right? I mean the obvious example is if I use Uber, then Uber needs to know where I’m located. That’s okay. Because I want them to know that—that’s the value that I’m getting so they can drive a car there, right?

If you do the same in your spaces—like you create a value loop that allows a user as they get up in a meeting and as they walk away, their laptop is left behind. But the AI system can recognize a laptop—that’s a solved problem—and auto-email me because it knows who I am. That’s pretty cool. And say, “Chris, your laptop’s in conference room 106. There’s not another meeting for another 15 minutes. Do you want me to ticket facilities, or you want to just go back and get it?”

That kind of closed-loop AI processing is really valuable, but you need to be thinking through all those steps: identity, de-identification for GDPR—that’s super, super big. And if you have this kind of meeting concierge that’s driving you that’s an AI, you have to think through where that data lives. You’d have to be responsible about privacy policies and scrubbing it. And then if a company is compliant with international privacy standards, make that obvious, right? Make it easy to find, and articulate it cleanly to your users. And then I think adoption rates go way up.

Christina Cardoza: Yeah. We were talking about the sci-fi movies earlier, where you had all the technologies and pushing buttons, and then we have the movies about the AI where it’s taking over. And so people have a misconception of what AI or this technology is really—how it’s being implemented. So, I agree: any policies or any transparency of how it’s supposed to work, how it is working, just makes people more comfortable with it and really increases the level of adoption.

You mentioned a couple of different things that are happening with lighting, or echo cancellation, computer vision. So I’m curious what the backend of implementing all of this looks like—that technology or infrastructure that you need to set up to create high-impact spaces. Is some of this technology and infrastructure already in place? Is it new investments? Is it existing infrastructure you can utilize? What’s going on there?

Christopher Jaynes: Yeah, that’s a great question, yeah. Because I’ve probably thrown out stuff that scares people, and they’re thinking, “Oh my gosh, I need to go tear everything out and restart, building new things.” The good news is, and maybe surprisingly, this sort of wave of technology innovation is mostly focused on software, cloud technologies, edge technologies. So you’re not necessarily having to re-leverage things like sensors, actuators, cameras and audio equipment, speakers and things.

So for me it’s really about—and this is something I’ve been on the soapbox on for a long time—if you can have a set of endpoints—this is one reason I even joined QSC—endpoints, actuators, and connect those through a network—like a commodity, true network, not a specialized network, but the internet, and attach that to the cloud. That to me is the topology that enables us to be really fast moving.

So that’s probably very good news to the traditional AV user or integrator, because once you deploy those hardware endpoints, as long as they’re driven by software the lifecycle for software is much faster. A new piece of hardware comes out once every four or five years. We really can release software two, three times a year, and that has new innovation, new algorithms, new approaches to this stuff.

So if you really think about those three pillars: the endpoints—like the cameras, the sensors, all that stuff in the space—connected to an edge or control processor over the network, and then that thing talking to the cloud—that’s what you need to get on this sort of train and ride it to software future because now I can deliver software into the space.

You can use the cloud for deeper AI reasoning and problem-solving for inference-generation. Analytics—which we haven’t talked about much yet—can happen there as well. So, insights about how your users are experiencing the technology can happen there. Real-time processing happens on that edge component for low latency, echo cancellation, driving control for the pan tilts—so the cameras in the space—and then the sensors are already there and deployed. So that, to me, is those three pieces.

Christina Cardoza: And I know the company recently acquired Seervision—and insight.tech and the IoT Chat also are sponsored by Intel—so I imagine you’re leveraging a lot of partnerships and collaborations to really make some of this, like the real-time analytics, happen—those insights be able to make better decisions or to implement some of these things.

So, wanted to talk a little bit more about this: the importance of your partnership with Intel, or acquiring companies like Seervision to really advance this domain and make high-impact spaces happen.

Christopher Jaynes: Oh, that’s an awesome question. Yeah, I should mention that QSC, the Q-SYS product, the Q-SYS architecture, and even the vision behind it was to leverage commodity compute to then build software for things that people at the time when it was launched thought, “No, you can’t do that. You need to go build a specialized FPGA or something custom to do real-time audio, video, and control processing in the space.” So really the roots of Q-SYS itself are built on the power of Intel processing, really, which was at the time very new.

Now I’m a computer scientist, so for me that’s like, okay, normal. But it took a while for the industry to move out of that—the habit of building very, very custom hardware with almost no software on it. With Intel processors we’re able to build—be flexible and build AV processing. Even AI algorithms now, with some of the on-chip computing stuff that’s happening, can be leveraged with Intel.

So that’s really, really cool. It’s exciting for us for sure, and it’s a great partnership. So we try to align our roadmaps together, especially when we have opportunity to do so, so that we’re able to look ahead and then deliver the right software on those platforms.

Christina Cardoza: Looking ahead at some of this stuff, I wanted to see, because things are changing rapidly every day now—I mean, when you probably first got into this in 1998 and back in the 2000s, things that we have today were only a dream back then, and now it’s reality. And it’s not only reality, but it’s becoming smarter and more intelligent every day. So how do you think the future of high-impact spaces is going to evolve or change over the next couple of years?

Christopher Jaynes: I feel like you’re going to find that there is a new employee that follows you around and supports your day, regardless of where you happen to be as you enter and leave spaces. And those spaces will be supported by this new employee that’s really the AI concierge for those spaces. So that’s going to happen faster than most people, I think, even realize.

There’s already been an AI that’s starting to show up behind the scenes that people don’t really see, right? It’s about real-time echo canceling or sound separation—audio-scene recognition’s a great one, right? That’s already almost here. There’s some technologies and some startups that have brought that to bear using LLM technologies and multi-modal stuff that’ll make its appearance in a really big way.

And the reason I say that is it’ll inform recognition in such a powerful way that not only will cameras recognize what a room state is, but the audio scene will help inform that as well. So once you get to that you can imagine that now you can drive all kinds of really cool end-user experiences. I’ll try not to speculate too much, because some of them we’re working on and they’ll only show up in our whisper suites until we actually announce them. But imagine the ability to drive to your workplace on a Tuesday, get out of your car, and then get an alert that says, “Hey, two of your colleagues are on campus today, and one of them is going to hold the meeting on the third floor. I know you don’t like that floor because of the lighting conditions, but I’ve gone ahead and put in a support ticket, and it’s likely going to be fixed for you before you get there.”

So there’s this like, in a way you can think about the old days of your spaces as being very reactive or even ignored, right? If something doesn’t work for me or I arrive late—like my example I gave you earlier of a class—it’s very passive. There’s no “you” in that picture; it’s really about the space and the technology. What AI’s going to allow us to do is have you enter the picture and get what you need out of those spaces and really flip it so that those technologies are supporting your needs almost in real time in a closed-loop fashion.

I keep saying that “closed loop.” What I mean is, the sensing that happened and has happened—maybe it’s even patterns from the last six, seven months—will drive your experience in real time as you walk into the room or as you walk into a casino or you’re looking for your hotel space. So I think there’s a lot of thinking going into that now, and it’s going to really make our spaces far more valuable for far less—way more effective for a far less cost, really, because it’s software-driven, like I mentioned before.

Christina Cardoza: Yeah, I think that’s really exciting. I’m seeing that employee follow around a little bit in the virtual space when I log into a Zoom or a Teams meeting; the project manager always has their AI assistant already there that’s taking notes, transcribing it, and doing bullet points of the most important things. And that’s just on a virtual meeting. So I can’t wait to see how this plays out in physical spaces where you don’t have to necessarily integrate it yourself: it’s just seamless, and it’s just happening and providing so much value to you in your everyday life. So, can’t wait to see what else happens—especially from Q-SYS and QSC, how you guys are going to continue to innovate from this space.

But before we go, just want to throw it back to you one last time. Any final thoughts or key takeaways you want to leave our listeners with today?

Christopher Jaynes: Well, first let me just say thanks a lot for hosting today; it’s been fun. Those are some really good questions. I hope that you found the dialogue to be good. I guess the last thought I’d say is, don’t be shy. This is going to happen; it’s already happening. AI is going to change things, but so did the personal computer. So did mobility and the cell phone. It changed the way we interact with one another, the way we cognate even, the way we think about things, the way we collaborate. The same thing’s happening again with AI.

It’ll be transformative for sure, so have fun with it. Be cautious around the privacy and the policy stuff we talked about a little bit there. You’ve got to be aware of what’s happening, and really I think people like me, our job in this industry is to dream big at this moment and then drive solutions, make it an opportunity, move it to a positive place. So it’s going to be great. I’m excited. We are all excited here at Q-SYS to deliver this kind of value.

Christina Cardoza: Absolutely. Well, thank you again for joining us on the podcast and for the insightful conversation. Thanks to our listeners for tuning into this episode. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Image Segmentation: Exploring the Power of Segment Anything

Innovation in technology is an amazing thing, and these days it seems to be moving faster than ever. (Though never quite fast enough that we stop saying, “If only I had had this tool, think how much time and effort I would have saved!”) This is particularly the case with AI and computer vision, which are transforming operations across industries and becoming incredibly valuable to many kinds of businesses. And in the whole AI/computer vision puzzle, one crucial piece is image segmentation.

Paula Ramos, AI Evangelist at Intel, explores this rapidly changing topic with us. She discusses image-segmentation solutions past, present, and future; dives deep into the recently released SAM (Segment Anything Model) from Meta AI (Video 1); and explains how resources available from the Intel OpenVINO toolkit can make SAM even better.

Video 1. Paula Ramos, AI Evangelist at Intel, discusses recent advancements powering the future of image segmentation. (Source: insight.tech)

What is the importance of image segmentation to computer vision?

There are multiple computer vision tasks, and I think that image segmentation is the most important one. It plays a crucial role there in object detection, recognition, and analysis. Maybe the question is: Why is it so important? And the answer is very simple: Image segmentation helps to isolate individual objects from the background or from other objects. We can localize important information with image segmentation; we can create metrics around specific objects; and we can extract features that can help in understanding one specific scenario—all really, really important to computer vision.

What challenges have developers faced building image-segmentation solutions in the past?

When I was working with image segmentation in my doctoral thesis, I was working in agriculture. I faced a lot of challenges with it because there were multiple techniques for segmenting objects—thresholding, edge detection, region growing—but no one-size-fits-all approach. And depending on which technique you are using, you need to carefully define the best approach.

My work was in detecting coffee beans, and coffee beans are so similar, are so close together! Maybe there were also red colors in the background that would be a problem. So there was over-segmentation—objects split into too many pieces—happening when I was running my image-segmentation algorithm. Or there was under-segmentation—objects merged together—and I was missing some of the fruits.

That is the challenge with data, especially when it comes to image segmentation, because it is difficult to work in an environment where the light is changing or where you have different kinds of camera resolution. Basically, you are moving the camera, so you get some blurry images or some noise in the images. Detecting boundaries is also challenging. Another challenge for traditional image segmentation is scalability and efficiency. Depending on the resolution of the images or how large the data sets are, the computational cost will be higher, and that can limit real-time application.

And in most cases, you need to have human intervention to use these traditional methods. I could have saved a lot of time in the past if I had had the newest technologies in image segmentation then.
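To make that limitation concrete, here is a small illustration of the kind of classical pipeline Ramos describes, using global and adaptive thresholding in OpenCV. The image path and parameter values are assumptions; the point is that each scene needs its own hand-tuned settings.

```python
# Classical threshold-based segmentation, illustrating why lighting changes
# lead to over- or under-segmentation. The image path and parameters are
# assumptions chosen for illustration.
import cv2

image = cv2.imread("coffee_branch.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image

# Otsu picks one global threshold from the histogram; uneven lighting or low
# contrast shifts that threshold and can merge or drop objects.
_, global_mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Adaptive thresholding copes better with uneven lighting, but the block size
# and offset still have to be tuned by hand for each scene.
adaptive_mask = cv2.adaptiveThreshold(
    image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 5
)
```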

What is the value of Meta AI’s Segment Anything Model (SAM) when it comes to these challenges?

I would have especially liked to have had the Segment Anything Model seven years ago! Basically, SAM improves the performance on complex data sets. So those problems with noise, blurry images, low contrast—those are things that are in the past, with SAM.

Another good thing SAM has is versatility and prompt-based control. Unlike traditional methods, which require specific techniques for different scenarios, SAM has a versatility that allows users to specify what they want to segment through prompts. And prompts can be points, boxes, or even natural-language descriptions.
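As a hedged sketch of that prompt-based workflow, the snippet below uses Meta AI’s segment-anything package to segment whatever object sits under a single foreground click. The checkpoint file, image, and click coordinates are placeholders.

```python
# Hedged sketch of point-prompted segmentation with Meta AI's segment-anything
# package. The checkpoint path, image, and click coordinates are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical checkpoint file
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("coffee_branch.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click (label 1), placed roughly on the object of interest.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean mask of the selected object
```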

“Image segmentation is one of the most important #ComputerVision tasks. It plays a crucial role in object detection, recognition, and analysis,” – Paula Ramos, @intel via @insightdottech

I would love to have been able to say, in the past, “I want to see just mature coffee beans” or “I want to see just immature coffee beans,” and to have had that flexibility. That flexibility can also empower developers to handle diverse segmentation tasks. I also mentioned scalability and efficiency earlier: With SAM the information can be processed faster than with the traditional methods. So real-time applications become more feasible, and the accuracy is also higher.

For sure, there are some limitations, so we need to balance that, but we are also improving the performance on those complex cases.

What are the business opportunities with the Segment Anything Model?

The Segment Anything Model presents several potential business opportunities across all the different image-segmentation processes that we know at this point. For example, creating or editing content in an easy way, automatically manipulating images, or creating real-time special effects. Augmented reality and virtual reality are also fields heavily impacted by SAM, with real-time object detection enabling virtual elements in interactive experiences.

Another thing is maybe product segmentation in retail. SAM can automatically segment product images in online stores, enabling more efficient product sales. Categorization based on specific object features is another possible area. I can also see potential in robotics and automation to achieve more precise object identification and manipulation in various tasks. And autonomous vehicles, for sure. SAM also has the potential to assist medical professionals in tasks like tumor segmentation or making more accurate diagnoses—though I can see that there may be a lot of reservations around this usage.

And I don’t want to say that those business problems will be solved with SAM; these are potential applications. SAM is still under development, and we are still improving it.

How can developers overcome the limitations of SAM with OpenVINO?

I think one of the good things right now in all these AI trends is that so many models are open source, and that is the case with SAM as well. OpenVINO is also open source, and developers can access the toolkit very easily. Every day we put multiple AI trends in the OpenVINO Notebooks repository—something happens in the AI field, and two or three days after that we have the notebook there. And good news for developers: We already have optimization pipelines for SAM in the OpenVINO repository.

We have a series of four notebooks there right now. The first one is the Segment Anything Model that we have been talking about; this is the most common one. You can compile the model and use OpenVINO directly, and you can also optimize the model using the Neural Network Compression Framework—NNCF.
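The notebook internals evolve quickly, but the two paths described here (compiling a converted SAM component with OpenVINO directly, or quantizing it first with NNCF) generally follow a pattern along these lines. The IR file name and the calibration data in this sketch are illustrative assumptions rather than code from the repository.

```python
# Sketch of the two optimization paths mentioned above; file names and
# calibration data are illustrative assumptions.
import numpy as np
import openvino as ov
import nncf

core = ov.Core()

# Path 1: read a SAM component already converted to OpenVINO IR and compile it.
encoder = core.read_model("sam_image_encoder.xml")   # hypothetical IR file
compiled_fp = core.compile_model(encoder, "CPU")     # "GPU" also works if available

# Path 2: post-training quantization with NNCF, then compile the INT8 model.
# SAM's image encoder expects 1x3x1024x1024 inputs; dummy arrays stand in for
# real calibration images here.
calib_images = [np.random.rand(1, 3, 1024, 1024).astype(np.float32) for _ in range(10)]
calib_dataset = nncf.Dataset(calib_images)
quantized = nncf.quantize(encoder, calib_dataset)
compiled_int8 = core.compile_model(quantized, "CPU")
```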

Second, we have the Fast Segment Anything Model. The original SAM is a heavy transformer model that requires a lot of computational resources. We can solve the problem with quantization, for sure, but FastSAM decouples the Segment Anything task into two sequential stages using YOLOv8.

We then have EfficientSAM, a lightweight SAM model that delivers SAM-level performance with largely reduced complexity. And the last resource, which was just posted in the OpenVINO repository recently, is GroundingDINO plus SAM, called GroundedSAM. The idea is to find the bounding boxes and at the same time segment everything inside those bounding boxes.
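The exact wiring in the notebook differs, but the GroundedSAM idea can be sketched at a high level: a text prompt produces bounding boxes, and SAM segments whatever falls inside them. In this sketch, detect_boxes_from_text() is a hypothetical placeholder for a GroundingDINO call, not a real API.

```python
# Conceptual sketch of the GroundedSAM idea: a text prompt yields bounding boxes,
# and SAM then segments everything inside those boxes.
import numpy as np

def detect_boxes_from_text(image: np.ndarray, prompt: str) -> list[np.ndarray]:
    """Hypothetical placeholder: run an open-vocabulary detector such as
    GroundingDINO and return [x0, y0, x1, y1] boxes matching the prompt."""
    raise NotImplementedError

def grounded_sam(image: np.ndarray, prompt: str, predictor) -> list[np.ndarray]:
    """Segment every region the text prompt grounds to, using a SamPredictor-like object."""
    predictor.set_image(image)
    masks = []
    for box in detect_boxes_from_text(image, prompt):  # e.g. "mature coffee beans"
        m, _, _ = predictor.predict(box=box)
        masks.append(m)
    return masks
```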

And the really good thing is that you don’t need a specific machine to run these notebooks; you can run them on your laptop and see the potential of image segmentation with these models right there.

How will OpenVINO continue to evolve as SAM and AI evolve?

I think that OpenVINO is a great tool for reducing the complexity of building deep learning applications. If you have expertise in AI, it’s a great place to learn more about AI trends and also to understand how OpenVINO can improve your day-to-day routine. But if you are a new developer, or a developer who is not an expert in AI, it’s a great starting point as well, because you can see the examples that we have there and follow every single cell in the Jupyter Notebooks.

So for sure we’ll continue creating more examples and more OpenVINO notebooks. We have a talented group of engineers working on that. We are also trying to create meaningful examples—proofs of concept that can be utilized day to day.

Another thing is that last December the AI PC was launched. I think this is a great opportunity to understand the capabilities that we are enhancing every day—improving the hardware that developers use so that they don’t need specialized hardware to run the latest AI trends. It is possible to run models on your laptop and still get good performance.

I was a beginner developer myself some years ago, and I think for me it was really important to understand how AI was moving at that moment, to understand the gaps in the industry, to stay one step ahead of the curve, to improve, and to try to create new things.

And something else that I think is important for people to understand is that we want to know what your needs are: What are the kinds of things you want to do? We are open to contributions. Take a look at the OpenVINO Notebooks repository and see how you can contribute to it.

Related Content

To learn more about image segmentation, listen to Upleveling Image Segmentation with Segment Anything and read Segment Anything Model—Versatile by Itself and Faster by OpenVINO. For the latest innovations from Intel, follow them on X at @Intel and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

“Bank on Wheels” and Edge Computing Serve Rural Communities

Think about the last time you withdrew money from an ATM, used a line of credit, or made a deposit. Most of us take these essential financial services for granted. But millions of rural citizens worldwide don’t have a bank account, and when they do, the convenience of branch banking is not available.

“It’s not cost-effective for banks to open new branches in remote locations,” says Amit Jain, Managing Director of Bits & Bytes, a smart kiosk and digital signage specialist. “And banks in rural areas often face infrastructure problems like power cuts and network outages.”

When people can’t get to the bank, it’s not merely an inconvenience. It’s also an issue of equity when citizens can’t fully participate in the wider economy. But a new kind of edge solution addresses this problem in a surprising way: bringing the bank branch to the people.

Bits & Bytes developed a “Bank on Wheels” powered by edge computing hardware and telecommunications networks. It is already improving access to financial services in remote communities in India and is poised to enter other markets around the world.

Rural Branch Banking in Action

A Bits & Bytes mobile branch deployment in India is an excellent example of how these solutions can help. Maharashtra state is one of India’s most populated and heavily industrialized regions. But more than 50% of the state’s population lives in rural areas, leaving many citizens without access to the services their urban counterparts enjoy.

Working with a large national bank, Bits & Bytes developed a solution that can perform many functions of a traditional branch and can be driven from location to location as needed.

The heart of the system is a #digital kiosk that runs on rugged, edge-friendly computing #hardware. Bits & Bytes via @insightdottech

The heart of the system is a digital kiosk that runs on rugged, edge-friendly computing hardware. It has a built-in camera and fingerprint scanner for biometric authentication, plus a touchpad for user interaction. The kiosk is installed in a van that can be driven to different rural areas and parked as long as needed.

The system connects customers to the bank in two ways. A data card allows it to communicate with the institution’s centralized server via standard cellular networks, and a bank employee can ride along with the driver to help new customers learn how to use the technology and answer questions.

The mobile kiosk helps customers open a new account, obtain a debit card, and perform transactions like cash withdrawals, deposits, loan applications, bill payments, and transfers.

Since deployment, the bank on wheels has been a resounding success with customers. “Before, some of these people had to pay specialized agents a fee to travel to the nearest branch in person and perform transactions for them,” says Jain. “They were delighted to be able to do their own transactions directly for the first time.”

Ensuring Compliance and Security at the Edge

Financial systems have stringent security and compliance requirements that vary from country to country. Flexible design and edge capabilities help overcome these challenges and make it possible to deploy the solution in many different markets.

For example, the Bits & Bytes solution complies with India’s strict “know-your-customer” laws, using its secure network connection and biometric authentication capabilities. The mobile banking kiosk performs basic biometric scanning and then communicates with a bank server connected to the central government database. After authentication, a pre-filled application form is fetched and needs only to be signed on the touchpad to finish opening the account.
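The Bits & Bytes implementation itself is proprietary, but the account-opening sequence described above can be sketched in pseudocode. Every function name below is a hypothetical stand-in for a kiosk or bank-side component, not an actual API.

```python
# Hypothetical sketch of the kiosk's account-opening flow as described above.
# None of these functions are real Bits & Bytes APIs; they illustrate the sequence only.

def open_account(kiosk, bank_server):
    # 1. Capture biometrics locally (camera + fingerprint scanner); nothing is stored at the edge.
    biometric_sample = kiosk.scan_biometrics()

    # 2. The bank server verifies the sample against the central government KYC database.
    identity = bank_server.verify_kyc(biometric_sample)
    if identity is None:
        return kiosk.show_message("Identity could not be verified")

    # 3. A pre-filled application form comes back over the secure cellular link.
    form = bank_server.fetch_prefilled_form(identity)

    # 4. The customer signs on the touchpad to finish opening the account.
    signature = kiosk.capture_signature(form)
    return bank_server.submit_application(form, signature)
```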

The elegance of the basic design—an edge IPC and modular hardware linked to a central server over a cellular network connection—means that the system immediately becomes a part of the bank’s existing network. This also means that no personal user data is stored at the edge. Everything is kept within the financial institution’s network—with all the data privacy and cybersecurity precautions this implies.

Plus, a mobile branch can easily be adapted to new regions with different data privacy and regulatory requirements. Because those countries’ regulations have already been met by the financial institution, there is no need for extensive customization to the kiosk software.

Bits & Bytes’ technology partnership with Intel is crucial to the solution. “Intel hardware provides an excellent platform for edge computing,” says Jain. “Intel also plays a vital role in product development, helping us to adapt off-the-shelf Intel technologies to bring new products to market.”

Edge Computing Powers Digital Transformation

The ability to address rural banking shortages and increase the number of customers will likely attract the attention of bank digitization departments and financial-industry systems integrators (SIs).

The rise of edge computing has not only enabled systems like the Bits & Bytes mobile banking kiosks—it also has the potential to tackle tough problems in multiple industries. In the years ahead, expect to see more innovative solutions deployed at the edge, from autonomous mobile robots in agriculture to private 5G networks for mining operations.

The bank on wheels is an excellent example of the current wave of digital transformation at the edge—and AI will open up even more opportunities in the coming years.

“We’re living in an era of rapid technological advancement in nearly every sector—which is why as a company we offer products for so many different verticals,” says Jain. “Five years down the line, when AI and IoT are everywhere, all kinds of people and organizations will be able to enjoy the benefits of digital transformation.”

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Retail Systems Integrators Deploy AI Solutions with Ease

Today’s retail customers increasingly expect personalization and self-service options, adding new layers of complexity to store technology. To stay ahead of the competition, retailers need sophisticated systems like voice recognition, computer vision, and AI-based scanners. And these systems must communicate seamlessly with one another and with existing machines and merchandise.

Systems integrators (SIs) work with retailers to orchestrate the latest technology, but they must spend countless hours evaluating all the different hardware and software options to build a custom solution for each client.

Now there’s a better way to get it done. Experienced solutions aggregators have tested and deployed many of the cutting-edge technologies available in today’s retail market, and have the know-how to integrate them into complete, end-to-end solutions. By collaborating with aggregators, SIs can save time, better serve their customers’ needs, and gain assurance that the advanced technologies they provide will function as intended.

Retail AI Solutions Delivered Right Out of the Box

For many retailers, automation and self-service technology can’t come soon enough. Stressed by employee shortages, demanding customers, and inflation-squeezed profit margins, they approach SIs to find ways of increasing efficiency, says David Lester, Business Development Manager at BlueStar, Inc., a global supplier of technology solutions for retailers, manufacturers, logistics companies, and other industries.

Specialized technologies have been developed to enhance efficiency and improve the customer experience across the retail spectrum. Working closely with SIs, BlueStar has assembled 30 unique “In-a-Box” solutions for retail operations ranging from quick-serve restaurants to malls, hotels, grocery stores, and boutiques. These ready-to-go bundled packages contain all the hardware, software, and accessories SIs need for deployment, minimizing decision-making and reducing setup time, Lester says.

“If you’re a systems integrator for a quick-serve restaurant, the last thing you want to do is source individual pieces for scanning, payment processing, inventory management, and everything else involved in a point-of-sale system. With a BlueStar In-a-Box solution, you open the box, set it on the counter, and start using it then and there,” Lester adds.

Helping Systems Integrators with AI Automation Technology

One increasingly popular retail technology—especially at drive-through QSRs—is voice-based AI, Lester says. For this use case, BlueStar partners with Sodaclick, a provider of interactive voice technology for digital ordering. “We like Sodaclick conversational voice AI because it is very good at understanding what customers want,” Lester says.

The Sodaclick conversational virtual assistant, used at drive-throughs and kiosks, uses Intel® RealSense 3D cameras to recognize approaching customers, and can be programmed to understand English, Spanish, Mandarin Chinese, and more than 100 other languages and regional accents. The system responds to customers in natural-sounding language and can offer suggestions and promotions based on demographics, time of day, or other metrics that retailers choose.

The combination of voice recognition and computer vision also works well at stores with self-service payment systems, where merchandise recognition can be tricky.

That was the case at the Fayetteville, Georgia-based fully automated grocery store Nourish + Bloom Market. When the store’s item-recognition software failed to properly account for salads and other deli items, the company asked SI UST Global Inc for help. UST worked with BlueStar to upgrade the store’s checkout experience with Sodaclick Conversational Voice AI, the UST Vision Checkout system, and automated payment processors, as well as kiosks, scales, cables, and other associated hardware. 

The combination of voice recognition and #ComputerVision also works well at stores with self-service #payment systems, where merchandise recognition can be tricky. @Think_BlueStar via @insightdottech

Customers can now purchase any items in the store without human assistance. UST Vision Checkout includes ceiling-mounted cameras to recognize and record the prices of packaged items as soon as they are removed from shelves and placed in a shopping cart. For salads and other deli products that must be weighed, the customer describes the item to the Sodaclick voice assistant before placing it on a scale. Coordination between the voice system and computer vision cameras results in accurate pricing. After all items are accounted for, the customer simply tells the voice assistant “Pay now” and completes the transaction with a cell phone. “It’s a frictionless process and a great convenience for customers,” Lester says.
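As a rough illustration (not the actual UST or Sodaclick API), the checkout logic described above might be sketched like this, with every component name a hypothetical stand-in:

```python
# Hypothetical sketch of the frictionless checkout flow described above.
# vision_checkout, voice_assistant, scale, and payment are stand-ins, not real product APIs.

def run_checkout(vision_checkout, voice_assistant, scale, payment):
    prices = []

    # Packaged items: ceiling cameras price items as they leave the shelves.
    prices.extend(item.price for item in vision_checkout.detected_items())

    # Weighed items: the shopper describes the item to the voice assistant,
    # and the scale supplies the weight for accurate pricing.
    while (item := voice_assistant.next_described_item()) is not None:
        prices.append(item.price_per_kg * scale.read_kg())

    # "Pay now" closes the transaction from the customer's phone.
    if voice_assistant.heard("pay now"):
        payment.charge(total=sum(prices))
```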

Building Tomorrow’s Retail Infrastructure

As edge AI capabilities improve, BlueStar is expanding the range of its solutions. For example, it is currently developing integrations with clothing technology company FIT:MATCH, which uses Lidar and AI to capture 3D images of a customer’s body shape and match them to a digital twin in the database. The system can then make individualized recommendations for products and sizing. Listen to our podcast: Personalized AI Shopping Experiences: With FIT:MATCH on insight.tech.

Working with Intel helps BlueStar keep up with innovative applications such as this. “Intel plays a pivotal role with us, especially for our In-a-Box solutions,” Lester says. “They’ve helped us tremendously in learning about AI solutions and deploying them as cost-effectively as possible.”

As futuristic as some of the new AI applications may sound, Lester believes they will continue to improve. “I’m seeing advancements in artificial intelligence every month. I think voice AI and digital signage will evolve to become more intuitive, improving contextual understanding and providing even more personalized experiences and better customer engagement.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Patient-Centered AI Redefines Continuum of Care

Healthcare professionals have a singular mission: provide the best possible care for patients. But from admittance to discharge and everything in between, they face countless challenges.

Persistent staff shortages, constrained resources, and tight budgets are just a few. The greatest challenge is access to essential information about a patient’s condition throughout their hospital journey, specifically the second-by-second time series waveform data generated from the biomedical devices monitoring a patient. When seconds matter, how do hospitals harness this data and make it easily accessible to their healthcare teams?

Why Time Series Data Matters

The answer to this challenge is a single, open platform that continuously collects, processes, and unifies disparate data and presents it to clinicians in real time. Take, for example, an eight-hospital system in Houston that was confronting staffing issues and limited provider coverage—especially overnight. This forced difficult decisions, like hiring more travel nurses and physicians or turning patients away. All that changed when the organization implemented the Sickbay® Clinical platform, a vendor-neutral, software-based monitoring and analytics solution, from Medical Informatics Corp. (MIC).

The platform enables flexible care models and the #development and deployment of patient-centered #AI at scale on a single, interconnected architecture. @sickbayMIC via @insightdottech

Sickbay is an FDA-cleared, software-based clinical platform that can help hospitals standardize patient monitoring. The platform enables flexible care models and the development and deployment of patient-centered AI at scale on a single, interconnected architecture. Sickbay redefines the traditional approach of storage and access to static data contained in EMR systems and PACS imaging. The web-based architecture brings near real-time streaming and standardized retrospective data to care teams wherever they are to support a variety of workflows with the same integration. This includes embedded EMR reporting and monitoring data on PCs and mobile devices.

“Out of about 800,000 data points generated each hour for a single patient from bedside monitoring equipment, only about two dozen data points are available for clinical use,” says Craig Rusin, Chief Product & Innovation Officer and cofounder at MIC. It’s not widely known that alarms from non-networked devices such as ventilators are difficult for staff to hear or view remotely from outside a patient’s room. Similarly, current patient monitoring doesn’t use AI tools with the existing data to inform patient care.

Measuring the Impact

Hospitals and healthcare systems using Sickbay have redefined patient monitoring and have created a new standard of flexible, data-driven care by demonstrating the ability to:

  • Rapidly add bed and staff capacity while creating flexible virtual care models that go beyond traditional tele-sitting, admit, and discharge.
  • Provide more near real-time and retrospective data to staff already on unit, on service, or on call to improve their workflows and delivery of care.
  • Create virtual nursing stations where one nurse can monitor 50+ patients on a single user interface across units and/or facilities.
  • Leverage the same infrastructure to create virtual command centers that monitor patients across the continuum of care.

No matter the method of deployment, Sickbay gives control back to healthcare teams and provides direct benefit back to the hospital. Benefits reported include reduced labor, capital, and annual maintenance costs as well as improved staff, patient, and family satisfaction. Most important, clients using Sickbay see direct impact on improvements in quality of care and outcomes, including reductions in length of stay, code blue events, ICU transfers, time on vent, time for dual sign-off, and time to treat.

Results such as these provide the pathway for other hospitals to rethink patient monitoring and realize the vision of near real-time, patient-centered AI. Healthcare leaders have proven that going back to team-based nursing by adding virtual staff can help reverse the staffing crisis. “This isn’t about taking nurses away from patients. This is about taking some of the tasks and centralizing them,” says Rusin. “There will never be enough nurses, physicians, and respiratory therapists to cover all of the demand required for the foreseeable future. We need to get bedside teams back to bedside care. Flexible, virtual care support makes that a reality.”

Changing the Economics of Care

Sickbay provides the ability to change the economics of patient monitoring and to directly improve quality and outcomes.

The ability to integrate with different devices, regardless of function or brand, is the key. “We have created an environment that allows our healers to get access to data they have never had before and build content on top of that, in an economically viable way that has never been achieved,” Rusin says.

For healthcare providers, having the data available is game-changing, says MIC EVP of Strategic Market Engagement, Heather Hitchcock. As one doctor commented: “In a single minute, I have to process 300 data points. No machine is ever going to make a decision for me, but Sickbay helps me process that data faster so I can make the right decision and save more lives.”

From Scalable Patient Monitoring to Predictive Analytics

Sickbay’s value extends beyond near real-time patient monitoring and virtual care to long-term treatment improvements. Sickbay supports the ability to leverage the same data to develop and deploy predictive analytics to help get ahead of deterioration and risk.

Clients currently and continuously develop analytics on Sickbay. For example, one client integrates 32 near real-time, multimodal risk scores into its virtual care workflow. Another client created a Sickbay algorithm that analyzes data generated by two separate monitoring devices to determine ideal blood pressure levels in patients. “The particular analytic requires the blood pressure waveform from a bedside monitor and a measure of cerebral blood density from a different monitor,” says Rusin.
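MIC does not publish the algorithm itself, but the general pattern of aligning two synchronized device streams and computing a rolling relationship between them can be sketched generically. The signal names, sampling assumptions, and window length below are illustrative only, not MIC’s analytic.

```python
# Generic sketch of combining two synchronized waveform streams, in the spirit of
# the multimodal analytic described above. Signal names and the rolling window are
# illustrative assumptions; both series are expected to share a DatetimeIndex.
import pandas as pd

def rolling_multimodal_index(bp: pd.Series, cerebral: pd.Series, window: str = "60s") -> pd.Series:
    """Align two time-indexed signals and compute a rolling correlation between them."""
    df = pd.concat({"bp": bp, "cerebral": cerebral}, axis=1).interpolate().dropna()
    return df["bp"].rolling(window).corr(df["cerebral"])
```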

Saving Lives with Data

Treatment of patients across the care continuum today will lead to improved care tomorrow. To get there, reliable, specific data is the starting block. Without it, clinicians are left to their best guesses to solve the body’s most urgent care needs, without the data-driven decision-making support they desire. That’s slow, costly, unfair to caregivers, and ultimately does not provide the best benefit for the patient.

To truly realize a future where treatment is as specific and individual as the person it serves, healthcare must harness patient data in a way that is most impactful—specific, accurate, near real-time, vendor-agnostic, transformable, and instantly accessible. Leveraging the power of time series data empowers healthcare providers to help more people than ever before, and more effectively. After all, saving lives is healthcare’s primary mission.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Transforming the Factory Floor with Real-Time Analytics

Manufacturers are under a lot of pressure to take advantage of all the intelligent capabilities available to them—technologies like machine vision and AI-driven video analytics. These can be crucial tools to enable everything from defect detection and prevention to worker safety. But very few manufacturers are experts in the AI space, and there are many things to master and many plates to keep in the air—not to mention future-proofing a big technology investment. New technologies need to be adaptable and interoperable.

Two people who can speak to these needs are Jonathan Weiss, Chief Revenue Officer at industrial machine vision provider Eigen Innovations; and Aji Anirudhan, Chief Sales and Marketing Officer at AI video analytics company AllGoVision Technologies. They talk about the challenges of implementing Industry 4.0, what manufacturers have to do to take advantage of the data-driven factory, and how AI continues to transform the factory floor (Video 1).

Video 1. Industry experts from AllGoVision and Eigen Innovations discuss the transformative impact of AI in manufacturing. (Source: insight.tech)

How can machine vision and AI address Industry 4.0 challenges?

Jonathan Weiss: All we do is machine vision for quality inspection, and we’re hyper-focused in industrial manufacturing. Traditional vision systems really lend themselves to detecting problems within the production footprint, and they will tell you if the product is good or bad, generally speaking. But then, how do you help people prevent defects, not just tell them they’ve produced one?

And that’s where our software is pretty unique. We don’t just leverage vision systems, cameras, and different types of sensors; we also interface directly with process data—historians, OPC UA servers, even direct connections to PLCs at the control-network level. We give people insights into the variables and metrics that actually went into making the part, as well as what went wrong in the process and what kind of variation occurred that resulted in the defect. And a lot of what we do is AI and ML based.
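Eigen’s integration is proprietary, but the kind of pairing Weiss describes, reading process variables at the moment a vision result arrives, can be sketched with the open-source python-opcua client. The endpoint URL and node IDs below are placeholders, not Eigen’s actual configuration.

```python
# Sketch of pairing a vision inspection result with process data over OPC UA,
# using the open-source python-opcua client. Endpoint and node IDs are placeholders.
from opcua import Client

def snapshot_process_data(endpoint: str = "opc.tcp://plc.example.local:4840") -> dict:
    """Read a few process variables to attach to a vision inspection record."""
    client = Client(endpoint)
    client.connect()
    try:
        temperature = client.get_node("ns=2;s=Line1.WeldTemperature").get_value()
        speed = client.get_node("ns=2;s=Line1.ConveyorSpeed").get_value()
        return {"weld_temperature": temperature, "conveyor_speed": speed}
    finally:
        client.disconnect()

# Example usage: attach process context to a defect flag produced by the vision model.
# inspection_record = {"defect": True, **snapshot_process_data()}
```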

How can video analytics address worker risk in the current industrial environment?

Aji Anirudhan: The primary thing in this industry is asking how you enhance the automation, how you bring in more machines. But people are not going to disappear from the factory floor, which basically means that there is going to be more interaction between the people and the machines there.

The UN has some data that says that companies spend $2,680 billion annually on workplace injuries and damages worldwide. This cost is a key concern for every manufacturer. Traditionally, what they have done is looked at different scenarios in which there were accidents and come up with policies to make sure those accidents don’t happen again.

But that’s not enough to bring these costs down. There could be different reasons why the accidents are happening; a scenario that is otherwise not anticipated can still create a potential accident. So you have to have a real-time mechanism in place that actually makes sure that the accident never happens in the first place.

That means that if a shop-floor employee is supposed to wear a hard hat and doesn’t, it is identified so that frontline managers can take care of it immediately—even if an accident hasn’t happened. The bottom line is: Reducing accidents means reduced insurance costs, and that adds directly to a company’s bottom line.

In the industrial-manufacturing segment, it’s a combination of different behavioral patterns of people, or different interactions between people and machines or people and vehicles. And what we see in worker-safety requirements is also different between customers: oil and gas has different requirements from what is needed in a pharmaceutical company—the equipment, the protective gear, the safety-plan requirements.

For example, we worked with a company in India where hot metal is part of the production line, and there are instances when it gets spilled. It’s very hazardous, both from a people-safety and from a plant-safety point of view. The company wants it continuously monitored and immediately reported if anything happens. 
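As a simplified illustration of the real-time mechanism Anirudhan describes (and not AllGoVision’s implementation), a continuous PPE check over an existing camera feed might be sketched like this, with the detection and notification functions as hypothetical stand-ins:

```python
# Simplified sketch of a real-time PPE check over an existing CCTV feed.
# detect_people_and_ppe() and notify_frontline_manager() are hypothetical stand-ins,
# not AllGoVision APIs.
import cv2

def monitor_hard_hats(rtsp_url, detect_people_and_ppe, notify_frontline_manager):
    cap = cv2.VideoCapture(rtsp_url)  # reuse the existing camera infrastructure
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        for person in detect_people_and_ppe(frame):
            # Raise an alert before an accident happens, not after.
            if not person.get("hard_hat", False):
                notify_frontline_manager(zone=person.get("zone"), frame=frame)
    cap.release()
```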

Are manufacturers prepared to take on the data-driven factory at this point?

Jonathan Weiss: Manufacturers as a whole are generally on board with the need to digitize, the need to automate. I do think there’s still a lot of education required on the right way to go about a large-scale initiative—where to start; how to ensure program effectiveness and success; and then how to scale that out beyond factories.

In my world, it’s helping industrials overcome the challenges of camera systems being siloed and not communicating with other enterprise systems. Also, not being able to scale those AI models across lines, factories, or even just across machines. That’s where traditional camera systems fail. And at Eigen, we’ve cracked that nut.

“By bringing vision systems and #software tools to factories, we’re enabling them to inspect parts faster” – Jonathan Weiss, @EigenInnovation via @insightdottech

But what Aji and I do is a small piece of a much larger puzzle, and the one common thread in that puzzle is data. That’s how we drive actionable insights or automation, by creating a single source of truth for all production data. Simply put, it’s a single place to put everything—quality data, process data, safety data, field-services-type data, customer data, warranty information, etc. Then you start to create bidirectional connections with various enterprise-grade applications so that ERP knows what quality is looking at, and vice versa.

It’s having that single source of truth, and then having the right strategy and architecture to implement various types of software into that single source of truth for the entire industrial enterprise.

How can manufacturers apply machine vision to factory operations?

Jonathan Weiss: You have to understand first what it is that you’re trying to solve. What is the highest-value, most frequently occurring defect that you would like to mitigate?

In the world of welding it’s often something that the human eye can’t see, and vision systems become very important. You need infrared cameras in complex assembly processes, for example, because the human eye cannot easily see all around the entire geometry of a part to determine whether there’s a defect somewhere—or at least it is incredibly challenging to find it.

It’s finding a use case that’s going to provide the most value and then working backwards from there. Then it’s all about selecting technology. I always encourage people to find technology that’s going to be adaptable and scalable, because if all goes well, it’s probably not going to be the only vision system you deploy within the footprint of your plant.

Aji Anirudhan: Most factories are now covered with CCTV cameras for compliance and other needs, and our requirements at AllGoVision easily match the output coming from them. Maybe the position of the camera should be different, or the lighting conditions. Or maybe very specific use cases require a different camera—maybe a thermal camera. But 80% of the time we can reuse existing infrastructure and ride on top of the video feed.

What’s the importance of working with partners like Intel?

Aji Anirudhan: We were one of the first video-analytics vendors to embrace the Intel OpenVINO architecture. We have been using Intel processors from the early versions all the way to Gen4 and Gen5 now, and we’ve seen a significant improvement in our performance. What Intel is doing in terms of making platforms available and suitable for running deep learning-based models is very good for us.

We are excited to see how we can use some of the new enhancements for running those deep learning algorithms—like the integrated GPUs or the new Arc GPUs—to run our algorithms more effectively. Intel is a key partner in our current strategy and also going forward.

As this AI space continues to evolve, what opportunities are still to come?

Jonathan Weiss: At Eigen, we do a variety of types of inspections. One example is inspecting machines that put specialty coatings on paper. One part of the machine grades the paper as it goes through, and you only have eight seconds to catch a two-and-a-half-millimeter buildup of that coating on the machine or it does about $150,000 worth of damage. And that can happen many, many times throughout the course of a year. It can even happen multiple times throughout the course of a shift.

And when I think about what the future holds, we have eight seconds to detect that buildup and automate an action to prevent equipment failure. We do it in about one second right now, but it’s really exciting to think about when we do it in two-thirds of a second or half a second in the future. 

So I think what’s going to happen is that technology is just going to become even more powerful, and the ways that we use it are going to become more versatile. I see the democratization of a lot of these complex tools gaining traction. And at Eigen, we build our software from the ground up with the intent of letting anybody within the production footprint, with any experience level, be able to build a vision system. That’s really important to us, and it’s really important to our customers.

Although in our world we stay hyper-focused on product quality, there’s also the same idea that Aji mentioned earlier that people aren’t going away. And I think that speaks to a common misconception of AI, that it is going to replace you; it’s going to take your job away. What we see in product quality is actually the exact opposite of that: by bringing vision systems and software tools to factories, we’re enabling them to inspect parts faster. Now they’re able to produce more, which means the company is able to hire more people to produce even more parts.

A lot of my customers say that some of the highest turnover in their plants is in the visual-inspection roles. It can be an uncomfortable job—standing on your feet staring at parts going past you with your head on a swivel for 12 hours straight. And so this may have been almost a vitamin versus a painkiller sort of need, but it’s no longer a vitamin for these businesses. We’re helping to alleviate an organizational pain point, and it’s not just a nice-to-have.

Aji Anirudhan: What is interesting is all the generative AI, and how we can utilize some of those technologies. Large vision models are basically about interpreting complex visual scenes or complex scenarios. I’ll give an example: There is an environment where vehicles go but a person is not allowed to go. And the customer says, “Yes, the worker can move on that same path if he’s pushing a trolley.” But how do you define if the person is with a trolley or without a trolley?

So we are looking at new enhancements in technology, like the LVMs, to bring out new use cases. Generative AI technology is going to help us address these use cases in the factory in a much better way in the coming years. But we still have a lot to catch up on. So we are excited about technology; we are excited about the implementation that is going on. We look forward to a much bigger business with various customers worldwide.

Related Content

To learn more about AI-powered manufacturing, listen to AI-Powered Manufacturing: Creating a Data-Driven Factory and read Machine Vision Solutions: Detect and Prevent Defects. For the latest innovations from AllGoVision and Eigen Innovations, follow them on Twitter/X at @AllGoVision and @EigenInnovation, and on LinkedIn at AllGoVision and Eigen Innovations Inc.

 

This article was edited by Erin Noble, copy editor.