Semi-Industrial PCs Power Outdoor Digital Signage Solutions

Demand for outdoor digital signage solutions is growing as businesses and brands look for innovative ways to boost visibility and impact.

Outdoor digital signage offers new opportunities to engage and communicate with customers in high foot-traffic areas—such as stadiums, golf courses, ski resorts, and even car parks and zoos—regardless of a business’s operating hours. These outdoor displays offer continuous access to product or brand information, point-of-sale services, and customer assistance throughout the year, around the clock.

But despite its many benefits, outdoor digital signage is inherently difficult to deploy successfully because of the substantial computing resources it requires.

“Outdoor computing can be a challenge due to environmental factors such as extreme temperature ranges, unpredictable weather, and the risk of vandalism and damage,” says Kenny Liu, Senior Product Manager at Giada, an AIoT and edge computing specialist that provides digital signage and embedded computing products to enterprises. “There are also special power, connectivity, and space requirements to consider.”

Due to these challenges, outdoor digital signage solutions cannot operate on traditional embedded PCs. They require semi-industrial PCs (semi-IPCs) specifically designed to perform reliably at the edge and in harsh environmental conditions. These powerful, ruggedized computing platforms enable outdoor digital signage and digital kiosks across many different industries—opening up new business opportunities for solutions developers (SDs) and systems integrators (SIs) alike.

Semi-Industrial PCs Offer Rugged Design and Performant Computing

The success of semi-industrial PCs as a platform for outdoor digital signage solutions comes from the way that they combine ruggedization with high-performance computing.

These powerful, ruggedized computing platforms enable outdoor #DigitalSignage and digital #kiosks across many different industries—opening up new business opportunities. @Giadatech via @insightdottech

Giada’s AE613 semi-industrial PC, for example, offers several design features to support outdoor use cases:

  • Wide operating temperature range makes the computer suitable for almost any geography.
  • Flexible power input voltage helps to ensure a constant power flow.
  • Rugged, fanless design offers maximum durability, reliability, and space efficiency.

But “underneath the hood,” the AE613 also provides a high-performance computing platform for outdoor digital implementations. Powered by 13th Gen Intel® Core processors, the semi-IPC supports 8K resolution for high-quality visuals and multimedia applications. It can also handle the heavier processing workloads required to offer users an interactive experience (Video 1).

Video 1. Rugged embedded PCs enable high-quality displays and interactive digital solutions in challenging outdoor settings. (Source: Giada)

Unleashing Computer Vision at the Edge

Semi-industrial PCs are meant to support an extensive variety of solutions and applications, so they are built for easy integration with other components and peripherals. This adaptability, combined with high-performance processing capabilities, can be used to implement advanced computer vision use cases at the edge.

Giada’s computing platform, for instance, comes with multiple I/O options to allow for external device connections, including high-definition cameras. The semi-IPC can run specialized computer vision algorithms and software to process and analyze visual data captured by cameras and respond accordingly based on user behavior and characteristics.

This opens the door to solutions that offer real-time analytics, biometrics, behavioral detection, and more. For example, an SD could use biometrics to securely authenticate users of a smart kiosk—or show them personalized content and advertisements. An outdoor digital display could leverage computer vision to detect customer behavior and respond to it in real time, providing an intelligent, interactive, and personalized signage system.
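As a rough illustration of this pattern—not Giada’s own software—the sketch below shows the kind of camera-driven loop a signage application might run on a semi-IPC, using OpenCV’s built-in pedestrian detector. The camera index, detection parameters, and response logic are placeholders.

```python
# Illustrative sketch only: a generic person-detection loop such as a signage
# application might run at the edge. Camera index and thresholds are placeholders.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # assume an external HD camera attached via the semi-IPC's I/O
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect people in the current frame
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0:
            # A real deployment might switch content, log dwell time, or start an interaction here.
            print(f"Detected {len(boxes)} viewer(s) near the display")
finally:
    cap.release()
```

In a production deployment, the same loop would more likely run an optimized deep learning model and feed its results into content selection, audience analytics, or kiosk interactions.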

Giada’s partnership with Intel is essential to supporting these sophisticated edge use cases. “Intel® processors deliver superior processing at the edge while also minimizing energy consumption, enabling users to handle demanding edge tasks and applications efficiently,” says Linda Liu, Vice President of Giada. “Our partnership allows us to leverage Intel’s industry-leading expertise in processor technology and innovation—and meet or exceed the expectations of our customers for reliability, power efficiency, and overall performance.”

Future Opportunities for Outdoor Digital Signage Solutions

The adaptability of semi-industrial PCs means they will find a wide range of use cases. This is especially good news for SIs and SDs, who will have opportunities to sell into many different sectors that need outdoor digital signage and kiosks: hospitality, food and beverage, retail, entertainment, transportation, smart cities, and more.

Giada expects demand for outdoor digital solutions to increase in the years ahead and is already preparing for what’s to come.

“We’re planning to release even more embedded computing products for outdoor digital signage and outdoor digital kiosks,” says Linda Liu. “Our engineers will support customers as they choose the best embedded computers for outdoor applications, and in some cases help them build custom solutions to meet their unique needs.”

Digital transformation is reshaping nearly every industry. The integration of semi-industrial PCs, edge computing, and computer vision technology will help ensure that the benefits of innovation are accessible everywhere.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Reimagining Supply Chain Management from Edge to Cloud

Today’s manufacturers have embraced digital supply chain management solutions like enterprise resource planning (ERP) software, manufacturing execution systems (MES), and warehouse management systems (WMS)—solutions that have increased efficiency and saved them time and money. But some serious challenges remain.

For one thing, the digital supply chain management technologies used by manufacturers are often difficult to integrate, resulting in fragmented solutions. Moving critical data from one system to another—and then turning that information into manufacturing plans and schedules—often relies on inefficient, time-consuming manual processes.

It’s also hard to obtain real-time data from production facilities—a crucial piece of the puzzle for managing and optimizing supply chain management. “Gathering data from the factory floor is notoriously difficult due to the high computing requirements,” says Kun Huang, CEO at Shanghai Bugong Software Co., Ltd., a software company offering a manufacturing supply chain management solution.

Recent advances in edge computing help companies like Bugong deliver comprehensive edge-to-cloud supply chain management solutions for manufacturers—and early results have been extremely promising.

The key to integrated digital #SupplyChain management is an edge-to-cloud #architecture. Shanghai Bugong Software Co., LTD via @insightdottech

Supply Chain Management from Edge to Cloud

The key to integrated digital supply chain management is an edge-to-cloud architecture. This requires both computing capacity at the edge and data pipelines to move information around—either between internal systems or to the cloud for further processing.

Bugong’s solution is a good example of how this works in practice. At the edge, industrial computers gather real-time production data and deploy intelligent production scheduling systems. These devices help manufacturers forecast capacity, optimize processes, implement lean management, and respond to unforeseen order changes or production issues immediately.

Industrial data systems like ERP, MES, and WMS are then linked together through a kind of digital pipeline—and are joined to the cloud as well. This facilitates the free flow of data, both between internal systems and to powerful cloud processing software. In addition, it eliminates cumbersome manual data transfer processes.

In the cloud, a dashboard unifies supply chain data for visibility, analytics, automation, and decision-making. This enables round-the-clock monitoring, automated alerting when an unexpected event occurs, and rapid responses to emergencies or production changes.
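To make the edge-to-cloud idea concrete in the simplest possible terms—this is a generic sketch, not Bugong’s implementation—an edge gateway might forward shop-floor readings to a cloud API like this. The endpoint URL, payload fields, and polling interval are all hypothetical.

```python
# Illustrative sketch only: an edge gateway forwarding shop-floor readings upstream.
# The endpoint, authentication, and payload fields are hypothetical placeholders.
import time
import requests

CLOUD_ENDPOINT = "https://cloud.example.com/api/telemetry"  # placeholder URL

def read_machine_status():
    # In a real deployment this would query PLCs, sensors, or an MES interface.
    return {
        "machine_id": "CNC-042",   # hypothetical machine identifier
        "cycle_time_s": 41.7,      # values would come from the factory floor
        "status": "running",
        "ts": time.time(),
    }

while True:
    payload = read_machine_status()
    resp = requests.post(CLOUD_ENDPOINT, json=payload, timeout=5)
    resp.raise_for_status()  # surface pipeline failures instead of silently dropping data
    time.sleep(5)
```

The same pattern scales from a single machine to an entire plant: each gateway pushes its readings upstream, while the cloud layer handles aggregation, analytics, and alerting.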

A system this complex, especially when it is deployed across multiple sites within a manufacturing business, will naturally require significant edge processing power as well as flexible implementation options. Intel technology was crucial in bringing Bugong’s solution to market. “Intel processors provide a reliable, versatile, and high-performance platform for edge computing,” says Kun Huang. “We found them an ideal foundation upon which to build a supply chain management solution.”

Supply Chain Management Solutions Deliver Real-World Results

The architectural details of edge-to-cloud supply chain management solutions may seem a bit abstract. But their integration, automation, and real-time response capabilities confer a wide range of practical benefits.

On the level of day-to-day operations, managers can calculate supply chain capacity and customer demand—and plan accordingly. That allows factories to commit to rational delivery times and make data-driven decisions about expedited delivery requests or on-the-fly change orders. They can also constantly monitor production status for potential problems or delays. The result is improved on-time order metrics, fewer missed opportunities, and happier customers.

For medium-term planning, comprehensive supply chain management solutions offer valuable data analytics and decision-making support. Plant managers and logistics teams can formulate optimized logistics and purchasing plans to streamline shipping and improve procurement of materials—reducing transportation costs and helping to avoid supply interruptions and inventory backlogs.

And as a long-term planning aid, these systems can be used by business decision-makers to gauge overall supply capacity, determine when production capacity needs to be scaled up, and decide if measures such as outsourcing are warranted.

Bugong has implemented its solution in several manufacturing enterprises, and the results have been impressive. Company officials estimate that the integration of different data systems alone can reduce interdepartmental communication costs by up to 50%. The technology also appears to scale well in real-world scenarios. In one large-scale deployment, Bugong set up a collaborative supply chain planning solution for a production line with more than 300 machines and 1,000 complex production processes involving more than 100,000 component parts. The system was robust enough to cope with the attendant heavy computational and data transfer requirements. Bugong reports that it was capable of processing around 5,000 orders—and all of their associated data—in less than 10 seconds.

A Blueprint for Industrywide Success

Software-based systems like the one developed by Bugong are inherently adaptable and flexible. Unlike so-called “lighthouse” smart factories, which are built primarily for demonstration purposes or to suit narrow use cases, these solutions are created to be copied and implemented by others. This opens the door for OEMs, solutions providers, and systems integrators to develop custom solutions that meet their customers’ needs—and to get new products and service offerings to market faster.

The prevalence of ERP, MES, and similar systems demonstrates that manufacturers value the efficiency enhancements that digital solutions can provide. Integrated, edge-to-cloud systems offer a whole new level of advanced management capabilities that will prove attractive to business decision-makers in the industry.

“Comprehensive digital supply chain management is a huge win for everyone in the industry,” says Kun Huang. “These solutions will help enterprises complete the digital transformation of their production processes, create new business opportunities for OEMs and SIs, and improve the efficiency and profitability of the manufacturing sector as a whole.”

 

This article was edited by Teresa Meek, Editorial Director for insight.tech.

AI-Powered Spaces That Work for Your Business: With Q-SYS

Struggling to keep your hybrid workforce engaged and productive? Enter high-impact spaces, driven by the transformative power of AI and changing the way we work and interact in both physical and digital spaces.

In this episode we dive into the exciting possibilities of high-impact spaces, exploring their potential alongside the technology, tools, and partnerships making them a reality.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guest: Q-SYS

Our guest this episode is Christopher Jaynes, Senior Vice President of Software Technologies at Q-SYS, a division of the audio, video, and control platform provider QSC. At Q-SYS, Christopher leads the company’s software engineering as well as advanced research and technologies in the AI, ML, cloud, and data space.

Podcast Topics

Christopher answers our questions about:

  • 2:19 – High-impact spaces versus traditional spaces
  • 4:34 – How AI transforms hybrid environments
  • 10:02 – Various business opportunities
  • 12:59 – Considering the human element
  • 16:23 – Necessary technology and infrastructure
  • 19:24 – Leveraging different partnerships
  • 21:10 – Future evolutions of high-impact spaces

Related Content

To learn more about high-impact spaces, read High-Impact Spaces Say “Hello!” to the Hybrid Workforce. For the latest innovations from Q-SYS, follow them on X/Twitter at @QSYS_AVC and LinkedIn at QSC.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re diving into the world of high-impact AI-powered spaces with Christopher Jaynes from Q-SYS.

But before we get started, Chris, what can you tell our listeners about yourself and what you do at Q-SYS?

Christopher Jaynes: Yeah, well, thanks for having me here. This is exciting. I can’t wait to talk about some of these topics, so it’ll be good. I’m the Senior Vice President for Software and Technology at Q-SYS. So, we’re a company that enables video and audio and control systems for your physical spaces. But in reality I’m kind of a classical computer scientist. I was trained as an undergrad in computer science, got interested in robotics, followed a path into AI fairly quickly, did my PhD at University of Massachusetts.

And I got really interested in how AI and the specialized applications that were starting to emerge around the year 2000 on our desktops could move into our physical world. So I went on, I left academics and founded a technology company called Mersive in the AV space. Once I sold that company, I started to think about how AI and some of the real massive leaps around LLMs and things were starting to impact us.

And that’s when I started having conversations with QSC, got really interested in where they sit—the intersection between the physical world and the computing world—which I think is really, really exciting. And then joined the company as their Senior Vice President. So that’s my background. It’s a circuitous path through a couple different industries, but I’m now here at QSC.

Christina Cardoza: Great. Yeah, can’t wait to learn a little bit more about that. And Q-SYS is a division of QSC, just for our listeners. So I think it’s interesting—you say in the 2000s you were really interested in this, and it’s just interesting to see how much technology has advanced, how long AI has been around, but how it’s hitting mainstream or more adoption today after a couple of decades. So I can’t wait to dig into that a little bit more.

But before we get into that, I wanted to start off the conversation. Let’s just define what we mean by high-impact spaces: what are traditional spaces, and then what do we mean by high-impact spaces?

Christopher Jaynes: Yeah. I mean, fundamentally, I find that term really interesting. I think it’s great. It’s focused on the outcome of the space, right? In the old days we’d talk about a huddle room or a large conference room or a small conference room. Those are physical descriptions of a space—not so exciting. I think what’s way more interesting is what’s the intended impact of that space? What are you designing for? And high-impact spaces, obviously, call out the goal. Let’s have real impact on what we want to do around this human-centered design thing.

So, typically what you find in the modern workplace and in your physical environments now after 2020 is a real laser focus on collaboration, on the support of hybrid work styles, deep teamwork, engagement—all these outcomes are the goal. And then you bring technology and software and design together in that space to enable certain work styles really quickly—quick and interesting.

I’ll tell you one thing that’s really, really compelling for me is that things have changed dramatically. And it’s an easy thing to understand. I am at home in my home office today, right? But I often go into the office. I don’t go into the office to replicate my home office. So, high-impact spaces have gotten a lot of attention from the industry because the intent of a user or somebody to go into their space is to find something that they can’t get at home, which is this more interesting, higher-impact, technology-enabled—things you can do there together with your colleagues like bridge a really exciting collaborative meeting with remote users in a seamless way. I can’t do that here, right?

Christina Cardoza: I think that’s a really important point, especially as people start going back to the office more, or businesses have initiatives to get some more people more back in the office or really increase that hybrid workspace. Employees may be thinking, “Well, why do I have to go into the office when I can just do everything at home?” But it’s a different environment, like you said, a different collaboration that you get. And of course, we’ve had Zoom, and we have whiteboards in the office that give us that collaboration. But it’s how is AI bringing it to the next level or really enhancing what we have today?

Christopher Jaynes: Well, let me first say I think the timing couldn’t be better for some of the breakthroughs we’ve had in AI. I’ve been part of the AI field since 1998, I think, and watching what’s happened—it’s just been super exciting. I mean, I know most of the people here at QSC are just super jazzed about where this all goes—both because it can transform your own company, but what does it do about how we all work together, how we even bring products to market? It’s super, super timely.

If you look at some of the bad news around 2020, there’s some outcomes in learning and employee engagement that we’re all now aware of, right? There’s some studies that have come out that showed: hey, that was not a good time. However, if you look back at the history of the world, whenever something bad like this happens, the outcome typically means we figure it out and improve our workplace. That happened in the cholera epidemic and some of the things that happened way back in the early days.

What’s great now is AI can be brought to bear to solve some of these, what I’d call grand challenges of your space. These are things like: how would I take a remote user and put them on equal footing, literally equal footing from an engagement perspective, from an understanding and learning perspective, from an enablement perspective—how could I put them on an equal footing with people that are together in the room working together on a whiteboard, like you mentioned, or brainstorming around a 3D architectural model. How does all of that get packaged up in a way that I can consume it as a remote user? I want it to be cinematic and engaging and cool.

So if you think about AI in that space, you have to start to think about classes of AI that take—they leverage generative models, like these large language models. But they go a little bit past that into other areas of AI that are also starting to undergo their own transformations. So, these are things like computer vision; real-time audio processing and understanding; control and actuation; so, kinematics and robotics. So what happens, for example, when you take a space and you equip it with as many sensors, vision sensors, as you want? Like 10, 15 cameras—could you then take those cameras and automatically track users that walk into the space, track the user that is the most important at the moment? Like where would a participant’s eyes likely to be tracking a user if they’re in the room versus people out? How do you crop those faces and create an egalitarian view for remote users?

So that’s some work we’re already doing now that was part of what we’re doing with Vision Suite, the Q-SYS Vision Suite. It’s all driven by some very sophisticated template and face tracking, kinesthetic understanding of the person’s pose—all this fun stuff so that we can basically give you the effect of a multi-camera director experience. Somebody is auto-directing that—the AI is doing it—but when you’re remote you can now see it in exciting ways.

Audio AI—so it’s really three pillars, right? Vision AI, audio AI, and control or closed-loop control and understanding. Audio AI obviously can tag speakers and auto-transcribe in a high-impact space—that’s already something that’s here. If you start to dream a little further you can say, what would happen if all those cameras could automatically classify the meeting state? Well, why would I want to do that? Is it a collaborative or brainstorming session? Is it a presentation-oriented meeting?

Well, it turns out maybe I change the audio parameters when it’s a presentation of one to many, versus a collaborative environment for both remote and local users, right? Change the speaker volumes, make sure that people in the back of the room during the presentation can see the text on the screen. So I autoscale, or I choose to turn on the confidence monitors at the back of that space and turn them off when no one’s there to save energy.

Those are things that people used to shy away from in the AV industry because they’re complicated and they take a lot of programming and specialized behaviors and control. You basically take a space that could have cost you $100K and drive it up to $500,000, $600,000 deployments. Not repeatable, not stepable.

We can democratize all that through AI control systems, generative models that summarize your meeting. What would happen, for example, if you walked in, Christina, you walked into a room and you were late, but the AI knew you were going to be late and auto-welcomed you at the door and said, “The meeting’s been going for 10 minutes. There’s six seats at the back of the room. It’s okay, I’ve already emailed you a summary of what’s happened so that you can get back in and be more engaged.” That’s awesome. We should all have that kind of stuff, right? And that’s where I get really excited. It’s that AI not on your desktop, not for personal productivity, but where it interacts with you in the physical world, with you in that physical space.

Christina Cardoza: Yeah, I think we’re already seeing some small examples of that in everyday life. I have an Alexa device that when I ask in the morning to play music or what the weather is, it says, “Oh, good morning, Christina.” And it shows me things on the screen that I would like more personalized than anybody else in my home. So it’s really interesting to see some of these happening already in everyday life.

We’ve been predominantly talking about the business and collaboration in office spaces. I think you started to get into a couple of different environments, because I can imagine this being used in classrooms or lecture halls, stores—other things like that. So can you talk about the other opportunities or different types of businesses that can leverage high-impact spaces outside of that business or office space? If you have any customer examples you want to highlight or use cases you can provide.

Christopher Jaynes: We really operate—I just think about it in the general sense of what your physical and experience will look like. What’s that multi-person user experience going to be when you walk into a hotel lobby? How do you know what’s on the screens? What are the lighting conditions? If you have an impaired speaker at a theme park, how do you know automatically to up the audio levels? Or if somebody’s complaining in a space that says, “This sounds too echoy in here,” how do you use AI audio processing to do echo cancellation on demand?

So that that stuff can happen in entertainment venues; it can happen in hospitality venues. I tend to think more about the educational spaces partly because of my background. But also the enterprise space as well, just because we spend so much time focusing on that and we spend a lot of time in those spaces, right?

So, I want to make one point though: when we think about the use cases, transparency of the technology is always something I’ve been interested in. How transparent can you make the technology for the user? And it’s kind of a design principle that we try to follow. If I walk into a classroom or I walk into a theme park, in both of those spaces if the technology becomes the thing I’m thinking about, it kind of ruins this experience, right?

Like if you think about a classroom where I’m a student and I’m having to ask questions about: “Where’s the link for the slides again?” or, “I can’t see what’s on monitor two because there’s a pillar in the way. Can you go back? I’m confused.” Same thing if I go to a theme park and I want to be entertained and immersed in some amazing new approach to—I’m learning about space, or I’m going through a journey somewhere, and I’m instead thinking about the control system seems slow here, right?

So you need to basically set the bar so high, which I think is fun and interesting. You set the technology bar so high that you can make it transparent and seamless. I mean, when was the last time you watched a sci-fi movie? It was kind of like sci-fi movies now have figured that out, right? All the technology seems almost ghostly and ephemeral. In the 60s it was lots of video people pushing buttons and talking and interacting with their tech because it was cool. That’s not where we want to be. It should be about the human being in those spaces using technology; it makes that experience totally seamless.

Christina Cardoza: Yeah, I absolutely agree. You can have all the greatest technology in the world, but if people can’t use it or if it’s too complicated, it almost becomes useless. And so that was one of my next questions I was going to ask, is when businesses are approaching AI how are they considering the human element in all of this? How are humans going to interact with it, and how do they make sure that it is as non-intrusive as possible?

Christopher Jaynes: Yeah. And the word “intrusive” is awesome, because that does speak to the transparency requirement. But then that does put pressure on companies thinking through their AI policy, because you want to reveal the fact that your experience in the workplace, the theme park, or the hotel are all being enabled by AI. But that should be the end of it. So you’ve got to think through carefully about setting up a clear policy; I think that’s really, really key. Not just about privacy, but also the advantages and value to the end users. So, a statement that says, “This is why we think this is valuable to you.”

So if you’re a large bank or something, and you’re rolling out AI-enabled spaces, you’ve got to be able to articulate why it is valuable to the user. A privacy statement that aligns with your culture, of course, is really key. And then also allow, like I mentioned, allowing users to know when AI is supporting them.

In my experience, though, the one thing I think that’s really interesting is users will go off the rails and get worried—and also they should be, when a company doesn’t clearly link those two things together. And I mean also the vendors. So when we build systems, we should be building systems that support the user from where the data is being collected, right? I mean the obvious example is if I use Uber, then Uber needs to know where I’m located. That’s okay. Because I want them to know that—that’s the value that I’m getting so they can drive a car there, right?

If you do the same in your spaces—like you create a value loop that allows a user as they get up in a meeting and as they walk away, their laptop is left behind. But the AI system can recognize a laptop—that’s a solved problem—and auto-email me because it knows who I am. That’s pretty cool. And say, “Chris, your laptop’s in conference room 106. There’s not another meeting for another 15 minutes. Do you want me to ticket facilities, or you want to just go back and get it?”

That kind of closed-loop AI processing is really valuable, but you need to be thinking through all those steps: identity, de-identification for GDPR—that’s super, super big. And if you have this kind of meeting concierge that’s driving you that’s an AI, you have to think through where that data lives. You’d have to be responsible about privacy policies and scrubbing it. And then if a company is compliant with international privacy standards, make that obvious, right? Make it easy to find, and articulate it cleanly to your users. And then I think adoption rates go way up.

Christina Cardoza: Yeah. We were talking about the sci-fi movies earlier, where you had all the technologies and pushing buttons, and then we have the movies about the AI where it’s taking over. And so people have a misconception of what AI or this technology is really—how it’s being implemented. So, I agree: any policies or any transparency of how it’s supposed to work, how it is working, just makes people more comfortable with it and really increases the level of adoption.

You mentioned a couple of different things that are happening with lighting, or echo cancellation, computer vision. So I’m curious what the backend of implementing all of this looks like—that technology or infrastructure that you need to set up to create high-impact spaces. Is some of this technology and infrastructure already in place? Is it new investments? Is it existing infrastructure you can utilize? What’s going on there?

Christopher Jaynes: Yeah, that’s a great question, yeah. Because I’ve probably thrown out stuff that scares people, and they’re thinking, “Oh my gosh, I need to go tear everything out and restart, building new things.” The good news is, and maybe surprisingly, this sort of wave of technology innovation is mostly focused on software, cloud technologies, edge technologies. So you’re not necessarily having to re-leverage things like sensors, actuators, cameras and audio equipment, speakers and things.

So for me it’s really about—and this is something I’ve been on the soapbox on for a long time—if you can have a set of endpoints—this is one reason I even joined QSC—endpoints, actuators, and connect those through a network—like a commodity, true network, not a specialized network, but the internet, and attach that to the cloud. That to me is the topology that enables us to be really fast moving.

So that’s probably very good news to the traditional AV user or integrator, because once you deploy those hardware endpoints, as long as they’re driven by software the lifecycle for software is much faster. A new piece of hardware comes out once every four or five years. We really can release software two, three times a year, and that has new innovation, new algorithms, new approaches to this stuff.

So if you really think about those three pillars: the endpoints—like the cameras, the sensors, all that stuff in the space—connected to an edge or control processor over the network, and then that thing talking to the cloud—that’s what you need to get on this sort of train and ride it to software future because now I can deliver software into the space.

You can use the cloud for deeper AI reasoning and problem-solving for inference-generation. Analytics—which we haven’t talked about much yet—can happen there as well. So, insights about how your users are experiencing the technology can happen there. Real-time processing happens on that edge component for low latency, echo cancellation, driving control for the pan tilts—so the cameras in the space—and then the sensors are already there and deployed. So that, to me, is those three pieces.

Christina Cardoza: And I know the company recently acquired Seervision—and insight.tech and the IoT Chat also are sponsored by Intel—so I imagine you’re leveraging a lot of partnerships and collaborations to really make some of this, like the real-time analytics, happen—those insights be able to make better decisions or to implement some of these things.

So, wanted to talk a little bit more about this: the importance of your partnership with Intel, or acquiring companies like Seervision to really advance this domain and make high-impact spaces happen.

Christopher Jaynes: Oh, that’s an awesome question. Yeah, I should mention that QSC, the Q-SYS product, the Q-SYS architecture, and even the vision behind it was to leverage commodity compute to then build software for things that people at the time when it was launched thought, “No, you can’t do that. You need to go build a specialized FPGA or something custom to do real-time audio, video, and control processing in the space.” So really the roots of Q-SYS itself are built on the power of Intel processing, really, which was at the time very new.

Now I’m a computer scientist, so for me that’s like, okay, normal. But it took a while for the industry to move out of that—the habit of building very, very custom hardware with almost no software on it. With Intel processors we’re able to build—be flexible and build AV processing. Even AI algorithms now, with some of the on-chip computing stuff that’s happening, can be leveraged with Intel.

So that’s really, really cool. It’s exciting for us for sure, and it’s a great partnership. So we try to align our roadmaps together, especially when we have opportunity to do so, so that we’re able to look ahead and then deliver the right software on those platforms.

Christina Cardoza: Looking ahead at some of this stuff, I wanted to see, because things are changing rapidly every day now—I mean, when you probably first got into this in 1998 and back in the 2000s, things that we have today were only a dream back then, and now it’s reality. And it’s not only reality, but it’s becoming smarter and more intelligent every day. So how do you think the future of high-impact spaces is going to evolve or change over the next couple of years?

Christopher Jaynes: I feel like you’re going to find that there is a new employee that follows you around and supports your day, regardless of where you happen to be as you enter and leave spaces. And those spaces will be supported by this new employee that’s really the AI concierge for those spaces. So that’s going to happen faster than most people, I think, even realize.

There’s already been an AI that’s starting to show up behind the scenes that people don’t really see, right? It’s about real-time echo canceling or sound separation—audio-scene recognition’s a great one, right? That’s already almost here. There’s some technologies and some startups that have brought that to bear using LLM technologies and multi-modal stuff that’ll make its appearance in a really big way.

And the reason I say that is it’ll inform recognition in such a powerful way that not only will cameras recognize what a room state is, but the audio scene will help inform that as well. So once you get to that you can imagine that now you can drive all kinds of really cool end-user experiences. I’ll try not to speculate too much, because some of them we’re working on and they’ll only show up in our whisper suites until we actually announce them. But imagine the ability to drive to your workplace on a Tuesday, get out of your car, and then get an alert that says, “Hey, two of your colleagues are on campus today, and one of them is going to hold the meeting on the third floor. I know you don’t like that floor because of the lighting conditions, but I’ve gone ahead and put in a support ticket, and it’s likely going to be fixed for you before you get there.”

So there’s this like, in a way you can think about the old days of your spaces as being very reactive or even ignored, right? If something doesn’t work for me or I arrive late—like my example I gave you earlier of a class—it’s very passive. There’s no “you” in that picture; it’s really about the space and the technology. What AI’s going to allow us to do is have you enter the picture and get what you need out of those spaces and really flip it so that those technologies are supporting your needs almost in real time in a closed-loop fashion.

I keep saying that “closed loop.” What I mean is, the sensing that happened and has happened—maybe it’s even patterns from the last six, seven months—will drive your experience in real time as you walk into the room or as you walk into a casino or you’re looking for your hotel space. So I think there’s a lot of thinking going into that now, and it’s going to really make our spaces far more valuable for far less—way more effective for a far less cost, really, because it’s software-driven, like I mentioned before.

Christina Cardoza: Yeah, I think that’s really exciting. I’m seeing that employee follow around a little bit in the virtual space when I log into a Zoom or a Teams meeting; the project manager always has their AI assistant already there that’s taking notes, transcribing it, and doing bullet points of the most important things. And that’s just on a virtual meeting. So I can’t wait to see how this plays out in physical spaces where you don’t have to necessarily integrate it yourself: it’s just seamless, and it’s just happening and providing so much value to you in your everyday life. So, can’t wait to see what else happens—especially from Q-SYS and QSC, how you guys are going to continue to innovate from this space.

But before we go, just want to throw it back to you one last time. Any final thoughts or key takeaways you want to leave our listeners with today?

Christopher Jaynes: Well, first let me just say thanks a lot for hosting today; it’s been fun. Those are some really good questions. I hope that you found the dialogue to be good. I guess the last thought I’d say is, don’t be shy. This is going to happen; it’s already happening. AI is going to change things, but so did the personal computer. So did mobility and the cell phone. It changed the way we interact with one another, the way we cognate even, the way we think about things, the way we collaborate. The same thing’s happening again with AI.

It’ll be transformative for sure, so have fun with it. Be cautious around the privacy and the policy stuff we talked about a little bit there. You’ve got to be aware of what’s happening, and really I think people like me, our job in this industry is to dream big at this moment and then drive solutions, make it an opportunity, move it to a positive place. So it’s going to be great. I’m excited. We are all excited here at Q-SYS to deliver this kind of value.

Christina Cardoza: Absolutely. Well, thank you again for joining us on the podcast and for the insightful conversation. Thanks to our listeners for tuning into this episode. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Image Segmentation: Exploring the Power of Segment Anything

Innovation in technology is an amazing thing, and these days it seems to be moving faster than ever. (Though never quite fast enough that we stop saying, “If only I had had this tool, then how much time and effort I would have saved!”) This is particularly the case with AI and computer vision, which are transforming operations across industries and becoming incredibly valuable to many kinds of businesses. And in the whole AI/computer vision puzzle, one crucial piece is image segmentation.

Paula Ramos, AI Evangelist at Intel, explores this rapidly changing topic with us. She discusses image-segmentation solutions past, present, and future; dives deep into the recently released SAM (Segment Anything Model) from Meta AI (Video 1); and explains how resources available from the Intel OpenVINO toolkit can make SAM even better.

Video 1. Paula Ramos, AI Evangelist at Intel, discusses recent advancements powering the future of image segmentation. (Source: insight.tech)

What is the importance of image segmentation to computer vision?

There are multiple computer vision tasks, and I think that image segmentation is the most important one. It plays a crucial role there in object detection, recognition, and analysis. Maybe the question is: Why is it so important? And the answer is very simple: Image segmentation helps to isolate individual objects from the background or from other objects. We can localize important information with image segmentation; we can create metrics around specific objects; and we can extract features that can help in understanding one specific scenario—all really, really important to computer vision.

What challenges have developers faced building image-segmentation solutions in the past?

When I was working with image segmentation in my doctoral thesis, I was working in agriculture. I faced a lot of challenges with it because there were multiple techniques for segmenting objects—thresholding, edge detection, region growing—but no one-size-fits-all approach. And depending on which technique you are using, you need to carefully define the best approach.

My work was in detecting coffee beans, and coffee beans are so similar, are so close together! Maybe there were also red colors in the background that would be a problem. So there was over-segmentation—merging objects—happening when I was running my image-segmentation algorithm. Or there was under-segmentation, and I was missing some of the fruits.

That is the challenge with data especially when it comes to image segmentation, because it is difficult to work in an environment where the light is changing, where you have different kinds of camera resolution. Basically, you are moving the camera, so you get some blurry images or you get some noise in the images. Detecting boundaries is also challenging. Another challenge for traditional image segmentation is the scalability and the efficiency. Depending on the resolution of the images or how large the data sets are, the computational cost will be higher, and that can limit the real-time application.

And in most cases, you need to have human intervention to use these traditional methods. I could have saved a lot of time in the past if I had had the newest technologies in image segmentation then.

What is the value of Meta AI’s Segment Anything Model (SAM) when it comes to these challenges?

I would have especially liked to have had the Segment Anything Model seven years ago! Basically, SAM improves the performance on complex data sets. So those problems with noise, blurry images, low contrast—those are things that are in the past, with SAM.

Another good thing SAM has is versatility and prompt-based control. Unlike traditional methods, which require specific techniques for different scenarios, SAM has this versatility that allows users to specify what they want to segment through prompts. And prompts can be points, boxes, or even a natural-language description.
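For readers who want to see what prompt-based control looks like in practice, here is a minimal sketch using Meta AI’s reference segment-anything package. The checkpoint file, image path, and point coordinates are placeholders; the official repository documents the full API.

```python
# Minimal sketch of point-prompt segmentation with Meta AI's segment-anything package.
# Checkpoint filename, image path, and coordinates are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # downloaded ViT-B weights
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("coffee_plants.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt (label 1 = foreground, 0 = background)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(f"Returned {len(masks)} candidate masks; best score: {scores.max():.2f}")
```

Passing a box instead of a point—or several points labeled foreground and background—steers the model toward a different object without any retraining.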

“Image segmentation is one of the most important #ComputerVision tasks. It plays a crucial role in object detection, recognition, and analysis,” – Paula Ramos, @intel via @insightdottech

I would love to have been able to say, in the past, “I want to see just mature coffee beans” or “I want to see just immature coffee beans,” and to have had that flexibility. That flexibility can also empower developers to handle diverse segmentation tasks. I also mentioned scalability and efficiency earlier: With SAM the information can be processed faster than with the traditional methods. So these real-time applications can be made more sustainable, and the accuracy is also higher.

For sure, there are some limitations, so we need to balance that, but for sure we are also improving the performance on those complexities.

What are the business opportunities with the Segment Anything Model?

The Segment Anything Model presents several potential business opportunities across all the different image-segmentation processes that we know at this point. For example, creating or editing content in an easy way, automatically manipulating images, or creating real-time special effects. Augmented reality and virtual reality are also fields heavily impacted by SAM, with real-time object detection enabling virtual elements in interactive experiences.

Another thing is maybe product segmentation in retail. SAM can automatically segment product images in online stores, enabling more efficient product sales. Categorization based on specific object features is another possible area. I can also see potential in robotics and automation to achieve more precise object identification and manipulation in various tasks. And autonomous vehicles, for sure. SAM also has the potential to assist medical professionals in tasks like tumor segmentation or making more accurate diagnoses—though I can see that there may be a lot of reservations around this usage.

And I don’t want to say that those businesses will be solved with SAM; it is a potential application. SAM is still under development, and we are still improving it.

How can developers overcome the limitations of SAM with OpenVINO?

I think one of the good things right now in all these AI trends is that so many models are open source, and this is also a capability that we have with SAM. OpenVINO is also open source, and developers can access this toolkit very easily. Every day we put multiple AI trends in the OpenVINO Notebooks repository—something happens in the AI field, and two or three days after that we have the notebook there. And good news for developers: We already have optimization pipelines for SAM in the OpenVINO repository.

We have a series of four notebooks there right now. The first one we have is the Segment Anything Model that we have been talking about; this is the most common one. You can compile the model and use OpenVINO directly, and also you can optimize the model using the neural network compression framework—NNCF.
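As a minimal sketch of what that first notebook covers—the model path and calibration data below are placeholders, and the notebook itself is the authoritative reference—compiling a converted SAM component with OpenVINO and optionally quantizing it with NNCF looks roughly like this:

```python
# Illustrative sketch only: compile a converted model with OpenVINO and optionally
# quantize it with NNCF. Paths and calibration inputs are placeholders; see the
# OpenVINO Notebooks repository for the full, tested SAM pipelines.
import openvino as ov
import nncf

core = ov.Core()
model = core.read_model("sam_image_encoder.xml")          # hypothetical converted IR file
compiled = core.compile_model(model, device_name="CPU")   # or "GPU", "AUTO", etc.

# Optional post-training quantization for a smaller, faster model.
calibration_images = []  # fill with a few representative preprocessed inputs
if calibration_images:
    calib_dataset = nncf.Dataset(calibration_images)
    quantized_model = nncf.quantize(model, calib_dataset)
    compiled = core.compile_model(quantized_model, device_name="CPU")
```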

Second, we have the Fast Segment Anything Model. The original SAM is a heavy transformer model that requires a lot of computational resources. We can solve the problem with quantization, for sure, but FastSAM decouples the Segment Anything task into two sequential stages using YOLOv8.

We then have EfficientSAM, a lightweight SAM model that exhibits SAM-level performance with largely reduced complexity. And the last resource, which was just recently posted in the OpenVINO repository, is GroundingDINO plus SAM, called GroundedSAM. The idea is to find the bounding boxes and at the same time segment everything in those bounding boxes.

And the really good thing is that you don’t need to have a specific machine to run these notebooks; you can run them on your laptop and see the potential to have image segmentation with some models right there.

How will OpenVINO continue to evolve as SAM and AI evolve?

I think that OpenVINO is a great tool for reducing the complexity of building deep learning applications. If you have expertise in AI, it’s a great place to learn more about AI trends and also to understand how OpenVINO can improve your day-to-day routine. But if you are a new developer, or if you are a developer but not an expert in AI, it’s a great starting point as well, because you can see the examples that we have there and follow every single cell in the Jupyter Notebooks.

So for sure we’ll continue creating more examples and more OpenVINO notebooks. We have a talented group of engineers working on that. We are also trying to create meaningful examples—proofs of concept that can be utilized day to day.

Another thing is that last December the AI PC was launched. I think that this is a great opportunity to understand capabilities that we are enhancing every day—improving the hardware that developers utilize so that they don’t need to have any specific hardware to run the latest AI trends. It is possible to run models on your laptop and also improve your performance.

I was a beginner developer myself some years ago, and I think for me it was really important to understand how AI was moving at that moment, to understand the gaps in the industry, stay one step ahead of the curve, improve, and to try to create new things.

And something else that I think is important for people to understand is that we are looking for what your need is: What are the kinds of things you want to do? We are open to contributions. Take a look at the OpenVINO Notebooks repository and see how you can contribute to it.

Related Content

To learn more about image segmentation, listen to Upleveling Image Segmentation with Segment Anything and read Segment Anything Model—Versatile by Itself and Faster by OpenVINO. For the latest innovations from Intel, follow them on X at @Intel and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

“Bank on Wheels” and Edge Computing Serve Rural Communities

Think about the last time you withdrew money from an ATM, used a line of credit, or made a deposit. Most of us take these essential financial services for granted. But millions of rural citizens worldwide don’t have a bank account, and when they do, the convenience of branch banking is not available.

“It’s not cost-effective for banks to open new branches in remote locations,” says Amit Jain, Managing Director of Bits & Bytes, a smart kiosk and digital signage specialist. “And banks in rural areas often face infrastructure problems like power cuts and network outages.”

When people can’t get to the bank, it’s not merely an inconvenience. It’s also an issue of equity when citizens can’t fully participate in the wider economy. But a new kind of edge solution addresses this problem in a surprising way: bringing the bank branch to the people.

Bits & Bytes developed a “Bank on Wheels”—powered by edge computing hardware and telecommunications networks—that is already improving access to financial services in remote communities in India and is poised to enter other markets around the world.

Rural Branch Banking in Action

A Bits & Bytes mobile branch deployment in India is an excellent example of how these solutions can help. Maharashtra state is one of India’s most populous and heavily industrialized regions. But more than 50% of the state’s population lives in rural areas, leaving many citizens without access to the services their urban counterparts enjoy.

Working with a large national bank, Bits & Bytes developed a solution that can perform many functions of a traditional branch and can be driven from location to location as needed.

The heart of the system is a #digital kiosk that runs on rugged, edge-friendly computing #hardware. Bits & Bytes via @insightdottech

The heart of the system is a digital kiosk that runs on rugged, edge-friendly computing hardware and has a built-in camera and fingerprint scanner for biometric authentication and a touchpad for user interaction. The kiosk is installed in a van that can be driven to different rural areas and parked as long as needed.

The system is connected to the bank in two ways. A data card allows it to communicate with the institution’s centralized server via standard cellular networks—and a bank employee can ride along with the driver to help new customers learn how to use the technology and answer questions.

The mobile kiosk helps customers open a new account, obtain a debit card, and perform transactions like cash withdrawals, deposits, loan applications, bill payments, and transfers.

Since its deployment, the bank on wheels has been a resounding success with customers. “Before, some of these people had to pay specialized agents a fee to travel to the nearest branch in person and perform transactions for them,” says Jain. “They were delighted to be able to do their own transactions directly for the first time.”

Ensuring Compliance and Security at the Edge

Financial systems have stringent security and compliance requirements that vary from country to country. Flexible design and edge capabilities help overcome these challenges and make it possible to deploy the solution in many different markets.

For example, the Bits & Bytes solution complies with India’s strict “know-your-customer” laws, using its secure network connection and biometric authentication capabilities. The mobile banking kiosk performs basic biometric scanning and then communicates with a bank server connected to the central government database. After authentication, a pre-filled application form is fetched and needs only to be signed on the touchpad to finish opening the account.

The elegance of the basic design—an edge IPC and modular hardware linked to a central server over a cellular network connection—means that the system immediately becomes a part of the bank’s existing network. This also means that no personal user data is stored at the edge. Everything is kept within the financial institution’s network—with all the data privacy and cybersecurity precautions this implies.

Plus, a mobile branch can easily be adapted to new regions with different data privacy and regulatory requirements. Because those countries’ regulations have already been met by the financial institution, there is no need for extensive customization to the kiosk software.

Bits & Bytes’ technology partnership with Intel is crucial to the solution. “Intel hardware provides an excellent platform for edge computing,” says Jain. “Intel also plays a vital role in product development, helping us to adapt off-the-shelf Intel technologies to bring new products to market.”

Edge Computing Powers Digital Transformation

The ability to address rural banking shortages and increase the number of customers will likely attract the attention of bank digitization departments and financial industry systems integrators (SIs).

The rise of edge computing has not only enabled systems like the Bits & Bytes mobile banking kiosks—it also has the potential to tackle tough problems in multiple industries. In the years ahead, expect to see more innovative solutions deployed at the edge, from autonomous mobile robots in agriculture to private 5G networks for mining operations.

The bank on wheels is an excellent example of the current wave of digital transformation at the edge—and AI will open up even more opportunities in the coming years.

“We’re living in an era of rapid technological advancement in nearly every sector—which is why as a company we offer products for so many different verticals,” says Jain. “Five years down the line, when AI and IoT are everywhere, all kinds of people and organizations will be able to enjoy the benefits of digital transformation.”

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Retail AI Solutions Speed Checkout—From Shops to Stadiums

In every type of commerce—from supermarkets to quick-serve restaurants and retail stores—consumers expect product choice, convenience, and timely service. But picture shopping in a crowded setting like a concert, fair, or sporting event—an experience that can be frustrating for customers and challenging for concessionaires. When people go to grab a bite to eat or purchase souvenirs, they don’t want to miss a moment of the action.

Retailers and shoppers alike find that self-service—be it at a sporting event or at a shopping mall—is a great option. For example, self-checkout kiosks and POS terminals powered by computer vision, AI-based scanners, and even voice recognition can eliminate long lines and expedite checkout. These technologies help retailers stay ahead of the competition—offering more convenience to customers while streamlining operations.

But it can be a challenge to deploy systems that communicate seamlessly with one another and with existing machines and merchandise. Retail systems integrators (SIs) that orchestrate self-checkout solutions face countless hours evaluating all the different hardware and software options to build a custom solution for each venue.

This is where experienced solutions aggregators come into play. They have tested and deployed many of the cutting-edge technologies available in today’s retail market, and have the know-how to integrate them into complete, end-to-end solutions. By collaborating with aggregators, SIs can save time, better serve their customers’ needs, and gain assurance that the advanced technologies they provide will function as intended.

Retail AI Solutions Delivered Right Out of the Box

For many retailers, automation and self-service technology can’t come soon enough. Stressed by employee shortages, demanding customers, and inflation-squeezed profit margins, they approach SIs to find ways of increasing efficiency, says David Lester, Business Development Manager at BlueStar, Inc., a global supplier of technology solutions for retailers, manufacturers, logistics companies, and other industries.

Specialized technologies have been developed to enhance efficiency and improve the customer experience across the retail spectrum. Working closely with SIs, BlueStar has assembled 30 unique “In-a-Box” solutions for retail operations ranging from quick-serve restaurants (QSRs) to malls, hotels, grocery stores, and boutiques. These ready-to-go bundled packages contain all the hardware, software, and accessories SIs need for deployment, minimizing decision-making and reducing setup time, Lester says.

“If you’re a systems integrator for a quick-serve restaurant, the last thing you want to do is source individual pieces for scanning, payment processing, inventory management, and everything else involved in a point-of-sale system. With a BlueStar In-a-Box solution, you open the box, set it on the counter, and start using it then and there,” Lester adds.

Helping Systems Integrators with AI Automation Technology

One increasingly popular retail technology—especially at drive-through QSRs—is voice-based AI, Lester says. For this use case, BlueStar partners with Sodaclick, a provider of interactive voice technology for digital ordering. “We like Sodaclick conversational voice AI because it is very good at understanding what customers want,” Lester says.

The Sodaclick conversational virtual assistant, used at drive-throughs and kiosks, uses Intel® RealSense 3D cameras to recognize approaching customers, and can be programmed to understand English, Spanish, Mandarin Chinese, and more than 100 other languages and regional accents. The system responds to customers in natural-sounding language and can offer suggestions and promotions based on demographics, time of day, or other metrics that retailers choose.
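As an illustration of the camera side of this, the short sketch below uses the pyrealsense2 package to watch the depth stream from a RealSense camera and trigger a greeting when someone steps within range. The trigger distance and greeting hook are assumptions for demonstration; this is not Sodaclick's production logic.

```python
# A minimal sketch of using an Intel RealSense depth stream to detect an
# approaching customer. Requires the pyrealsense2 package and an attached camera.
import pyrealsense2 as rs

GREETING_RANGE_M = 1.5  # assumed trigger distance

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        # Sample the distance at the center of the frame as a crude presence check.
        distance_m = depth.get_distance(320, 240)
        if 0 < distance_m < GREETING_RANGE_M:
            print(f"Customer detected at {distance_m:.2f} m: start voice greeting")
            break
finally:
    pipeline.stop()
```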

The combination of voice recognition and #ComputerVision also works well at stores with self-service #payment systems, where merchandise recognition can be tricky. @Think_BlueStar via @insightdottech

The combination of voice recognition and computer vision also works well at stores with self-service payment systems, where merchandise recognition can be tricky.

That was the case at the Fayetteville, Georgia-based fully automated grocery store Nourish + Bloom Market. When the store’s item-recognition software failed to properly account for salads and other deli items, the company asked systems integrator UST Global Inc for help. UST worked with BlueStar to upgrade the store’s checkout experience with Sodaclick Conversational Voice AI, the UST Vision Checkout system, and automated payment processors, as well as kiosks, scales, cables, and other associated hardware.

Customers can now purchase any item in the store without human assistance. UST Vision Checkout includes ceiling-mounted cameras that recognize and record the prices of packaged items as soon as they are removed from shelves and placed in a shopping cart. For salads and other deli products that must be weighed, the customer describes the item to the Sodaclick voice assistant before placing it on a scale. Coordination between the voice system and the computer vision cameras results in accurate pricing. After all items are checked, the customer simply tells the voice assistant “pay now” and completes the transaction with a cell phone. “It’s a frictionless process and a great convenience for customers,” Lester says.
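The coordination between vision and voice can be pictured with a small sketch like the one below, which totals camera-recognized packaged items plus one weighed deli item described by voice. The catalog, prices, and function names are hypothetical; this is not the UST or Sodaclick implementation.

```python
# Illustrative coordination between a vision checkout and a voice assistant for
# weighed deli items. Catalog and prices are hypothetical.
PRICE_PER_KG = {"greek salad": 12.00, "pasta salad": 9.50}      # hypothetical deli catalog
PACKAGED_PRICES = {"sparkling water": 1.75, "granola bar": 2.25}  # hypothetical shelf items


def price_cart(vision_items: list[str], voice_item: str, weight_kg: float) -> float:
    """Total a cart of camera-recognized packaged items plus one weighed item
    described to the voice assistant before it was placed on the scale."""
    total = sum(PACKAGED_PRICES[item] for item in vision_items)
    total += PRICE_PER_KG[voice_item.lower()] * weight_kg
    return round(total, 2)


# Example: two packaged items spotted by ceiling cameras, plus a salad the
# customer described as "Greek salad" weighing 0.35 kg.
print(price_cart(["sparkling water", "granola bar"], "Greek salad", 0.35))
```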

Building Tomorrow’s Retail Infrastructure

As edge AI capabilities improve, BlueStar is expanding the range of its solutions. For example, it is currently developing integrations with clothing technology company FIT:MATCH, which uses Lidar and AI to capture 3D images of a customer’s body shape and match them to a digital twin in the database. The system can then make individualized recommendations for products and sizing.

Working with Intel helps BlueStar keep up with innovative applications such as this. “Intel plays a pivotal role with us, especially for our In-a-Box solutions,” Lester says. “They’ve helped us tremendously in learning about AI solutions and deploying them as cost-effectively as possible.”

As futuristic as some of the new AI applications may sound, Lester believes they will continue to improve: “I’m seeing advancements in artificial intelligence every month. I think voice AI and digital signage will evolve to become more intuitive, improving contextual understanding and providing even more personalized experiences and better customer engagement.”

Discover more about voice-enabled solutions. Listen to our podcast: QSRs—Voice AI Will Now Take Your Order: With Sodaclick.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

This article was originally published on April 8, 2024.

Patient-Centered AI Redefines Continuum of Care

Healthcare professionals have a singular mission: provide the best possible care for patients. But from admittance to discharge and everything in between, they face countless challenges.

Persistent staff shortages, constrained resources, and tight budgets are just a few. The greatest challenge is access to essential information about a patient’s condition throughout their hospital journey, specifically the second-by-second time series waveform data generated from the biomedical devices monitoring a patient. When seconds matter, how do hospitals harness this data and make it easily accessible to their healthcare teams?

Why Time Series Data Matters

The answer to this challenge is a single, open platform that continuously collects, processes, and unifies disparate data and presents it to clinicians in real time. Take, for example, an eight-hospital system in Houston that was confronting staffing issues and limited provider coverage—especially overnight. This forced difficult decisions, like hiring more travel nurses and physicians or turning patients away. All that changed when the organization implemented the Sickbay® Clinical platform, a vendor-neutral, software-based monitoring and analytics solution, from Medical Informatics Corp. (MIC).

The platform enables flexible care models and the #development and deployment of patient-centered #AI at scale on a single, interconnected architecture. @sickbayMIC via @insightdottech

Sickbay is an FDA-cleared, software-based clinical platform that can help hospitals standardize patient monitoring. The platform enables flexible care models and the development and deployment of patient-centered AI at scale on a single, interconnected architecture. Sickbay redefines the traditional approach of storing and accessing static data in EMR systems and PACS imaging. The web-based architecture brings near real-time streaming and standardized retrospective data to care teams wherever they are, supporting a variety of workflows with the same integration. This includes embedded EMR reporting and monitoring data on PCs and mobile devices.

“Out of about 800,000 data points generated each hour for a single patient from bedside monitoring equipment, only about two dozen data points are available for clinical use,” says Craig Rusin, Chief Product & Innovation Officer and cofounder at MIC. Compounding the problem, alarms from non-networked devices such as ventilators are difficult for staff to hear or view from outside a patient’s room. And current patient monitoring doesn’t apply AI tools to the existing data to inform patient care.
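Some rough arithmetic shows the scale of that gap: 800,000 samples per hour works out to roughly 222 samples per second for a single patient. The sketch below is purely illustrative, not Sickbay's processing; it shows how a continuous waveform is typically collapsed into the handful of values that reach the chart.

```python
# Back-of-the-envelope view of the data volumes Rusin describes: hundreds of
# waveform samples per second versus a sparse charted summary. Illustrative only.
from statistics import mean

SAMPLES_PER_HOUR = 800_000
print(f"~{SAMPLES_PER_HOUR / 3600:.0f} waveform samples per second per patient")


def minute_summary(samples: list[float]) -> dict:
    """Collapse one minute of a waveform into the kind of sparse summary that
    typically reaches the chart, discarding the rest of the signal."""
    return {"min": min(samples), "max": max(samples), "mean": round(mean(samples), 1)}
```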

Measuring the Impact

Hospitals and healthcare systems using Sickbay have redefined patient monitoring and have created a new standard of flexible, data-driven care by demonstrating the ability to:

  • Rapidly add bed and staff capacity while creating flexible virtual care models that go beyond traditional tele-sitting, admit, and discharge.
  • Provide more near real-time and retrospective data to staff already on unit, on service, or on call to improve their workflows and delivery of care.
  • Create virtual nursing stations where one nurse can monitor 50+ patients on a single user interface across units and/or facilities.
  • Leverage the same infrastructure to create virtual command centers that monitor patients across the continuum of care.

No matter the method of deployment, Sickbay gives control back to healthcare teams and delivers direct benefits to the hospital. Benefits reported include reduced labor, capital, and annual maintenance costs as well as improved staff, patient, and family satisfaction. Most important, clients using Sickbay see direct impact on improvements in quality of care and outcomes, including reductions in length of stay, code blue events, ICU transfers, time on vent, time for dual sign-off, and time to treat.

Results such as these provide the pathway for other hospitals to rethink patient monitoring and realize the vision of near real-time, patient-centered AI. Healthcare leaders have proven that going back to team-based nursing by adding virtual staff can help reverse the staffing crisis. “This isn’t about taking nurses away from patients. This is about taking some of the tasks and centralizing them,” says Rusin. “There will never be enough nurses, physicians, and respiratory therapists to cover all of the demand required for the foreseeable future. We need to get bedside teams back to bedside care. Flexible, virtual care support makes that a reality.”

Changing the Economics of Care

Sickbay provides the ability to change the economics of patient monitoring and directly improve quality of care and outcomes.

The ability to integrate with different devices, regardless of function or brand, is the key. “We have created an environment that allows our healers to get access to data they have never had before and build content on top of that, in an economically viable way that has never been achieved,” Rusin says.

For healthcare providers, having the data available is game-changing, says MIC EVP of Strategic Market Engagement, Heather Hitchcock. As one doctor commented: “In a single minute, I have to process 300 data points. No machine is ever going to make a decision for me, but Sickbay helps me process that data faster so I can make the right decision and save more lives.”

From Scalable Patient Monitoring to Predictive Analytics

Sickbay’s value extends beyond near real-time patient monitoring and virtual care to long-term treatment improvements. Sickbay supports the ability to leverage the same data to develop and deploy predictive analytics to help get ahead of deterioration and risk.

Clients continuously develop new analytics on Sickbay. For example, one client integrates 32 near real-time, multimodal risk scores into its virtual care workflow. Another client created a Sickbay algorithm that analyzes data generated by two separate monitoring devices to determine ideal blood pressure levels in patients. “The particular analytic requires the blood pressure waveform from a bedside monitor and a measure of cerebral blood density from a different monitor,” says Rusin.
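As a hedged illustration of the idea, not MIC's algorithm, the sketch below pairs an arterial blood pressure stream with a second cerebral measurement, bins the samples by pressure, and reports the pressure band where the cerebral signal is most stable. The bin width, sample threshold, and stability metric are assumptions for demonstration only.

```python
# Illustrative multimodal analytic: combine two monitor streams to estimate the
# blood pressure band where a cerebral measurement is most stable. NOT MIC's
# algorithm; bin width, minimum sample count, and metric are assumptions.
from collections import defaultdict
from statistics import pstdev


def ideal_pressure_band(abp_mmhg: list[float], cerebral: list[float],
                        bin_width: int = 5) -> tuple[int, int]:
    """Return the ABP bin (low, high) where the paired cerebral signal varies least."""
    bins: dict[int, list[float]] = defaultdict(list)
    for pressure, value in zip(abp_mmhg, cerebral):
        bins[int(pressure // bin_width) * bin_width].append(value)
    # Require a minimum number of paired samples per bin before trusting it.
    candidates = {b: pstdev(v) for b, v in bins.items() if len(v) >= 30}
    best = min(candidates, key=candidates.get)
    return best, best + bin_width
```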

Saving Lives with Data

Treatment of patients across the care continuum today will lead to improved care tomorrow. To get there, reliable, specific data is the starting point. Without it, clinicians are left to make their best guesses about the body’s most urgent care needs, without the data-driven decision support they want. That approach is slow, costly, unfair to caregivers, and ultimately does not deliver the best outcome for the patient.

To truly realize a future where treatment is as specific and individual as the person it serves, healthcare must harness patient data in a way that is most impactful—specific, accurate, near real-time, vendor-agnostic, transformable, and instantly accessible. Leveraging the power of time series data empowers healthcare providers to help more people, more effectively, than ever before. After all, saving lives is healthcare’s primary mission.

 

Edited by Georganne Benesch, Editorial Director for insight.tech.

Transforming the Factory Floor with Real-Time Analytics

Manufacturers are under a lot of pressure to take advantage of all the intelligent capabilities available to them—technologies like machine vision and AI-driven video analytics. These can be crucial tools to enable everything from defect detection and prevention to worker safety. But very few manufacturers are experts in the AI space, and there are many things to master and many plates to keep in the air—not to mention future-proofing a big technology investment. These new technologies need to be adaptable and interoperable.

Two people who can speak to these needs are Jonathan Weiss, Chief Revenue Officer at industrial machine vision provider Eigen Innovations; and Aji Anirudhan, Chief Sales and Marketing Officer at AI video analytics company AllGoVision Technologies. They talk about the challenges of implementing Industry 4.0, what manufacturers have to do to take advantage of the data-driven factory, and how AI continues to transform the factory floor (Video 1).

Video 1. Industry experts from AllGoVision and Eigen Innovations discuss the transformative impact of AI in manufacturing. (Source: insight.tech)

How can machine vision and AI address Industry 4.0 challenges?

Jonathan Weiss: All we do is machine vision for quality inspection, and we’re hyper-focused in industrial manufacturing. Traditional vision systems really lend themselves to detecting problems within the production footprint, and they will tell you if the product is good or bad, generally speaking. But then, how do you help people prevent defects, not just tell them they’ve produced one?

And that’s where our software is pretty unique. We don’t just leverage vision systems and cameras and different types of sensors, we also interface directly with process data—historians, OPC UA servers, even direct connections to PLCs at the control-network level. We give people insights into the variables and metrics that actually went into making the part, as well as what went wrong in the process and what kind of variation occurred that resulted in the defect. And a lot of what we do is AI and ML based.
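For a sense of what that process-data interface can look like, the sketch below reads a few process variables from an OPC UA server using the open-source asyncua package at the moment an inspection flags a part. The server URL and node IDs are hypothetical; this illustrates the pattern Weiss describes, not Eigen's actual connectors.

```python
# Minimal sketch: pull process context from an OPC UA server when an inspection
# flags a defect. Requires the asyncua package; URL and node IDs are hypothetical.
import asyncio
from asyncua import Client

SERVER_URL = "opc.tcp://plant-gateway:4840"           # hypothetical
NODES = {
    "weld_current_a": "ns=2;s=Line1.Welder.Current",   # hypothetical node IDs
    "wire_feed_mps": "ns=2;s=Line1.Welder.WireFeed",
    "fixture_temp_c": "ns=2;s=Line1.Fixture.Temperature",
}


async def process_context() -> dict:
    """Read the process variables that went into making the part under inspection."""
    async with Client(url=SERVER_URL) as client:
        readings = {}
        for name, node_id in NODES.items():
            readings[name] = await client.get_node(node_id).read_value()
        return readings


if __name__ == "__main__":
    print(asyncio.run(process_context()))
```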

How can video analytics address worker risk in the current industrial environment?

Aji Anirudhan: The primary thing in this industry is asking how you enhance the automation, how you bring in more machines. But people are not going to disappear from the factory floor, which basically means that there is going to be more interaction between the people and the machines there.

The UN has some data that says that companies spend $2,680 billion annually on workplace injuries and damages worldwide. This cost is a key concern for every manufacturer. Traditionally, what they have done is looked at different scenarios in which there were accidents and come up with policies to make sure those accidents don’t happen again.

But that’s not enough to bring these costs down. There could be different reasons why the accidents are happening; a scenario that is otherwise not anticipated can still create a potential accident. So you have to have a real-time mechanism in place that actually makes sure that the accident never happens in the first place.

That means that if a shop-floor employee is supposed to wear a hard hat and doesn’t, it is identified so that frontline managers can take care of it immediately—even if an accident hasn’t happened. The bottom line is: Reducing accidents means reduced insurance costs, and that adds to a company’s top line/bottom line.

In the industrial-manufacturing segment, it’s a combination of different behavioral patterns of people, or different interactions between people and machines or people and vehicles. And what we see in worker-safety requirements is also different between customers: oil and gas has different requirements from what is needed in a pharmaceutical company—the equipment, the protective gear, the safety-plan requirements.

For example, we worked with a company in India where hot metal is part of the production line, and there are instances when it gets spilled. It’s very hazardous, both from a people-safety and from a plant-safety point of view. The company wants it continuously monitored and immediately reported if anything happens. 
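A hard-hat check of the kind Anirudhan describes can be sketched roughly as follows, using the OpenVINO runtime to run a detection model on camera frames and raise an alert when a worker without a hard hat is seen. The model file, input size, output layout, class index, and alert handling are placeholders, not AllGoVision's implementation.

```python
# Illustrative PPE monitoring loop with the OpenVINO runtime. Model path, class
# index, input shape, and output layout are assumptions for demonstration.
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.compile_model("ppe_detector.xml", "CPU")   # hypothetical IR model
NO_HARD_HAT = 2                                         # hypothetical class index

cap = cv2.VideoCapture(0)                               # or an existing CCTV feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and reshape the frame to the model's expected NCHW input (assumed 640x640).
    blob = cv2.resize(frame, (640, 640)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = model(blob)[model.output(0)]
    for det in detections.reshape(-1, detections.shape[-1]):
        class_id, confidence = int(det[1]), float(det[2])  # assumed SSD-style layout
        if class_id == NO_HARD_HAT and confidence > 0.6:
            print("Alert: worker without hard hat; notify frontline manager")
cap.release()
```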

Are manufacturers prepared to take on the data-driven factory at this point?

Jonathan Weiss: Manufacturers as a whole are generally on board with the need to digitize, the need to automate. I do think there’s still a lot of education required on the right way to go about a large-scale initiative—where to start; how to ensure program effectiveness and success; and then how to scale that out beyond factories.

In my world, it’s helping industrials overcome the challenges of camera systems being siloed and not communicating with other enterprise systems. Also, not being able to scale those AI models across lines, factories, or even just across machines. That’s where traditional camera systems fail. And at Eigen, we’ve cracked that nut.

“By bringing vision systems and #software tools to factories, we’re enabling them to inspect parts faster” – Jonathan Weiss, @EigenInnovation via @insightdottech

But what Aji and I do is a small piece of a much larger puzzle, and the one common thread in that puzzle is data. That’s how we drive actionable insights or automation, by creating a single source of truth for all production data. Simply put, it’s a single place to put everything—quality data, process data, safety data, field-services-type data, customer data, warranty information, etc. Then you start to create bidirectional connections with various enterprise-grade applications so that ERP knows what quality is looking at, and vice versa.

It’s having that single source of truth, and then having the right strategy and architecture to implement various types of software into that single source of truth for the entire industrial enterprise.

How can manufacturers apply machine vision to factory operations?

Jonathan Weiss: You have to understand first what it is that you’re trying to solve. What is the highest value defect that occurs the most frequently that you would like to mitigate?

In the world of welding it’s often something that the human eye can’t see, and vision systems become very important. You need infrared cameras in complex assembly processes, for example, because a human eye cannot easily see all around the entire geometry of a part to understand if there’s a defect somewhere—or it makes it incredibly challenging to find it.

It’s finding a use case that’s going to provide the most value and then working backwards from there. Then it’s all about selecting technology. I always encourage people to find technology that’s going to be adaptable and scalable, because if all goes well, it’s probably not going to be the only vision system you deploy within the footprint of your plant.

Aji Anirudhan: Most factories are now covered with CCTV cameras for compliance and other needs, and our requirements at AllGoVision are easily met by the video output they already produce. Maybe the position of the camera should be different, or the lighting conditions. Or maybe very specific use cases require a different camera—maybe a thermal camera. But 80% of the time we can reuse existing infrastructure and ride on top of the video feed.
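Riding on top of an existing feed often amounts to consuming the RTSP stream the cameras already publish, as in the short sketch below. The stream URL is hypothetical, and the analyze() hook stands in for whatever downstream analytics is attached.

```python
# A small sketch of consuming an existing CCTV RTSP feed with OpenCV. The URL is
# hypothetical; analyze() is a placeholder for any downstream analytics model.
import cv2

RTSP_URL = "rtsp://cctv-nvr.local:554/stream1"  # hypothetical existing camera feed


def analyze(frame) -> None:
    """Placeholder for the analytics that runs on each frame (PPE checks, zones, etc.)."""
    pass


cap = cv2.VideoCapture(RTSP_URL)
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # stream dropped; a production system would reconnect here
        analyze(frame)
finally:
    cap.release()
```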

What’s the importance of working with partners like Intel?

Aji Anirudhan: We were one of the first video-analytics vendors to embrace the Intel OpenVINO architecture. We have been using Intel processors from the early versions all the way to Gen4 and Gen5 now, and we’ve seen a significant improvement in our performance. What Intel is doing in terms of making platforms available and suitable for running deep learning-based models is very good for us.

We are excited to see how we can use some of the new enhancements for running those deep learning algorithms—like the integrated GPUs or the new Arc GPUs—to run our algorithms more effectively. Intel is a key partner with respect to our current strategy and also going forward.

As this AI space continues to evolve, what opportunities are still to come?

Jonathan Weiss: At Eigen, we do a variety of types of inspections. One example is inspecting machines that put specialty coatings on paper. One part of the machine grades the paper as it goes through, and you only have eight seconds to catch a two-and-a-half-millimeter buildup of that coating on the machine or it does about $150,000 worth of damage. And that can happen many, many times throughout the course of a year. It can even happen multiple times throughout the course of a shift.

And when I think about what the future holds, we have eight seconds to detect that buildup and automate an action to prevent equipment failure. We do it in about one second right now, but it’s really exciting to think about when we do it in two-thirds of a second or half a second in the future. 

So I think what’s going to happen is that technology is just going to become even more powerful, and the ways that we use it are going to become more versatile. I see the democratization of a lot of these complex tools gaining traction. And at Eigen, we build our software from the ground up with the intent of letting anybody within the production footprint, with any experience level, be able to build a vision system. That’s really important to us, and it’s really important to our customers.

Although in our world we stay hyper-focused on product quality, there’s also the same idea that Aji mentioned earlier that people aren’t going away. And I think that speaks to a common misconception of AI, that it is going to replace you; it’s going to take your job away. What we see in product quality is actually the exact opposite of that: by bringing vision systems and software tools to factories, we’re enabling them to inspect parts faster. Now they’re able to produce more, which means the company is able to hire more people to produce even more parts.

A lot of my customers say that some of the highest turnover in their plants is in the visual-inspection roles. It can be an uncomfortable job—standing on your feet staring at parts going past you with your head on a swivel for 12 hours straight. And so this may have been almost a vitamin versus a painkiller sort of need, but it’s no longer a vitamin for these businesses. We’re helping to alleviate an organizational pain point, and it’s not just a nice-to-have.

Aji Anirudhan: What is interesting is all the generative AI, and how we can utilize some of those technologies. Large vision models (LVMs) basically look at explaining complex scenes or complex scenarios. I’ll give an example: There is an environment where vehicles go but a person is not allowed to go. And the customer says, “Yes, the worker can move on that same path if he’s pushing a trolley.” But how do you define if the person is with a trolley or without a trolley?

So we are looking at new enhancements in technology, like the LVMs, to bring out new use cases. Generative AI technology is going to help us address these use cases in the factory in a much better way in the coming years. But we still have a lot to catch up on. So we are excited about technology; we are excited about the implementation that is going on. We look forward to a much bigger business with various customers worldwide.

Related Content

To learn more about AI-powered manufacturing, listen to AI-Powered Manufacturing: Creating a Data-Driven Factory and read Machine Vision Solutions: Detect and Prevent Defects. For the latest innovations from AllGoVision and Eigen Innovations, follow them on Twitter/X at @AllGoVision and @EigenInnovation, and on LinkedIn at AllGoVision and Eigen Innovations Inc.

 

This article was edited by Erin Noble, copy editor.

Secure and Simple to Deploy: Private 5G Network-in-a-Box

For a while, 5G was mainly a consumer must-have. After all, a 5G-enabled smartphone enables faster streaming and improves overall performance, something practically everyone wants. The appeal of 5G spread to the enterprise and public sector with the rise of sensor-driven data analytics. When every device is an edge node transmitting data, reliably accessing the insights derived from that data with low latency becomes a priority for B2B sectors.

And given the high stakes riding on reliable and always-available data, more public sector organizations and private enterprises are securing their business processes with private 5G networks, says Yazz Krdzalic, VP of Marketing at Trenton Systems, a provider of resilient, high-performance computing solutions.

Moving up from 4G to 5G was about “adding three swim lanes: ultra-reliable, low-latency data transmission; control and management of security; and the ability to accommodate hundreds of thousands of interconnected devices communicating simultaneously. Now you can take care of your biggest pain points with one technology,” Krdzalic says.

Private #5G shifts the public sector and private enterprise off the public #cloud to a dedicated secure #network. @TrentonSystems via @insightdottech

Private 5G Networks Boost Data Security

But despite its many advantages, 5G has one weakness: It depends on public infrastructure.

When natural disasters strike, communication infrastructure, including cell towers, can be out of commission. Satellite communications are a shaky backup when first responders need immediate on-the-ground information transmitted securely. It’s wiser to lean on a private 5G network in such instances.

“You put up your own antenna, your radio units, and you’re up and running. You’re also able to work in disconnected mode, which means you’re not tethered to the mothership,” Krdzalic says. Groups can share information with one another, and when they get within range of a working cell tower or satellite communications, they can relay data.

Private 5G shifts the public sector and private enterprise off the public cloud to a dedicated secure network. “You take the value-add of 5G and add it to your own private bubble,” Krdzalic says.

The possibility of a cyber-breach increases with the number of data-transmitting nodes, so the ability to apply more security policies makes private 5G an especially attractive proposition for today’s business operations. A “private bubble” fortifies security, a key element for both public and private sectors.

The Private 5G Network Solution

Recognizing that organizations looking to deploy private 5G might want to avoid assembling the components themselves, Trenton Systems developed the Integrated Edge Solution with Private 5G (IES.5G). The “network in a box” unites all the components—rugged hardware, enhanced processors, software, and security—into one unit. “Instead of a million moving pieces, we spent a lot of time with our partners figuring out how to develop an easy button for private 5G deployments,” Krdzalic says.

Intel and ZScaler, as well as RAN and 5G Core software vendors, are among the partners who bring their strengths to the product. Trenton Systems provides edge computing platforms or rugged servers designed to work under extreme conditions. ZScaler delivers a zero-trust cloud-based cybersecurity platform, while the various RAN and 5G Core vendors’ software drives the unit’s connectivity.

The underlying architecture—from the CPUs to the accelerators to the adapters on the system—is provided and powered by Intel. The product can run on Intel® Xeon® Scalable Processors. Organizations on 3rd Gen Intel® Xeon® Scalable processors can get the Intel® vRAN Accelerator ACC100 Adapter for high-capacity 4G and 5G vRAN deployment or the Intel® QuickAssist Technology (Intel® QAT) Adapter. The system includes Intel® FlexRAN, which enables control of the underlying RAN architecture and is “the quality foundational piece which the RAN and the core sit on top of,” Krdzalic says. “The motherboards themselves may come equipped with Ethernet adapters or an Intel E810 NIC card, which is another thing we are incorporating into this solution to make it easier to test the IES.5G in the lab and field-deployed environments,” he adds.

Computing Efficiencies and Private 5G Use Cases

Scalability and the ability to slice and dice bandwidth depending on needs are additional advantages of the IES.5G, Krdzalic says. IT can scale the server-in-a-box units up and down and can bulk up on just computing power if that’s what’s needed.

The IES.5G solution also enables Virtual Network Functions (VNF) and Cloud Network Functions (CNF), ways of using software to deliver network services instead of relying exclusively on hardware. These capabilities enable IT to slice network bandwidth and distribute it depending on needs. “You’re able to finely articulate how you would like to utilize your full bandwidth; you don’t have to assume it’s just an on-off switch,” Krdzalic says. First responders, for example, can access the best available bandwidth at all times. Because not everyone always needs maximum capacity, the infrastructure is also not running at maximum at all times but instead consumes only as much energy as absolutely necessary.
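Conceptually, slicing means treating the network's capacity as named allocations rather than a single on-off pipe. The sketch below is a rough illustration under assumed slice names, shares, and total capacity; it is not the IES.5G management interface.

```python
# Conceptual sketch of bandwidth slicing: capacity divided into named guaranteed
# shares instead of a single pipe. Slice names, shares, and capacity are assumed.
TOTAL_MBPS = 1000  # assumed usable capacity of the private network

slices = {
    "first-responders": 0.40,   # always gets the largest guaranteed share
    "video-backhaul":   0.35,
    "iot-telemetry":    0.15,
    "best-effort":      0.10,
}


def slice_budget(name: str) -> float:
    """Guaranteed bandwidth for a slice, in Mbps."""
    return TOTAL_MBPS * slices[name]


assert abs(sum(slices.values()) - 1.0) < 1e-9  # shares must account for full capacity
print({name: slice_budget(name) for name in slices})
```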

In addition to the public sector, use cases for private 5G networks run the gamut across manufacturing and healthcare settings. The future is all about ubiquitous and continuous connectivity, and it starts with private 5G.

After all, as Krdzalic says, “private 5G is like saying ‘I have my own data center, I have my own ISP, and I have my own equipment and everything I need to ensure that my team and I have connectivity and compute power right where we need it, when we need it.’”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Upleveling Image Segmentation with Segment Anything

Across all industries, businesses actively adopt computer vision to improve operations, elevate user experiences, and optimize overall efficiency. Image segmentation stands out as a key approach, enabling various applications such as recognition, localization, semantic understanding, augmented reality, and medical imaging. To make it easier to develop these types of applications, Meta AI released the Segment Anything Model (SAM)—a promptable model for identifying and segmenting any object in an image without task-specific training.
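For developers who want to try the model before listening, here is a minimal sketch of prompt-based segmentation with Meta AI's segment-anything package, assuming a downloaded ViT-B checkpoint and an RGB image loaded as a NumPy array. The file names are placeholders; the OpenVINO notebooks discussed in the podcast show how to convert and optimize the same model for Intel hardware.

```python
# Minimal prompt-based segmentation with the segment-anything package. Assumes the
# ViT-B checkpoint has been downloaded and an HxWx3 uint8 RGB image is available;
# the .npy file below is a placeholder for any image source.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # local checkpoint
predictor = SamPredictor(sam)

image = np.load("frame_rgb.npy")  # placeholder for any HxWx3 uint8 RGB image
predictor.set_image(image)

# Prompt with a single foreground point; SAM returns candidate masks and scores.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(f"best mask covers {masks[int(scores.argmax())].sum()} pixels")
```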

In this podcast, we look at the evolution of image segmentation, what the launch of Meta AI’s Segment Anything model means to the computer vision community, and how developers can leverage OpenVINO to optimize for performance.

Listen Here


Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Intel

Our guest this episode is Paula Ramos, AI Evangelist at Intel. Paula has worked in the computer vision field since the early 2000s. At Intel, Paula works to build and empower developer communities with Intel® AI Inference Software.

Podcast Topics

Paula answers our questions about:

  • (1:28) The importance of image segmentation to computer vision
  • (2:54) Traditional challenges building image segmentation solutions
  • (5:50) What value the Segment Anything Model (SAM) brings
  • (8:29) Business opportunities for image segmentation and SAMs
  • (11:43) The power of OpenVINO to image segmentation
  • (16:36) The future of OpenVINO and Segment Anything

Related Content

To learn more about image segmentation, read Segment Anything Model — Versatile by Itself and Faster by OpenVINO. For the latest innovations from Intel, follow them on X at @IntelAI and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest technology advancements and trends. Today we’re talking about image segmentation with our good friend Paula Ramos from Intel. Paula, welcome back to the show.

Paula Ramos: Thank you, Christina. Thank you for having me here. I’m so excited to have this conversation with you.

Christina Cardoza: Yeah, absolutely. For those listeners who haven’t listened to any of your past episodes, like the one we recently did on artificial intelligence and looking at the next generation of solutions and technologies to come there, what can you tell us and them about what you do at Intel and what you’re working on these days?

Paula Ramos: Yes, this is so exciting. So, a little bit of my background, I have a PhD in computer vision and I’m working as an AI evangelist at Intel, bridging the gap between technology and users—so, users as developers, business leaders—so, people that want to have a specific journey in AI. I’m trying to make this technology closer to them so they kind of start the AI journey easily.

Christina Cardoza: Yeah, very exciting stuff. AI and computer vision: they are becoming hugely important across all different industries, transforming operations and everything from frontend to backend. So that’s where I wanted to start this conversation today, looking at the importance of image segmentation in computer vision and the opportunities that businesses get with this field.

Paula Ramos: That is really good, because I think that image segmentation is the most important task in computer vision. I would say there are multiple computer vision tasks—classification, object detection—but I think that image segmentation is playing a crucial role in computer vision because we can create object detection, recognition, and analysis there.

And maybe the question is, why this is so important? And the answer’s very simple: image segmentation helps us to isolate individual objects from the background or from other objects. We can localize important information, we can create some metrics around specific objects, we can extract some features that can also help us to understand one specific scenario. And this is really, really important in the computer vision land.

Christina Cardoza: Yeah, that’s great to hear Paula. And of course we have different technologies and advancements coming together that are making it easier for developers to add image segmentation to their solutions, and implement this and develop for this a little bit more seamlessly.

But before we get into some of the new technologies that are helping out, I’m curious: what have been the challenges previously with traditional image segmentation that developers face when building and deploying these types of solutions?

Paula Ramos: That is a perfect question for my background, because in the past I was working in agriculture, I already mentioned that. And I was working on my doctoral thesis with image segmentation and different techniques, and I was facing a lot of challenges with that, because we have multiple techniques to segment objects but there is no one-size-fits-all approach.

So we can see thresholding, edge detection, or region growing, but depending on what is the technique that we are using, we need to carefully define our best approach. I was working detecting coffee beans, and coffee beans are so similar, are so close. Maybe I have red colors around and I could see the over segmentation, merging objects, when I was running my image-segmentation algorithm. Or under segmentation: I was missing some fruits.

And that is a challenge related with data, because it is difficult to work in that environment when you are changing the light, when you have different kinds of camera resolution—basically you are moving the camera so you can see some blurry images or you can see noise in the images. And detecting the boundaries is also challenging.

And this is also part of the thing that we need to put in the map, or the challenges, for traditional image segmentation, is the scalability and the efficiency. Because depending on the resolution of the images or how large are the data sets, we can see that the computational cost will be higher, and that can limit the real-time application.

And this is a balance that you need to have—what is the image resolution that I want to put here to have a real-time application? And for sure if you reduce the resolution you limit the accuracy the most. And in most of the cases you need to have human intervention for these traditional methods. And I think that right now with the newest technologies in image segmentation I could have saved a lot of time in the past.

Christina Cardoza: Yeah, absolutely, I’m sure. It’s great to see these advancements happening, especially to help you further your career. But it also hurts a little bit that this could have made your life a lot easier years ago if you had this, but things are always moving so fast in this space, and that brings me to my next question.

I know Meta AI, they recently released this Segment Anything Model for being able to identify and segment those objects without prior training, making things a lot easier—like we were just talking about. So I’m curious what you think about this model and the value that it’s bringing to developers in this space.

Paula Ramos: Yes, I think that I would have liked to have Segment Anything Model seven years ago. Because all the problems that I was facing, I could have improved that with the model that Meta released last year, Segment Anything Model. So, basically improve the performance on the complexity so we can demonstrate with SAM that we have a strong performance on complex data sets. So that problem with noise, blurry images, low contrast is something that is in the past, with SAM. For sure, there are some limitations, and we have nothing in the image because the image is totally blurry—it is impossible for SAM to do that. So we need also to balance the limitations, but for sure we are improving the performance on those complexities.

Another good thing SAM has is the versatility and the prompt-based control. Unlike traditional methods that require specific techniques for different scenarios as I mentioned before, SAM has this versatility and it allows users to specify what they want to segment through prompts. And prompts could be point, boxes, or even natural language description.

I would love to say in the past, “Hey, I want to see just mature coffee beans” or “immature coffee beans,” and have this flexibility. And that could also empower developers to handle diverse segmentation tasks. I think that also I was talking about scalability and efficiency; so with SAM we can process the information faster than the traditional methods so we can make more sustainable these real-time applications, and the accuracy is also higher.

Christina Cardoza:  You mentioned the coffee bean example. Obviously coffee beans are very important—near and dear to probably a lot of people’s hearts listening to this—and it’s great to see how this model is coming in and helping being able to identify the coffee beans better and simpler so that we get high-quality coffee in the end result.

I’m curious what other types of business opportunities does this Segment Anything Model present that developers are able to—with still some limitations—but be able to streamline their development and their image segmentation?

Paula Ramos: I think that Segment Anything Model, from my perspective, presents several potential business opportunities, across all different image segmentation processes that we know until now. For example, we can create content or edit content in an easy way, trying to automatically manipulate the images, remove some objects, or create some real-time special effects. Augmented reality or virtual reality is also one of the fields that is heavily impacted with SAM, with the real-time object detection. And also for this augmented reality, enabling the interactive experience with the virtual elements—this is one of the things.

So another thing is maybe—I’m thinking aloud—product segmentation, for example in retail. SAM can automatically segment product images in online stores, enabling more efficient product sales. Categorization based on the specific object features is also one of the topics. I can see also peak potential in robotics and automation. So, improve the object-detection part—how we can equip robots to use SAM to achieve a more precise object identification and manipulation in various tasks and autonomous vehicles, for sure. This is also something that I have in mind.

On the top of my mind—I also see that there is a lot of research around that—is how medical images and healthcare can improve the medical diagnosis because SAM has the potential to assist medical professionals in tasks like tumor segmentation, or leading accurate diagnosis. But for sure, and those are examples, I don’t want to say that those businesses will be solved with SAM; we have it as a potential application. I think that SAM is still under development, and we are still improving Segment Anything Model.

Christina Cardoza: Yeah, that’s a great point. And I love the examples and use cases you provided, because image segmentation—it just highlights that it is so important and, really, it’s one piece of a whole bigger puzzle and solution and making things really valuable for businesses. It’s a very important piece, and it is important to make some of these other capabilities or features happen for them within their industries and solutions. So, very interesting to hear all these different use cases and upcoming use cases.

You mentioned the limitations and things are still being worked on. I’m curious, because obviously you work at Intel and I know OpenVINO is an AI toolkit that you guys use to help performance and optimization. So how can developers make the best use of SAM and really be able to overcome those limitations with other resources out there, especially OpenVINO?

Paula Ramos: I think that one of the good things that we have right now in these AI trends is so many models are open source, and this is also the capability that we have with SAM, and also OpenVINO is open source, and developers can access this toolkit easily. And we already have good news for developers because we have SAM integrated with OpenVINO. What that means? So, we already have optimization pipelines for SAM in the OpenVINO Notebooks repository, and I think that this is great.

And so developers need also to know about this—maybe they already know about that—is that we have this repository where we are putting multiple AI trends every day. And this is great, because something happened in the AI field and two or three days after that we have the notebook there. So right now we have a set or series of SAM examples in the OpenVINO Notebooks repository—I think that you will have access to the URL and you can take a look at these and try this by your own.

The good thing is you don’t need to have a specific machine to run these notebooks; you can also run this on your laptop, and you can see the potential to have image segmentation with some models in your laptop. Basically we have a series of four notebooks right now: we have the Segment Anything Model, this is the most common. And a good resource that we have from OpenVINO is that you can compile the model and use OpenVINO directly, and also you can optimize the model using the neural network compression framework, NNCF. And this is a great example of how you can optimize your process also in constrained hardware resources because you can quantize these models as well.

Also, we have three more resources. We have Fast Segment Anything Model. So, basically we are taking this from the open source community; we are using that model that is addressing the limitation of Segment Anything Model. Segment Anything Model has a heavy transformer model, and that model substantially requires a lot of computational resources. We can solve the problem with the quantization, for sure, but Fast SAM decouples the Segment Anything task in two sequential stages. So it’s using YOLOv8, the segmentation part, to produce the segmentation part of the model.

We have also Efficient SAM in OpenVINO. Basically what we are doing here is we are using a lightweight SAM model that exhibits the SAM performance with largely reduced complexity. And the last resource that we have is something that was just posted in the repository, is Grounding DINO plus SAM, that is called Grounded SAM. And the idea is find the bounding boxes and at the same time segment everything in those bounding boxes.

And I invite the developers to visit that repository and visit those notebooks to understand better about, what is SAM? How SAM is working? And the most important thing also: how you can utilize OpenVINO to improve the performance in your own hardware.

Christina Cardoza: Of course, and we’ll make sure that we provide those links for anybody listening in the description so that you have them and you can learn more about them.

One thing that I love that you mentioned is obviously this space is constantly changing; so, something can change and you guys will address it and have it a couple days later. So it’s great, because developers can really stay on top of the latest trends and leverage the latest technologies.

We mentioned SAM: there’s still a lot of work to do obviously, and this is just the beginning and obviously we have all of these different resources coming from OpenVINO and from the Intel side. So I’m wondering if there’s anything else that you think of that developers can look forward to, or anything that you can mention of what we can expect with SAM moving forward, and how OpenVINO will continue to evolve along with the model.

Paula Ramos: So, for sure we’ll continue creating more examples and more notebooks for you. We have a talented group of engineers also working on that. We are also trying to create meaningful examples for you, proofs of concept that you can utilize also in your day by day. I think that OpenVINO is a great tool in that you can reduce the complexity of deep learning into applications. So if you have expertise in AI it’s also a great place to learn more about these AI trends and also understand how OpenVINO can improve your day-by-day routine. But if you are a new developer, or you are a developer but you are no expert in AI, it’s a great starting point as well, because you can see the examples that we have there and you can follow up every single cell in the Jupyter Notebooks.

I think that something that is important is that people need to understand, developers need to understand, is that we are working in our community: we are also looking for what is your need, what kind of things you want to do. Also, we are open to contributions. You can take a look at the OpenVINO Notebooks repository and see how you can contribute to the repository.

The second thing is also—and Christina you know about that—so, last December the AI PC was launched, and I think that this is a great opportunity to understand the capabilities that we can increase day by day, improving also the hardware that developers can utilize so we don’t need to have any specific hardware to run the latest AI trends. It is possible to run on your laptop and improve also your performance and your activities day by day.

So basically I just want to invite the people to stay tuned with all the things that Intel has and that Intel is having for you in the AI land, and this is really good for all of us. I was a start-up developer some years ago—I don’t want to say how long was that—but I think that for me it was really important to understand how AI was moving at that moment and understand the gaps in the industry and stay one step ahead of that, to show improvements and try to create new things—in that case at that moment for farmers, then also for developers.

Christina Cardoza: And I think that open source community that you mentioned, it’s so powerful, because developers get a chance to not only learn from other developers, ask questions, but they can also help contribute to projects and features and capabilities and really immerse themselves into this AI community. So I can’t wait to see what other use cases and what other features come out of SAMs and OpenVINO.

This has been a great conversation, Paula. Before we go, I just want to ask if there’s anything else you would want to add.

Paula Ramos: Yes, Christina. Just I wanted to say something before saying “Bye.” I wanted to say that we have a great opportunity—to developers, maybe if they are interested to participate in that. You can also contribute through the Google Summer of Code program that we have. We have around 20 proposals, and I think that also we can share the link with you, Christina, and developers can see how they can contribute and what kind of amazing projects we have around Intel, around OpenVINO.

And thank you for having me here! It’s always a pleasure to talk to you.

Christina Cardoza: Absolutely, thank you. Thank you for joining and for the insightful conversation. I can’t wait to see developers—hopefully they’re inspired from this conversation—they start working with SAMs and optimizing it with OpenVINO. So, appreciate all of the insights, again. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.