How Technology Is Reshaping the Future of Education

Related Content

To learn more about the future of EdTech, read Software-Defined AV Facilitates Hybrid Learning and The Future of Work Is Hybrid, Digital, and More Personal.

Transcript

Corporate Participants

Christina Cardoza
insight.tech – Associate Editorial Director

John Hulen
Crestron – Director of Channel Marketing for Education

Joe Jackson
QSC – Senior Manager of Education and Government Markets, Q-SYS

Presentation

(On screen: intro slide introducing the webinar topic and panelists)

Christina Cardoza: Hello, and welcome to the webinar on how technology is reshaping the future of education. I’m your moderator, Christina Cardoza, Associate Editorial Director of insight.tech, and here to talk more about this topic we have John Hulen from Crestron, and Joe Jackson from Q-SYS. So, before we jump into the conversation, let’s get to know our guests a bit more.

John, I’ll start with you. What can you tell us about Crestron and your role there?

John Hulen: Well, thanks for having me first, Christina. It’s great to be here. I have worked for Crestron for a little over 10 years. In my first role for nearly eight years, I was working with colleges, universities, even some K-12 districts in the Midwestern United States, and my current role is now on the messaging side for our whole education vertical, which includes the United States, Canada, and then globally. So, we have a team that works directly with colleges, universities, and schools to help them understand and implement Crestron technology.

Christina Cardoza: Great, well, I can’t wait to hear more about that. But, Joe, I’ll turn it over to you now. Welcome to the webinar.

Joe Jackson: Hi, I’m Joe Jackson. I’m the Senior Manager for Education and Government Markets for the Q-SYS division of QSC. I’ve been here about four years, but I’ve been in the industry a little over two decades, and I started out at Southern Methodist University as a manager of technology and implemented all this stuff. So, it was only a natural progression that I would go to a manufacturer and help the rest of the education environment. So, good to be here and thank you for having us today.

Christina Cardoza: Of course, you both have some very strong backgrounds to help us navigate through this topic today. So, before we jump into the conversation, let’s take a quick look at our agenda.

(On screen: slide outlining the webinar’s agenda)

Today, we’re going to talk about the state of education, how technology is playing a larger role, the benefits of EdTech for both teachers and students, different use cases you may not have expected EdTech to be applied to, as well as what we can expect from all of this in the future. So, let’s get started.

(On screen: slide on The State of Education with image of hands raised)

Here at insight.tech we’ve seen, over the last few years, that the education sector has massively transformed. Schools have been under immense pressure to adopt new technologies that support hybrid and remote learning, and these changes are bringing challenges but also new opportunities in the way we teach and learn.

So, John, I’d love to kick off this topic with you, looking at where we are today, how those changes have impacted the educational landscape.

John Hulen: Well, that’s a huge question. I would say that the instructional technology landscape has changed dramatically over the last few years. Whether it’s, when you go back a little ways, the proliferation of personal devices – laptops, tablets, phones, everybody’s got them and is carrying them around – or some newer teaching methodologies, like active learning spaces and flipped classrooms, the idea that you hear the lecture ahead of time and then get in groups when you’re in class and go through the material. And then online education and remote learning were starting to emerge pre-COVID, but then COVID happened, and to me, that was really an incredible catalyst to push these technologies forward. So, I’m not saying everything started with COVID. The truth is it was way before that, but COVID really acted as a catalyst, and so we’re looking at things now like hybrid learning, blended learning, and HyFlex learning – and if you want, we can go into those types a little bit – but all kinds of new learning methods that require the technology to be implemented, and implemented well.

Christina Cardoza: (On screen: slide on The Rise of EdTech with an image of young student taking a virtual class on laptop)

Absolutely, and I agree this all was happening before COVID, but COVID forced everyone to adopt these technologies very quickly, and now that we’ve had a chance to sit back and see how they’ve been working, we can be a little bit more thoughtful and purposeful about how we use these technologies. You mentioned a couple of them at the beginning – laptops, tablets. So, Joe, I’m wondering if you can expand on the type of technology we’re seeing in the classroom today, and how that’s improving education?

Joe Jackson: Sure, yes. I just want to expand on what John was saying about how COVID sped things up a bit. I mean, Zoom has been around for the better part of a decade. I remember installing it on my laptop when I was doing a bunch of H.323 stuff, a lot of distance learning with appliances and purpose-built things just for that specific discipline, but now it’s ubiquitous. Now you have cameras in the classroom. Now you have the ability to monitor things like a true IT-focused business would, and then you can offer it to communities that have never… Maybe I can’t get to a campus, maybe I don’t have the resources, but I have an internet connection and a laptop, and now I can take classes online. So, I think it’s really broadened our approach to education and made it more inclusive, to be honest with you. EdTech is always going to push the envelope. I love giving technicians the ability to do crazy things with our stuff. Building controls is one of those things that I think we’ll get into later, maybe, but the idea of EdTech being something that’s ubiquitous is really cool, and really why I love being in this industry. It’s quite awesome to see us push the envelope and see where we can go next.

Christina Cardoza: So, it’s been quite a few years since I’ve actually been in the classroom myself. I remember growing up and learning in school. We would have the projector come in, and that would be projected on to a whiteboard, or we would have a TV rolled in for movie day, and it took some time for the teachers to set this up and get it working properly. So, I’m wondering what the state of adoption has been, and how schools and teachers and students are getting acclimated with this new technology. John, I’ll turn that one over to you.

John Hulen: You know, it’s been an incredible transition and implementation of new technologies that we’ve seen recently, but I will say we see a whole spectrum. We see schools that still use educational technology on quite a limited basis. Maybe they just put a projector in the classroom. Like you said, PCs, laptops, and tablets get plugged in, along with document cameras, and they’re just used as visual aids, so to speak, for the course material.

At the other end of the spectrum, some of the colleges and universities that we’ve seen are really implementing technology to push the cutting edge. There’s a school in Ohio that uses virtual and augmented reality in its medical school. There’s a technical school in Pennsylvania that is using both Crestron and Intel technology in its robotics program. We’ve seen an R1 university – a top-level research university in Southern California – use our Virtual Control to be able to touch systems all around the campus. So, it still varies whether a school is really implementing the technology or waiting to see what happens.

Christina Cardoza: Absolutely, and some of those advanced technologies, you mentioned augmented reality, virtual reality, and even when it comes down to the laptops, you bring it home, sometimes the students are figuring out how to work on their own, or the teacher’s own, so they’re not necessarily in the classroom, learning all this stuff. So, Joe, I’m wondering what sort of support or training is available when schools adapt this technology, and how you get the staff and the students up to speed on it.

Joe Jackson: Well, we have a wonderful online training course as well as in-person training. So, when someone’s investing heavily in the Q-SYS ecosystem, we actually invest heavily in them, and we will bring training on campus. I find that in-person training is probably better to do at least once or twice a year with folks, just so you can familiarize yourself with the on-campus tech, because there’s only so much you can see on a video. However, for the continuing education part of it, we’ve developed three- and four-hour blocks, so someone can log in on a Friday, let’s say, and if I’m your boss, I’m saying that’s part of your job. Once a week, twice a week maybe, sit there and log in and learn something new.

It’s also fun. We have some really cool trainers, and when we go out in person, some people scoff at that, and they’re like, hey, where are Nate and Patrick? I know they make it fun, but there are a lot of other smart people at our company as well, beyond the stars, Nate and Patrick, and they do a wonderful job of delivery. We also have a student – actually, one of my technicians who got hired out of a school that’s local to me – who is going back in on Halloween to teach a three-hour course for an instructor who has a sound technician course, and one of the things that came up was Q-SYS, so we’re diving in and doing instruction at the universities themselves.

So, the online stuff is really cool. People love videos. If you just want to learn how to install a camera, it takes three minutes, watch a video, but if you really want to get deep, you want to go into UI creation, you want to go into coding in Lua, we have that as well, and we do a lot of one-on-one sessions with some of our top tier clients just to get them familiar with our product. So, training is multifaceted, it’s out there, and just contact us, and we’ll help you.

Christina Cardoza: (On screen: slide on Education for All with an image of a teacher teaching a class of students in desks)

And we touched a little bit on bringing educational resources to areas you may not be able to reach in person – having more access to resources and experts around the world – and even areas that the education landscape’s transformation hasn’t fully reached are now able to benefit a lot more by getting access to all of these different tools. So, I want to look at the benefits of EdTech from both a student’s and a teacher’s perspective. John, if you want to take that one.

John Hulen: Absolutely. I think from the student’s perspective, what we’re really talking about is different ways of learning and really ingesting material to commit it to memory. So, there are students – like me, I’m a visual learner – for whom the technology really helps reinforce that desire and preference to learn through visual material. But the truth is, with audio amplification being implemented as well, the technology can do everything from helping students better hear a soft-spoken professor or instructor to letting them go back and listen to the material again, because they’re a little bit more auditory and want to learn by hearing the information several times in a row. So, now you have education technology that’s recording sessions and amplifying audio, as well as enabling active learning. We’re seeing technology implemented where students help teach other students the course material – they get back together in the classroom and use the education technology to collaborate and really understand the material. That’s on the student side.

On the instructor side, especially with the COVID lockdowns, we’ve seen instructors go, you know what, I can teach a lot of my course material from home or remotely, or I can record portions of it ahead of time and let the students watch that asynchronously – that is, not while the class is going on. So, we’ve seen the benefits to the instructors, everything from bad weather – a snow day where the school says, hey, let’s not just close down, let’s make it an online learning day – to the benefits of bringing guest speakers in. So, you have a medical school that wants to bring in a Chinese doctor who is an expert in a certain field, or a business school that wants to hear from the leader of a company in Europe to understand their privacy issues versus what the regulations are in the US. So, there are incredible benefits to implementing this technology. It also takes effort from the school and from the instructors to integrate the technology into their pedagogy and their material, so they really get the benefits out of implementing it. It’s not just implementing technology for the sake of it. It’s implementing technology to get those extra benefits.

Christina Cardoza: One thing that I really love that you said is being able to allow students to rewatch the lessons, or to learn on their own time and at their own speed. For a very long time, education has meant teaching one way to all students, but not all students are the same, so this is really giving teachers and students the opportunity to learn and teach in a way that is most beneficial for them. One thing I want to touch on, though, is that there are still a lot of inequalities in the education landscape. I touched on them a little earlier – how not all areas around the globe have access to the educational resources that other areas have. So, Joe, I’m wondering if you can talk about some of those inequalities and how the use of EdTech is tackling them.

Joe Jackson: I mean, yes, I think I touched on it before, about folks that may be, for lack of a better term, out in the boonies. They don’t have access to a large city campus or a community school. But EdTech is ubiquitous – it’s on your phone, it’s on your laptop, it’s on your iPad – so if you have access to one of those, or if you have a friend who has one of those, you can learn. Even for continuing studies, folks that are already out in the world – say I still want to go back and learn basket weaving, you can do that too. You can – well, you may not be able to do basket weaving on video, so bad example, right? But the pedagogy itself has just changed. Everyone has changed the way they see the classroom.

Now, I remember back in the day, I couldn’t put cameras in the classroom. I was at a private school and no one would allow cameras in the school. I just wanted to watch the doors and make sure my equipment didn’t walk off. Now the cameras are in the classroom. They’re front and back, they’re facing everyone, we have microphones in the classroom, and those people can now project themselves out to the boonies if they want. So, the inequalities are shrinking, and even though Harvard has a historically low 3.17% acceptance rate, there are a lot of other schools out there. There are 5,500 schools. SNHU Online is a big one. Phoenix Online started it all, right? Anyone can get hold of the information. I think what you’re seeing mostly with education is that in your freshman and sophomore years you want to be on campus. After that – yes, juniors and seniors still want to be on campus to go to the football game – but most people just have busy lives now, and education is for all of us. It’s not just for the select few.

So, that’s what I would say, that just we’re going into that realm of education for all, and it says so on your slide. It really is for all.

Christina Cardoza: Well, you may not be able to do basket weaving online, but I’ve always wanted to learn to knit, so it’s sounding like I have no more excuses. I can learn to knit from the comfort of my own home with on-demand access, so that may be something I have to look into.

Joe Jackson: Elon Musk has Neuralink, you’re going to be like Neo and just download Judo one day, so when Crestron or Q-SYS comes out with a WAP that can be implanted, watch out, because I always tell people, if you get an RJ45 to your cat, I can control it too, but now it’s a WAP. You don’t want wires.

Christina Cardoza: Well, it’s amazing to hear where this technology can go, and everything it’s doing right now. One thing that I’ve noticed is, in the past, technology companies and organizations have really competed against one another, but in this new modern world, better together is an ongoing theme. To get these all implemented, to be able to do this and benefit the schools, the teachers, the students, it’s really a collaborative effort. And I know, John, you mentioned Intel. I should mention insight.tech is an Intel-owned publication, but we always love to hear how companies are not only working with just Intel, but other partners in the ecosystem to make this happen.

So, if you guys could expand on the partnerships you have ongoing right now with Intel or anyone else you want to mention. Joe, I’ll start with you on this one.

Joe Jackson: Sure. Yes, I mean, we’re Intel Inside. I tell people this all the time: I love the term AV/IT, but it’s so dated. It’s just IT, and our DSP is just a DSP, but we are so much more. It’s a processor that uses an Intel chipset, based on a militarized version of Linux for the timing, and we just stack everything else on top of that. So, Linux is very important to us. Layer 3 Intel products, COTS – you’re going to hear that a lot, commercial off-the-shelf appliances – the virtual world is opening up, and I really do believe that software’s eating the world. We’re a software company. Q-SYS is a software platform, and we need people to understand that our platform works with anything that’s Layer 3 and has an open API. We love our partners, we love all of our manufacturer partners, because, as you know, when you go into any classroom out there, if it’s a bit dated, you’re going to have seven manufacturers in there. Some will have three, others will have 20. So, if you’re not interoperable, and if you’re not moving toward that smart AV platform that we’re pushing, you’re in the stone ages. So, move forward, embrace software, and that’s where I’ll leave that part.

John Hulen: Well, for us, for the last three years I think Crestron’s been an Intel MRS Gold Partner – a Market Ready Solutions partner of Intel. So, we have used Intel processing and technology in our UC solutions – our unified communications and collaboration solutions. Intel awards this Gold Partner status to companies that implement the technology in other ways too. So, our 1 Beyond camera hardware uses Intel, as does our AV-over-IP solution, and there’s a specific product called the D80, which uses Intel’s open platform solution – it’s a device that slides right into a display, making the display an endpoint for audio, video, and control over IP. So, there’s a ton of different ways we partner with Intel, and it’s been incredible, especially over the last three to five years, where this technology is proliferating everywhere.

Christina Cardoza: Absolutely, and now that we’ve learned about some of the benefits and the technologies that go into all of this, I’d love to hear more about some of the examples that you guys already provided.

(On screen: slide on The Classroom and Beyond with an image of students working in a hallway)

I know I recently was reading an article on insight.tech, and one of the courses a university in China was trying to teach was an intro to the Olympics, and so it was an online virtual course, but they brought the classroom to the outside, or to the ice rink, and had Olympic professionals teaching the course from the ice floor. So, I’m very interested to see how else this is transforming the teaching plans and landscape, and where else, even beyond education, we can take some of these technologies and transform them even further. John, I’ll start with you on that one.

John Hulen: Sure. I guess there’s too many to name right now. Initially, I guess I would say, we have dozens of case studies on our website about – and you can actually sort just by education, and so it’s all about different institutions implementing Crestron technology all over campus.

A couple of quick examples: we’re about to – I think I can say this – we’re about to publish a video case study on the University of North Carolina Greensboro and their brand-new Esports facility; Esports has become a huge, important type of learning and playing in colleges, universities, and schools around the world now. The University of Michigan and Ford also collaborated on an engineering building, and what’s so compelling about that is, I think, that the top two floors are for Ford and run by Ford, and they’re doing AI research and development in self-driving technology, while the bottom two floors are robotics, where University of Michigan grad students are learning about developing robotics and programming for them, and so on. So, there are examples from medical schools to VR labs to mixed-media labs. We have a university in Connecticut that started with their Esports program, but that ended up driving funding – both federal and private – to fund a new cybersecurity range, as well as a mobile STEM lab. That is a case study you can find on our website about Central Connecticut State University.

There are really just so many compelling new learning environments – and we now call them learning spaces, not even classrooms. And actually, one other item you’d find there is an article I wrote about AV everywhere on campus – the idea that this technology, with Intel implemented into Crestron solutions, is no longer just in the classroom. You have huddle spaces and meeting rooms, and shoot, you have athlete study rooms now in the athletic facilities, and divisible rooms in the event centers, which are revenue-generating spaces. So, hopefully, that’s more than enough examples, but we have a lot of case studies on it.

Christina Cardoza: I love all the Esports stuff. I can’t wait till it becomes a little bit more mainstream. My children are young – my oldest is only in kindergarten right now – but someone at the bus stop asked me the other day what sports I was planning on putting him in, and Esports was my answer. So, I’m really waiting and hoping that comes to my school soon.

Joe, are there any other customer or real-world examples you can share with us, and how the opportunities exist beyond the classroom?

Joe Jackson: Yes, I mean, Esports is near and dear to my heart, and I’ve seen a lot of Esports arenas pop up over the last decade or so. I tried to put one in at SMU years ago and they said, “What? You’re doing what?” I’m like, well, sometimes more people watch video games online than watch the Super Bowl. You know, Fortnite had a concert in a park and 10 million people showed up, so Esports is really near and dear to my heart, especially as a company. We have a cinema division – everyone knows us from the AMC theaters and Atmos and those types of things – but we also have partnerships with Epic Games and with NC State, which houses Epic Games’s program there. We work with Netflix. We work with a ton of folks around that immersive audio sound, because most people know QSC as an audio company, but we’re so much more than that.

The other thing I want to touch on – and Esports is great, I love it, my son is into it, I’m into it, and I’m trying to build an Atmos theater upstairs just so I can take the headphones off and get into the game, so when you talk about gaming, I really tend to go the creative route – is building controls, like I said earlier. There are so many different things that our processors can be used for. We had a client where someone kept leaving a freezer door open, so they put a sensor on it and had Reflect tell them, hey, go close the freezer door. Companies like Johnson Controls are building things like the OpenBlue platform and digital twins. The military is using AI and, again, that immersive feeling of being in a video game to train their soldiers. So, there’s so much more that this platform can do, and again, I tell people AV/IT really is a cool term, but it really is just IT, and we are the AV geeks. So, I hope we continue to push the envelope into what’s possible, and universities and government are just my favorites because they do tend to like to push the envelope a bit.

Christina Cardoza: You’ll have to invite us over for a LAN party once your Atmos theater is all complete.

Joe Jackson: I have a gig internet connection, and as a kid, trust me, I remember the LAN parties and somebody had one-meg, playing Doom, so.

Christina Cardoza: So, we covered a lot in this webinar, and I want to look toward the future a little bit.

(On screen: slide on The Future of Education with image of young students in classroom using tablets)

If you guys can look into your crystal balls, what predictions do you have on how these new technologies are going to continue to expand and bring us new use cases in the future, and how are they preparing our students for the future? John, you want to take that one first?

John Hulen: Sure. You know, there are so many ways – we named a lot already – but I guess I’ll start with this more fundamental truth: 21st-century students are not expecting the same experience their parents had. I heard someone say something at an educational conference the other day that I really liked. They said we are digital immigrants trying to teach digital natives with analog tools, and I thought, that’s perfect. It really is a big hill for us to climb to understand the perspective of these students.

Joe brought up a great point earlier when he said students sometimes want to be in the classroom and sometimes they don’t. I have a 15-year-old and a 12-year-old, and I’ve noticed that their idea of social is being connected through the game, for instance, or being online together – they almost consider that the same as being in the same room. When classes started back after some of the lockdowns, they were more interested in the social aspect of being together than they really were in being in the classroom. What you’ll end up seeing is students watching the class material together, maybe in a shared student recreation area or a student learning space, rather than necessarily going to class – but that’s what they consider social. So, I feel like beyond the classroom is just that: it’s everywhere else that learning needs to take place.

I think of students who have dependents of their own at home still being able to get that high school or college degree. They’ve been craving to be maybe a first-generation graduate, or to expand their knowledge and capabilities. I mean, now we’re talking about micro-certifications and nano-certifications for jobs, where you’re learning on the job constantly, and that’s an expectation from employers. So, beyond the classroom is just about every aspect of our lives.

Christina Cardoza: I definitely see that at home with my own kids, and they’re learning. We have various tablets, where it’s teaching them how to read through all of these interactive games, teaching them how to trace, and then even with some of the older technologies we have at home, they’re very confused by it. They go to my laptop screen, and they start pressing it, expecting things to work, and then they go in the classroom and they sit there at the chalkboard, and they have no idea what they’re looking at or what this is. So, I think I absolutely agree. In the future they’re going to be using these technologies for work, for play, so why not bring it into the classroom?

Joe, is there anything you want to add about how these new technologies are providing new opportunities for the future, and for the students of the future?

Joe Jackson: Absolutely. I think the future of education really should be based around user experience, and what I mean by that is we as manufacturers have to look at who the end user is. Is it the technician that installs and maintains the system? Is it the professor that delivers the information on the media, or is it the person actually sitting there absorbing the information? I argue it’s all three – and more – and there are so many aspects you have to get right. Most of the larger schools now have a break-fix team in place; some of the smaller schools still have to rely on integration partners. But more and more, what I see in the future of education is a lot more educators taking control over how they deliver that education experience, and it really is all about the user experience – and again, it’s ubiquitous, learn from anywhere. You’re on vacation and you want to learn how to scuba dive? You could probably watch a video. I was taught by a couple of crazy Australians on the way out to the reef. I don’t suggest you learn that quickly and get thrown into the pool, but if you wanted to, you could, if you’re that type of learner.

We have to understand how people learn. Do you need to be in class? Then let’s come to class. But now we have the ability to split that class in half based on the needs of the user. So, if the user can learn through video and flip the classroom, and then come to school for instruction, let that person learn that way. We can’t pigeonhole folks into the same type of user experience. So, I would say the main thing that we do as a company is go and listen to the end users, and that means all of the stakeholders, not just the students. Because it is about the students, but it’s also about the delivery, the maintenance, and the break-fix of the technology, so we have to keep that in mind as manufacturers – to keep pushing the envelope and making things a little bit easier for people, because that’s what we have technology for: to make things easier, not harder.

Christina Cardoza: Absolutely, and I have to admit, I’m a little jealous about all of these technologies and tools that my kids are going to be able to grow up with, but as we discussed throughout this entire conversation, anybody can be a student now from anywhere and any topic. So, I’m excited to dig into some more learning myself over the next couple of years.

Unfortunately, we are running out of time today, and I’m sure we can continue to talk about this for hours, but before we go, are there any final key thoughts or takeaways you want to leave our attendees with today? Joe, we’ll start with you.

Joe Jackson: Just keep learning, folks. That’s why we’re here. I’m passionate about bringing technology to folks not only for learning, but for the listening and for the visuals. And yes, reach out to me or my team, if you guys have any questions about our technology, and yes, great talking with you guys.

Christina Cardoza: Absolutely, and John, any last remarks?

John Hulen: Yes, I feel like the technology allows students to be prepared in so many different ways, and my biggest hope, passion, and dream is that the AV departments – the audiovisual departments that used to be relegated to the basements, and some still are, treated as a retrofit even on a new project – are elevated to the point where they get a seat at the table in design and in the UX, the user experience, like Joe was mentioning. This technology has gone from being rolled into a room on a cart to being integrated into the network and the IT side, as well as the architectural side, and even building and lighting control. There’s so much there, so I really hope that if there are C-levels, deans, or provosts listening to the webinar today, they take a second to think about what considerations they should have when the school architect is planning a brand-new business or medical building, a brand-new classroom or learning space, or a lab – and get those designers who care about that user experience at the forefront and a seat at the table.

Christina Cardoza: Great. Well, with that, I just want to thank you both for joining the webinar today. It’s been a very insightful conversation, and we’ll have to be sure to follow back up with you as this landscape continues to evolve, and see all the great work that Crestron and QSC continue to do.

I also want to thank our audience for listening today. If you’d like to learn more about the future of education and EdTech, I invite you all to visit insight.tech where we have a wealth of podcasts and articles on the subject, as well as Crestron and QSC. Until next time, I’m Christina Cardoza with insight.tech.

(On screen: Thank you slide with URL insight.tech)

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

Smart Cities Technology Opens New Doors and Possibilities

It’s no longer a question of whether we should make our cities “smart”; it’s now a matter of when and how. Rapid advances in AI, deep learning, and edge computing have the potential to improve our quality of life beyond what was achievable just a decade ago.

And these new smart city technologies aren’t just designed to make citizen life more enjoyable; they’re making it safer, too—and for a fraction of the cost.

Automated Traffic Incident Detection

The Intel® Distribution of OpenVINO Toolkit helps traffic operators reduce the number of accidents on the road.

For instance, in 2004 after a spurt of traffic accidents, European Union member countries issued minimum road safety requirements for tunnels over 500 meters long. The directive included installation of safety cameras, with the goal of monitoring events like wrong-way drivers, smoke/fire, stopped vehicles, and pedestrians on the roadway.

But the sheer quantity of traffic footage streaming from all those endpoints (potentially hundreds of cameras per tunnel) made manual analysis impossible. In fact, even traditional computer vision camera monitoring couldn’t identify people, objects, and events with enough accuracy to avoid false alarms.

Instead, traffic operators needed AI solutions at the edge to automate traffic incident detection and keep traffic flowing safely.

TRAFFIX.AI, a software solution from mobility analytics company Sprinx, uses OpenVINO to run neural networks that do just that. Thanks to superior edge processing, the solution analyzes video feeds and detects everything from wrong-way drivers and slowdowns to smoke or fog—in real time.
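
To make the edge-inference idea concrete, here is a minimal Python sketch of running an OpenVINO detection model over a video feed. It is not Sprinx's TRAFFIX.AI code; the model file, camera feed, input size, confidence threshold, and SSD-style output layout are all assumptions for illustration.

```python
# A minimal sketch of edge video inference with OpenVINO (illustrative only).
# Assumes an SSD-style detection model in IR format whose output rows are
# [image_id, label, confidence, xmin, ymin, xmax, ymax].
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("tunnel_detector.xml")     # hypothetical model file
compiled = core.compile_model(model, "CPU")        # or "GPU" on an Intel iGPU
output_layer = compiled.output(0)

H, W = 300, 300                                    # assumed model input size
cap = cv2.VideoCapture("tunnel_camera_feed.mp4")   # hypothetical camera feed

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize to the model input and reorder HWC -> NCHW.
    blob = cv2.resize(frame, (W, H)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[output_layer].reshape(-1, 7)
    for _, label, conf, x0, y0, x1, y1 in detections:
        if conf > 0.6:                             # confidence threshold
            # Hand the event (stopped vehicle, pedestrian, smoke, ...) to the operator UI.
            print(f"event class {int(label)} detected with confidence {conf:.2f}")

cap.release()
```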

Enhancing the Spectator Experience

And it’s not just on the roads. Venue organizers use IoT smart sensors and cloud-based technologies to ensure the safety of concertgoers and sporting-event attendees.

With AI platforms like OpenVINO, venue operators can gain accurate, real-time people counts to manage crowds effectively. When facility managers can see where people are gathering at any given time, they can respond quickly to emergency situations. And by analyzing historical data, venues can reconfigure seating, concessions, and entry points to prevent incidents from happening in the first place.
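
Once a detection model is producing per-zone person counts, the monitoring logic itself can be quite small. The sketch below shows one way the capacity checks and rolling history might be wired up; the zone names, capacity limits, and window length are invented for illustration and are not any vendor's API.

```python
# Illustrative crowd-monitoring logic: compare live person counts per zone
# against a capacity limit and keep a short history for later analysis.
from collections import defaultdict, deque

ZONE_CAPACITY = {"gate_a": 150, "concourse": 400, "stand_north": 800}  # assumed limits
history = defaultdict(lambda: deque(maxlen=600))  # ~10 minutes at 1 sample/second

def update_counts(zone_counts):
    """zone_counts: dict of zone name -> number of people detected this second."""
    alerts = []
    for zone, count in zone_counts.items():
        history[zone].append(count)
        if count > ZONE_CAPACITY.get(zone, float("inf")):
            alerts.append(f"{zone}: {count} people exceeds capacity {ZONE_CAPACITY[zone]}")
    return alerts

# Example: one second of counts coming from the video analytics pipeline.
print(update_counts({"gate_a": 172, "concourse": 310}))
```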

One solution from IOTech Systems, an edge software provider for the IIoT, leverages the latest Intel technologies around GPU and CPU architectures and the OpenVINO Toolkit to get real-time, actionable crowd data to the people who need it. Organizers can make fast decisions by accessing a simple dashboard from their phone or tablet. And attendees can view open seats on large monitors.

The result? A better experience for spectators, and reduced costs for venues.

AI-Based Video Analytics for Smart City Technology

The use cases for AI-powered video analytics don’t end there. By turning existing CCTV cameras into cost-effective smart sensors, these edge-to-cloud systems can also improve building security or help truckers find a safe place to rest.

AI-based vision systems like the one from Uncanny Vision, an AI video analytics solutions provider, depend on OpenVINO and small embedded processors for lightning-fast processing speeds. And it’s that speed that allows the system to watch and analyze video and inform operators in real time when action is needed. This way, human resources are spared the labor-intensive work of reviewing video streams themselves—and the inevitable errors that follow.

But the best part? False alarms are a rare event. Unlike first-generation sensors that couldn’t tell the difference between a person and an animal, Uncanny’s systems are guaranteed to be 95% accurate. That’s orders of magnitude better than even traditional computer vision systems.


What’s more, customers and systems integrators don’t need any special knowledge of AI to install the Uncanny system. All they need to do is connect a CCTV camera to one of Uncanny’s Intel processor-based smart boxes and the system is up and running.

AI in Smart Cities Protects Privacy

Importantly, turning security cameras into IoT smart sensors in this way doesn’t breach personal privacy. On the contrary, AI software processes video data at the edge—without recording, storing, streaming, or sharing images over the cloud.

Because the images themselves aren’t needed, all that matters is the software’s ability to identify dangerous security events and escalate them as soon as they happen.

SensingFeeling’s SensorMAX platform does this by using pre-trained models and behavioral analytics to assign a risk index to every camera. Then only potentially risky scenes are displayed to operators—an unaccompanied child in a train station, say, or a crowd gathering in an unusual place. And rather than wasting time scrolling through uneventful camera feeds, the operators are available to respond immediately when they need to.
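
The per-camera risk index described above could, in a very simplified form, be a weighted score over the event types a model reports. The following sketch is only a hypothetical illustration of that filtering idea, not SensingFeeling's actual scoring; the event names, weights, and threshold are assumptions.

```python
# Hypothetical risk-index filter: weight detected events per camera and show
# operators only the cameras whose score crosses a threshold.
EVENT_WEIGHTS = {            # illustrative weights, not SensorMAX's real model
    "unaccompanied_child": 0.9,
    "crowd_forming": 0.6,
    "person_on_track": 1.0,
    "loitering": 0.4,
}

def risk_index(events):
    """events: list of event-type strings detected on one camera in the last interval."""
    if not events:
        return 0.0
    return min(1.0, sum(EVENT_WEIGHTS.get(e, 0.1) for e in events))

def cameras_to_display(camera_events, threshold=0.5):
    """Return only the cameras whose current risk warrants operator attention."""
    scores = {cam: risk_index(evts) for cam, evts in camera_events.items()}
    return {cam: score for cam, score in scores.items() if score >= threshold}

print(cameras_to_display({
    "platform_2": ["unaccompanied_child"],
    "entrance_hall": [],
    "car_park": ["loitering"],
}))
```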

The SensorMAX platform’s architecture is based on the OpenVINO Toolkit and has a variety of uses beyond smart city applications, including reducing oil and gas accidents.

But improving the spectator experience and keeping people safe are just some of the ways AI is used in smart cities. The potential for innovation is huge, and cities and businesses already capitalize on it.

Smart cities equipped with new IoT smart sensors and AI technology open up endless possibilities for creating safe and sustainable cities. See what else AI can do, and start creating your own applications by checking out the Intel Edge AI Certification Program or taking the 30-Day Dev Challenge.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Fixed-to-Mobile, AI-Powered Video Analytics for Buildings

Smart buildings deliver continuous streams of data from sensors, cameras, and other devices to optimize operations and respond to user needs. With AI-powered video analytics, building operators can easily process this data. But these building solutions are usually within permanent structures supporting fixed surveillance systems. Many locations, including construction sites, temporary work sites, parks, event venues, and other in-field settings, need mobile surveillance units because they are typically not equipped with video cameras or analytics.

One company facing this situation turned to Megh Computing, a video analytics solution provider, about a Mobile Surveillance Unit (MSU). It needed an advanced, cost-optimized, AI-based video analytics solution to monitor 5,000 mobile sites, all on solar power.

Existing MSUs typically use expensive “smart cameras” with analytics that detect intrusion. Since the AI models used in these devices are not very robust, they generate many false positives, which increase both the operational costs of monitoring the alarms and the bandwidth required for the backhaul.

Megh Computing introduced a new MSU, with an Intel® Core i3 NUC processing streams from four standard cameras on the unit. The system uses advanced deep learning models based on an Intel® Distribution of OpenVINO Toolkit-supported inference engine for detection of people and objects. Prabhat K. Gupta, CEO of Megh Computing, says, “Since these models deliver high operational reliability, Megh is able to virtually eliminate incidents of false positives, reducing operational costs for monitoring.”

The MSU’s video analytics pipeline is highly optimized and controlled by a motion filter that reduces overall power consumption for the system to less than the required 15W. This system also generates savings from lower communication and camera costs, resulting in compelling ROI.
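
The motion-filter idea can be illustrated with a short sketch: cheap frame differencing decides whether the expensive neural network runs at all. This is a generic example of the technique under assumed thresholds, not Megh's actual pipeline.

```python
# Sketch of a motion filter gating inference to save power on a solar-powered unit.
# Frame differencing is cheap; the detection model only runs when enough pixels change.
import cv2

def run_person_detection(frame):
    # Placeholder for the heavier OpenVINO detection step.
    print("motion detected: running the detection model on this frame")

cap = cv2.VideoCapture("msu_camera.mp4")   # hypothetical recorded feed
prev_gray = None
MOTION_PIXELS = 1500                       # assumed sensitivity threshold

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        # Only wake the power-hungry neural network when enough pixels changed.
        if cv2.countNonZero(mask) > MOTION_PIXELS:
            run_person_detection(frame)
    prev_gray = gray

cap.release()
```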

“You want to be able to monitor any situation or crowd to see what’s happening. With an MSU, you can quickly roll the thing up, put it up, and start monitoring in conjunction with security staff,” Gupta says.


Customized Smart Building Solutions

Megh Computing leveraged its expertise with its VAS (Video Analytics Solution) for Smart Buildings to create the mobile option for its customers. VAS is a highly customizable solution that can target different hardware architectures and can be deployed from the edge to the cloud. VAS processes data from both cameras and sensors. It has been deployed in a variety of environments, including smart cities, smart warehouses, and retail locations.

VAS applies AI deep learning to accurately detect people and objects, and also anomalies in human behavior, to reduce physical risks. This was an important aspect to bring to its MSU solution.

“Obviously, there’s an explosion of data at the edge, and most of the data is video,” says Gupta. “And, except for data used to respond to security incidents, most of it is never analyzed.”

Companies that invest in costly video systems want to be able to analyze streaming data and gain business insights from it. For instance, how many people use a company’s spaces at any given time, and for what purpose? The answers can lead to operational improvements, such as better traffic flow and smarter security controls. Think of a retail kiosk. By monitoring the space, Megh’s solution can determine how long people wait in line, so the retailer can add another kiosk to avoid lost sales.
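
As a simplified illustration of the kiosk example, wait times can be derived from the moments a tracker reports a shopper entering and leaving the queue zone. The function names, sample timestamps, and the four-minute threshold below are assumptions for illustration, not Megh's or any retailer's actual implementation.

```python
# Illustrative queue-wait logic: given the times people entered and left a queue
# zone (as reported by a person tracker), estimate the average wait and flag when
# a second kiosk should be opened.
from statistics import mean

def average_wait_minutes(visits):
    """visits: list of (entered_at, left_at) timestamps in seconds."""
    waits = [(left - entered) / 60 for entered, left in visits if left > entered]
    return mean(waits) if waits else 0.0

def needs_second_kiosk(visits, max_wait_minutes=4.0):
    return average_wait_minutes(visits) > max_wait_minutes

visits = [(0, 180), (60, 350), (120, 500)]   # three shoppers' queue times (seconds)
print(average_wait_minutes(visits), needs_second_kiosk(visits))
```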

“We can also monitor space usage,” says Gupta. “We can see how people are spending time in work areas throughout the day. That helps with facility planning, for example.”

Reduce False Alarms with AI-Based Security Solutions

Besides offering customization and mobility, Megh Computing addresses a common problem with video analytics: the high rate of false alarms. A security firm turned to Megh when it was experiencing this problem.

After piloting Megh’s solutions side by side with their existing platform, the company made the switch. “The operator said we were basically able to eliminate all of their false positives,” Gupta explains.

Megh Computing minimizes false alarms through the use of its advanced AI deep learning models and its ability to continuously update these models for improved accuracy, using its continuous training framework. “This is a unique capability that assures that when the system issues an alert, the person who receives it can feel confident it’s real and take appropriate action to prevent a security incident,” says Duncan Moss, Principal Engineer at Megh Computing.

For instance, a car dealership uses the platform for after-hours monitoring. If someone acts suspiciously on the lot, intelligent cameras can guess the person’s intent by tracking their movements and alerting a security guard. An individual who moves from car to car, stopping for a few seconds by each vehicle, may plan to break in and steal a car. “That’s an example of how we are using a technology for behavioral analysis and prevention,” Moss says.
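
A rule like the one described for the dealership could be expressed, in a very reduced form, as a dwell-time check across vehicle bays. The sketch below is only an illustration of that kind of behavioral rule under assumed thresholds and an assumed track format; Megh's actual models are more sophisticated than this.

```python
# Simplified behavioral rule: a tracked person who dwells for a few seconds at
# several different vehicle bays after hours is flagged for the security guard.
def suspicious_track(track, min_dwell_s=3.0, min_bays=3):
    """track: time-ordered (timestamp_s, bay_id or None) samples for one tracked person."""
    if not track:
        return False
    dwell, current, start = {}, None, None
    for t, bay in track + [(track[-1][0], None)]:   # sentinel closes the last dwell
        if bay != current:
            if current is not None:
                dwell[current] = dwell.get(current, 0.0) + (t - start)
            current, start = bay, t
    bays_visited = sum(1 for secs in dwell.values() if secs >= min_dwell_s)
    return bays_visited >= min_bays

# One person pausing at three different vehicle bays after hours.
track = [(0, "bay1"), (4, "bay1"), (5, None), (8, "bay2"), (12, "bay2"),
         (14, None), (18, "bay3"), (22, "bay3")]
print(suspicious_track(track))   # True
```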

End-to-End Advanced Security Solutions

Megh Computing not only looks at situational monitoring of spaces but aims to address cybersecurity in its solutions. “People who are trying to break in can break in physically or electronically. If you want to provide complete security for an enterprise, you’ve got to look at both aspects,” Megh CEO Gupta says.

For example, VAS addresses the cyber aspect by monitoring network packets for congestion and signs of distributed denial of service (DDoS) activity. Thus, Megh provides a single platform for both physical security and cyber security threats.
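
As a rough illustration of the cyber side, flagging a packet-rate spike against a recent baseline might look like the following sketch. The window size, warm-up period, and spike factor are assumptions, and this is not how VAS detects DDoS activity internally.

```python
# Minimal illustration of flagging a suspicious packet-rate spike (a crude proxy
# for congestion or DDoS-like behavior).
from collections import deque
from statistics import mean, pstdev

window = deque(maxlen=60)   # the last 60 one-second packet counts

def packet_rate_alert(packets_this_second, spike_factor=4.0, warmup=5):
    """Flag a packet count that is far above the recent baseline."""
    alert = False
    if len(window) >= warmup:
        baseline = mean(window)
        spread = max(pstdev(window), 1.0)
        alert = packets_this_second > baseline + spike_factor * spread
    window.append(packets_this_second)
    return alert

for rate in [900, 950, 920, 980, 940, 20000]:   # sudden spike in the final second
    if packet_rate_alert(rate):
        print(f"possible congestion or DDoS-like spike: {rate} packets/s")
```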

To deliver all these capabilities, Megh Computing partners with Intel for its advanced AI hardware and software platforms. Megh’s solutions are certified as Intel Market Ready.

Going forward, Megh plans to continue leveraging the Intel relationship, not just for technology but also to widen its market reach. Having gained traction in smart buildings, Gupta says the company now targets retail and smart warehouses, too.

As it does, Megh will continue to look for more opportunities to deploy MSUs. After all, security and safety shouldn’t be confined to buildings; they’re needed wherever there is risk.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI-Driven Retail Video Analytics Spurs Sales

It’s always a good sign for businesses when their local communities and cities continue to grow in population. Larger populations usually mean an expected increase in sales. But for Town Talk, a local salvage grocery retailer in Arlington, Texas, the desired growth wasn’t happening.

Smaller grocery retailers like Town Talk face competition from multiple avenues. Slim margins and labor challenges compound problems. To figure out why sales weren’t improving, Town Talk needed information—about demographics, visitor foot traffic, and peak hours—so it could optimize product mix, placement, and prices.

Starved for Deeper Insights

Brick-and-mortar retailers like Town Talk are at a disadvantage compared to their online counterparts, according to Pankaj Kumar, Founder and CEO of PM AM, a global information technology company that provides custom solutions to its clients. E-retailers can slice and dice customer demographics and preferences in real time, deliver personalized recommendations, and increase sales. “But the moment it comes to brick and mortar, we hit a black box,” Kumar says. “There’s near-zero visibility into customer demographics or store layout and its rewards.”

While consumer research and marketing are known entities in all retail, many of today’s brick-and-mortar stores function on limited visibility as data and analytics aren’t always available, Kumar explains.

A lack of useful operational insights is especially worrisome in grocery retail, where margins are razor thin and a mere 1-3% margin increase can deliver a tremendous boost. Stores like Town Talk need sales volume to make up for slim returns.

To address these challenges, Town Talk turned to PM AM to deploy i3Di, its artificial intelligence-based video analytics platform, at the retailer’s 36,000 sq. ft. Arlington location. Twenty cameras positioned strategically around the store—signage notified shoppers about video monitoring—streamed live video feeds, which i3Di processed to deliver near-real-time insights about shopper behavior, customer demographics, and traffic patterns. Town Talk acted on these insights and made more informed decisions about staffing, product mix, and merchandising.


Video Analytics Deliver a Smorgasbord of Advantages

Video analytics can level the playing field for brick-and-mortar locations by giving retailers like Town Talk valuable intelligence about the who, what, where, and when of their customers, Kumar says.

For example, Town Talk’s market research had forecasted that women and baby boomers comprised a majority of their shoppers. Video analytics proved otherwise: The customers at the Arlington location turned out to be evenly divided between men and women, and a majority of shoppers were millennials.

Every retailer worries about leakage—missed opportunities for an upsell. By using retail video analytics, management can track customer journey maps in the store, strategically positioning products at the right location at the right time for targeted sales and increased revenue. Town Talk used PM AM’s i3Di to identify the three most-visited areas of the store and optimized merchandising accordingly.

At a time when retail is struggling with labor shortages, knowing peak traffic hours through video analytics helps store managers plan for staffing, allotting more workers during high-volume periods.

Town Talk also used video analytics to determine when inventory was running low on shelves so the store could accelerate the restocking process. These combined factors led to more products being sold faster and allowed Town Talk to also forecast product demand.

The Ingredients in the Video Analytics Recipe

PM AM’s i3Di solution comprises four components:

  • Cameras: These can be closed circuit television (CCTV) or smart cameras (Intel® RealSense D455 Depth cameras with integrated compute modules)
  • Edge computing using Intel GPUs and VPUs
  • PM AM’s proprietary AI algorithm
  • PM AM’s proprietary business intelligence (BI) platform that delivers insights

PM AM works with system integrators and in Intel’s ecosystem to deliver functionality that its customers want, Kumar says. The company uses Intel® DevCloud to test a range of Intel hardware and to optimize AI models for specific edge computing requirements. PM AM uses the Intel® Distribution of OpenVINO Toolkit and its inference models to identify shoppers’ ages, gender, moods, and movement patterns without revealing personal identifiable information (PII) that might cause security concerns.
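
One way to preserve privacy in this kind of pipeline is to keep only aggregate tallies and discard every frame and face crop immediately after inference. The sketch below illustrates that pattern generically; the age buckets and helper function are invented for illustration and are not part of i3Di. A pre-trained age/gender network, such as those distributed with OpenVINO's Open Model Zoo, could supply the per-face estimates.

```python
# Sketch: turn per-face age/gender inferences into aggregate counts only, so no
# frames, crops, or identities are kept.
from collections import Counter

demographics = Counter()

def record_inference(age_years, male_probability):
    """Store only anonymous, bucketed counts derived from one detected face."""
    bucket = ("gen_z" if age_years < 27 else
              "millennial" if age_years < 43 else
              "gen_x" if age_years < 59 else "boomer_plus")
    gender = "male" if male_probability >= 0.5 else "female"
    demographics[(bucket, gender)] += 1
    # The face crop itself is discarded immediately; only the tally survives.

record_inference(31.0, 0.8)
record_inference(24.5, 0.3)
print(demographics.most_common())
```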

Retailers can start with the camera infrastructure they have in place to access basic insights such as foot traffic and demographic makeup. To access more specialty information such as outages, stores will need to install advanced smart cameras.

Increasing Appetite for AI-Driven Analytics

As AI video analytics grows more granular, it can find additional applications: test-driving new-product launches in stores, analyzing basket sizes at checkout kiosks, and leveraging beacon technology to deliver customized offers to customers in-store. Kumar says mall management can use the same video analytics platform to optimize placement of retailers within their site so net sales increase. Being able to tell which aspects of products resonate (or not) is valuable to retailers. And increasing numbers of them are willing to pay for such brick-and-mortar intelligence, Kumar says.

Kumar is excited about the growth prospects of AI: Recent predictions show a compound annual growth rate of 35% through 2025. “What I am also hearing is that people don’t want to work with 17 different vendors to achieve 17 different outcomes,” Kumar says. “That’s where we come in; we deliver a holistic solution that delivers easy-to-understand business intelligence for C-suite and operational staff.”

The proof is in the pudding. Retail video analytics helped Town Talk reach its 2021 revenue goals two months early. The retailer plans on using this technology in its other locations as well.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Surgery Goes to the Next Dimension: Bringing AR to the OR

It seems like something out of science fiction. Doctors are using augmented reality—a technology more commonly associated with gaming—to guide them through delicate surgical procedures with a minimum of invasion and a maximum of information. This alliance, among technologies and among people, is part of an incredible transformation of healthcare in recent years that includes telemedicine and remote monitoring.

AR-assisted surgery is not the only potential benefit of the cooperation between medicine and augmented reality. Shae Fetters, Vice President of Sales at Novarad, a provider of advanced healthcare solutions; and Dr. Michael Karsy, Assistant Professor of Neurosurgery, Neuro-oncology, and Skull Base at the University of Utah, can speak with experience and expertise from both sides of the coin. They’ll discuss the advantages to patients and practitioners, the partnerships that make it all happen, and the future possibilities for this new vision of healthcare.

How is augmented reality transforming the healthcare space?

Shae Fetters: I think we’re really at the forefront of this evolution, and we don’t understand in full what the possibilities are at this point. What we’re not doing is looking to replace what doctors do. What we are doing is taking medical imaging—simple X-rays, CT scans, and MRIs—and merging them with AR to give doctors extra information that can make procedures go more smoothly. That could be a highly complex surgery, or something routine that might be safer with imaging on hand.

It can also be used in patient education. We find that a lot of patients are worried about what’s going to happen during a procedure, though doctors do the best they can to explain what’s in their heads and what’s in our complex bodies. Augmented reality helps patients see exactly what it is surgeons are talking about. That helps them feel more comfortable and more a part of their care. And if patients understand those things and feel comfortable with them, that helps get better results all around.

We also see it used in the training of new physicians, both in surgery and clinics. It substantially increases the effectiveness and the speed of learning. And that’s really just touching the surface of what the current possibilities are for AR in the healthcare space.


Can you talk about the use of augmented reality in the operating room?

Dr. Michael Karsy: Image guidance has been around for quite a long time in neurosurgery, and it’s now essentially standard of care—where we use a guidance system to allow patient DICOM images, MRIs, or CT scans to be used intraoperatively to help with the surgery. And as I was looking around to see where augmented reality was at in the healthcare space, Novarad was way ahead of everybody else. It had a technology that was FDA cleared for preoperative planning, as well as for intraoperative use for spine instrumentation.

I thought this technology could be the next step in imaging for patient care, and I thought it would be interesting to work on it with this group. I could already see the benefits of navigation for making spinal surgery minimally invasive, or in cranial applications to really home in on complex anatomy.

Shae also alluded to the fact that most surgeons develop in their mind’s eye a kind of X-ray vision of patient anatomy and what they expect to see in surgery. But that takes years to develop. I thought that with augmented reality you could essentially take these holograms and DICOM images, and apply them directly to a patient right in front of you. The software that Novarad has allowed me to install here at the university enables me to upload images to their cloud-based system, generate hologram images, and then download them directly to a headset just with Wi-Fi.

He also mentioned that we are really just at the beginning of how augmented reality is going to be used in healthcare. We don’t yet know the best applications for it; we don’t know exactly where it will add specific value for specific cases because this technology is so new. And it’s really exciting to work on it and see where it’s going to go.

How is Novarad making AR-guided surgery possible?

Shae Fetters: The company as a whole has been dealing with medical images and providing picture archival systems for hospitals for 30-plus years. We’ve now been able to change what the monitor is for doctors by condensing the images down to headsets.

Our solution, VisAR, has been the result of seven to eight years of trial and error, but we’ve got it to the point where you can look at a simple QR code and it downloads your patient’s study onto a headset in about 20 seconds. At this point, the whole process to download, to calibrate the lens, to let it learn the space it’s working in, and then to match the images to the patient’s anatomy accurately takes about two and a half minutes.

And working with Dr. Karsy we’re continuing to figure out better workflows so that it isn’t scary and complex for surgeons and different hospital staff; so it’s not a big deal when the doctor says before a case or partway through a case, “Grab my headset. We’re going to guide this with VisAR.” It’s not quite to the point where it’s absolutely amazing, but it’s still pretty mind-blowing to do this without needing multiple pieces of huge equipment in the OR.

What is the value for Novarad of working on this technology with partners like Intel?

Shae Fetters: Novarad is a fairly small company by industry standards, so it’s only been possible to produce this type of technology through our different partnerships. Microsoft has an amazing headset that they continue to develop: the HoloLens 1 and now the HoloLens 2. For us to spend the resources and the time to develop the headset on top of all the software—well, I don’t think we could ever keep up with Microsoft. And inside that headset are chips by Intel. We don’t produce microprocessors; that’s not what we do. So it’s great to collaborate with companies that also have some great innovative minds and some amazing talent.

We also couldn’t have done this without brilliant minds like Dr. Karsy’s. And there are a lot of doctors across the country who are excited about this and want to share their different ideas. It’s very much an open and collaborative network that we’re working with. There isn’t competition between us, and there isn’t competition between the surgeons either; they want what’s best for patient care and for moving healthcare forward. It’s not a selfish venture.

Dr. Michael Karsy: From the clinical perspective, working with industry has been great; there’s no way one single academic center is going to develop 30 years of experience with PACS and imaging and DICOMs like Novarad has—and then develop augmented reality on top of it.

We really focus on what we do well, which is the surgery part and the clinical part and the management of that; and then we partner with a group that has expertise in radiology and augmented reality. That’s the way to innovate, and I think we’ve had really good success doing it that way.

What hopes do you have for AR-guided surgery going forward?

Dr. Michael Karsy: We’ve mentioned the Intel chips—and as they become stronger, have better processing speed, and are able to handle more operations, they generate new technology that can do more, too. That’s kind of the same thing we’re seeing in healthcare: As soon as we have new technology—a better HoloLens, a better augmented reality system, software that works faster—it changes everything you can do. Now you have the ability to go back to procedures that have been done the same old way for decades, and you can do them differently: you can do them minimally invasively with smaller incisions, and you can be more accurate.

Imaging technology has been widely used in neurosurgery, and we can see how navigation has changed the way we do things. Then other kinds of surgeons come into our wards, see our navigation systems, and they’re wowed by it; such a thing doesn’t exist in many other surgical fields. I think AR will change all that. I think you’re going to start to see image guidance and minimally invasive procedures enter into other surgical subspecialties and spaces in healthcare where it didn’t exist before, because we just didn’t have a way or need to do it.

If there is a day when I don’t have to do another brain surgery, that would be a great day, because it would mean that we’ve developed enough medical therapy and radiation and other things that surgery isn’t necessary. It will take a long time to get there. But that’s the ultimate goal of this—to reduce the harm to patients as much as possible. Technology drives the way forward for that, one hundred percent.

How do you see Novarad as part of that future?

Shae Fetters: If you think about X-ray and CT and MRI, and how we can put people in a little machine for a couple of minutes and then tell what’s going on inside them without needing to open them up—that’s a pretty revolutionary thing. We see AR as a similar move.

We really do see this as something that, in the next 10 years or so, is just going to be a standard of care where every surgeon, every doctor has their own headset and it’s a core part of how they treat patients and how they interact with them.

Any final thoughts?

Dr. Michael Karsy: AR technology is one of those things that the more you learn about it, the more you’re going to start to envision where else it could be applied. In healthcare there’s still so much that’s unknown, and to have a new technology that allows you to come up with new discoveries and implement them—that’s great. I think that’s the way things should be going forward, and I’m really excited to see this continue to develop in my career.

Shae Fetters: It can be difficult to find the right words to explain this technology to people. But I put a headset on someone’s head, and within 30 seconds to a minute I see their eyes open wide and they understand; they start to see the potential out there. So I encourage any physicians, any big healthcare systems that are looking to improve their patient care to reach out. It’s worth a conversation on how we can collaborate to provide better care for all patients in our communities.

Related Content

To learn more about augmented reality in healthcare, listen to the podcast AR Guided Surgery Transforms the OR: With Novarad and read AR and AI Change the Game in Medical Imaging. For the latest innovations from Novarad, follow them on Twitter at @NovaradCorp and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

OPC UA: Communicating in the Industrial Ecosystem

There are more devices connected to the internet—and to one another—than ever before. With IoT and AI proliferating in today’s smart factories, the number of sensors and the amount of data are both increasing all the time. In fact, Industry 4.0 might sound like a world of challenges and complexities. But it doesn’t have to be that way. Communication standards and cooperation among the players involved can go a long way toward translating those challenges into benefits for the industrial process.

A panel of experts discusses the efforts that the OPC Foundation is making to that end, and the potential for the OPC UA and other standards to transform communications in the manufacturing and industrial space. The panel consists of: Bernhard Eschermann, CTO of Process Automation at ABB, an industrial-digitalization leader; Stefan Schönegger, Vice President of Product Management at B&R Automation, a member of the ABB group; David McCall, Senior Director of Industrial and Consumer IoT Standards at Intel; and Peter Lutz, Director of Field Level Communications at the OPC Foundation.

What are the biggest challenges on the factory floor today?

Stefan Schönegger: It’s our customers who are facing the challenges, and security is really the first pain they are facing today. If devices in a manufacturing plant are even talking to each other at all, security typically hasn’t been considered.

But there is still a tremendous amount of equipment that’s not yet communicating, probably because it doesn’t yet have the capability to exchange data. It’s a very heterogeneous world, with equipment coming from different vendors and different vendors using different, primarily proprietary, standards. That hinders the introduction of advanced analytics, of data being transferred and converted into value. OPC UA and TSN are the answers to those problems.

Why has it been so difficult to get these devices to communicate with each other?

Bernhard Eschermann: In the past, multiple communication standards were developed by different companies, and none of them wanted to give their particular standard up. Another challenge is that we don’t have consistent information models for data when it moves from the instruments where something is measured, through the automation and to the edge, to central service and the cloud.

The OPC UA standard provides a way to have consistent information models between all of these different layers, and there’s no translation effort or loss of information in between them. All of that is very important to a dramatic change in the world of communication.

I always compare this situation with building railway lines between two cities. Instead of building one big railway line, which would be the most efficient way to get fast trains from A to B, we have multiple lines that are all slower. And with this new standard we should finally get this one, very fast railway line. 

What are the benefits of these digital technologies for the manufacturing industry?

David McCall: Right now, industrial communication tends to be wired, and the networking layer is tightly tied into an industrial automation protocol that runs over it. If you’ve got one automation protocol that means one network, and you can have trouble getting the data out from that confined ecosystem.

You’ve got two drivers: IoT in general; and then, more specifically, AI and machine vision, which I would put together. IoT generally means more devices, which means more connectivity. Not all of that is probably going to be mission critical, and not all of it’s going to be wired in the long run either. We’re expecting to see wireless technologies—whether that’s Wi-Fi or 5G—coming into those non–mission-critical areas initially where safety requirements or monitoring is being deployed quickly and cheaply.

Then you’ve got the AI/machine vision. Those workloads ingest huge quantities of data. Most of them are running in the data center and they may not be timing-critical, but there are huge opportunities for applying them to mission-critical, timing-critical loads. It’s blurring the lines a bit between what is traditionally thought of as IT technology or OT technology.

Long term we’re looking at having a single deterministic network that spans both wired and wireless technologies, and the workloads can just take the appropriate path. You’ll deploy the right technologies in the right places, and it will all be a homogeneous network that any protocol can take advantage of.

But it’s a huge amount of work, which is one of the reasons for trying to make sure that this is all going to work over one network. We put in that huge amount of effort once, and then everybody can use it. 

How is the OPC Foundation addressing these challenges with its open standards?

Peter Lutz: Stefan and Bernhard mentioned OPC UA, so I’ll give a quick summary of what is so special about it. It’s what we call an industrial framework to support interoperability, and it includes built-in security mechanisms, and mechanisms to do information modeling. This then drives the common semantics—semantics that are absolutely vendor neutral and vendor independent.

We are also working on extensions to OPC UA to enhance cloud connectivity, but also to bring it to the field for deterministic communication, motion control, instrumentation, and functional safety. We can establish it as a fully scalable industrial-communication solution from the field to the cloud, and also horizontally.

OPC UA FX is the term we use for the extensions to the framework that cover the various use cases at the field level. This includes, for example, controller-to-controller communications, but also communications from controller to field device, and field device to field device. And the solution is based on the existing OPC UA framework, so any company supporting OPC UA today can easily migrate or upgrade its products or applications to also support the extensions for field level.
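To make the information-modeling idea concrete, here is a minimal sketch of an OPC UA server written with the community python-opcua package. The endpoint URL, the namespace URI, and the "WheelPress" object with its "Temperature" variable are illustrative placeholders, not part of OPC UA FX or any companion specification; the point is simply that clients browse a typed, vendor-neutral address space rather than a proprietary register map.

import time
from opcua import Server  # community python-opcua package (pip install opcua)

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/example/server/")  # placeholder endpoint

# Register a vendor-specific namespace; the URI is an assumption for this sketch.
idx = server.register_namespace("http://example.com/machines")

# Build a small, typed information model under the standard Objects node.
objects = server.get_objects_node()
machine = objects.add_object(idx, "WheelPress")
temperature = machine.add_variable(idx, "Temperature", 21.5)
temperature.set_writable()  # allow clients to write the value back

server.start()
try:
    while True:
        # Simulate a sensor update; any OPC UA client sees the same typed model.
        temperature.set_value(temperature.get_value() + 0.1)
        time.sleep(1)
finally:
    server.stop()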

“Long term we’re looking at having a single deterministic network that spans both wired and wireless #technologies, and the workloads can just take the appropriate path.” – David McCall, @intel via @insightdottech

What benefits might your customers see if they start utilizing these standards?

Bernhard Eschermann: Obviously you’ve got the benefit of mixing various types of traffic—nondeterministic event-based traffic and deterministic real-time traffic—on a single communication medium. This might mean, for example, connecting the control room to cameras in the field, to sensors in the field, to actuators in the field. It can all be done over the same communication medium without requiring separate wiring.

Having this OPC UA layer throughout the system—on different physical layers and on different transport protocols—means that the interpretation of data stays the same throughout the system. It also stays the same no matter which supplier any particular piece of equipment comes from. So it helps the customers because they can connect anything to anything. It helps us because it reduces the effort on our side in developing different mappings and adapters.
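As a companion to the server sketch above, a client from any vendor could read that same model by browsing the address space. Again, the endpoint and browse path are placeholders tied to that example, and the "2:" prefix assumes the namespace index registered there.

from opcua import Client

client = Client("opc.tcp://localhost:4840/example/server/")  # placeholder endpoint
client.connect()
try:
    # Browse by path; the names mirror the example server's information model.
    root = client.get_root_node()
    temperature = root.get_child(["0:Objects", "2:WheelPress", "2:Temperature"])
    print("Temperature:", temperature.get_value())
finally:
    client.disconnect()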

Why join the OPC Foundation to work on these standards?

Stefan Schönegger: In my opinion there is only one way forward, and that is adopting open standards, enabling open ecosystems, and going towards security. Not adopting open standards is a dead-end road.

And if you look a step further into the future, going towards more autonomous systems, you can’t manually interpret data or push data over gateways and still hope the semantics haven’t changed. Autonomous systems will require autonomously working analytical paradigms. And you will end up with capabilities that are only provided by OPC UA and FX.

What is Intel’s involvement in the OPC Foundation and these standards?

David McCall: We wanted to get involved to make sure that it was a really strong and viable standard—not just at the technology level but at the business level. We can put together demonstrators and be right there at the cutting edge, because we do see OPC UA FX as leading the way, showing how you can put together the technologies that are going to be absolutely critical in the next five to ten years.

What can we expect next from the OPC Foundation?

Peter Lutz: This is a framework; we are continuously improving our specs. One key technology is certainly TSN, because it provides us the deterministic transport, and it is also key for IT/OT convergence. We are also working on cloud connectivity. Overall, we have a big framework and multiple working groups, so we are continuously improving and updating to the needs of the industry.

Where do you expect industrial communications will go from here?

David McCall: Peter made the point that OPC has really succeeded in making itself the de facto future for the industry. The only question now is how quickly that transition is going to happen. Right now we see that all of the major vendors are looking to support their own existing protocols, plus OPC UA FX. I think OPC UA FX will gradually become the lingua franca of IoT right down at the control level. All the major cloud vendors are standardizing on it. And then—particularly in greenfield sites and then gradually more and more across brownfield sites—it’s going to become the de facto communication protocol.

Stefan Schönegger: From the point of view of factory operators or equipment producers, if you just want to focus on the efficiency and output, or maybe on the adaptability of your factory and production line, and you’re asking: “How can I connect one device with another from a different vendor?” OPC UA is the answer.

Peter Lutz: I’m absolutely convinced that OPC UA—especially with the extensions we are currently working on for field level—will become the dominant industrial communication standard. There are two aspects to this. On the one hand, I believe that with the framework, with the strict layering, with the flexibility, and with all the features that OPC UA is providing, it’s a future-proof solution from the technical perspective. On the other hand, it’s becoming the standard because of its broad acceptance. And I think this is finally the success formula for a broad adoption of OPC UA across all different levels.

Bernhard Eschermann: What we should think much more about is how to get value out of the wealth of data available in the various factories and plants around the world in order to improve efficiency. I’m convinced that the world will benefit a lot, not just from the communication standard itself, but also from being able to make much more use of the data that is already available.

And a final thought: There are lots of places where competition makes sense—how to use data to create useful insights, for example. But competition doesn’t make sense in developing communication standards that are all doing the same thing.

Related Content

To learn more about OPC UA, listen to our podcast on Uniting Industrial Communications with Open Standards. For the latest innovations from ABB, B&R, Intel®, and OPC Foundation, follow them on Twitter at @ABBgroupnews, @BR_Automation, @Inteliot, and the @OPCFoundation; and LinkedIn at ABB, B&R Industrial Automation, Intel Internet of Things, and OPC Foundation.

 

This article was edited by Erin Noble, copy editor.

Why Software Is Playing Catchup to Edge Computing Hardware

The embedded computing industry continues to cycle between having the hardware lead versus having the software lead. Members on each side of the engineering wall take a turn, then wait for their counterparts to catch up in terms of feature sets.

Right now, the hardware is ahead with CPUs like 12th Gen Intel® Core processors, which provide the infrastructure to optimize different edge and enterprise workloads on the same system thanks to a mix of Performance and Efficiency cores. Especially when you pair these processors with embedded hardware that can accommodate higher performance and higher thermal design power (TDP) devices like PICMG COM-HPC modules, multiple software workloads can be consolidated onto a single hardware target.

Now it’s time for the software foundation to help scale that advanced hardware. And that advance can really come only in the form of a hypervisor.

Latest Core Processor Enables Workload Consolidation

Basically, workload consolidation unites multiple operations onto fewer platforms, thereby optimizing operations and making the platforms more scalable. This usually means that deterministic applications run on one core while enterprise or AI workloads run on others. That separation helps maximize performance while also keeping the critical assets more secure. It also provides higher reliability, as fewer components mean fewer points of failure, while reducing the total system cost.

But note that original equipment manufacturers (OEMs) and systems integrators must employ software stacks in a way that capitalizes on the benefits of workload consolidation, which means maximizing system partitions. The amount of partitioning is typically based on the needs of the end application, including the use of a hypervisor, which builds on a microprocessor’s multi-core architecture. In particular, the Real-Time Hypervisor from Real-Time Systems (RTS), tuned for Intel Atom®, Intel® Core, and Intel® Xeon® processors, enables workload consolidation in edge environments on boards like congatec COM-HPC modules.

“The RTS Hypervisor can be easily configured to exactly meet your system requirements,” according to Christian Eder, Director of Product Marketing for congatec. “With the configuration file, which is basically an easy-to-generate text file, you can precisely assign computer resources and operating systems (OS) to different CPU cores and specify your preferences for the runtime environment. It is used as an input for the boot loader to partition the system into multiple systems.”
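The RTS configuration file Eder describes is specific to the product, but the underlying idea of dedicating a core to a time-critical workload can be sketched from the operating-system side. The following Linux-only Python snippet is purely illustrative; the core number and priority are assumptions, and it is not a substitute for hypervisor-level partitioning.

import os

RT_CORE = 3  # assumption: core 3 is set aside for the time-critical task

# Restrict this process to the reserved core (Linux only).
os.sched_setaffinity(0, {RT_CORE})

# Request a real-time FIFO scheduling policy; this needs elevated privileges.
try:
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
except PermissionError:
    print("Need CAP_SYS_NICE/root to apply SCHED_FIFO; continuing best-effort")

def control_loop():
    # Placeholder for the deterministic workload now isolated on RT_CORE.
    pass

control_loop()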

Combining #Intel’s 12th Gen Core processors with a strategy that incorporates workload consolidation results in systems equipped with true #IoT hyperconvergence for mission-critical #edge and enterprise applications. @congatecAG via @insightdottech

The RTS Hypervisor supports virtualization technologies like Intel® Virtualization Technology (Intel® VT-x) and Intel® Virtualization Technology for Directed I/O (Intel® VT-d) for other devices, as well. It assures hard, uncompromised, real-time performance for the real-time OS that is running in parallel to other operating systems without interfering with time-sensitive functions and without adding any latency. For custom applications, RTS works directly with its customers to adapt the hypervisor for specific requirements. That could include deterministic solutions with multiple OSs.

Core-Specific Workloads on COM-HPC

An example of a COM-HPC module that can take advantage of workload consolidation is the conga-HPC/cALS, which is suited for industrial environments (Figure 1). It boasts up to 16 processor cores, a maximum memory footprint of 128 GB, up to 2x 2.5 GbE connectivity, and support for time-sensitive networking (TSN).

(Image: the congatec conga-HPC/cALS COM-HPC module)
Figure 1. The congatec conga-HPC/cALS supports up to 16 processor cores and 128 GB of memory for workload-consolidated deployments. (Source: congatec)

The onboard 12th Gen Intel® Core processor ups the performance over previous generations using PCIe Gen 5 on the recently announced COM-HPC platform, which opens the door to faster graphics and advanced AI functionality. AI acceleration is made possible (and simplified) through the use of the Intel® Distribution of OpenVINO Toolkit and can be realized in a workload-consolidated system because of the strict management and separation provided by Intel processors and a real-time hypervisor like the one from RTS.
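As a rough illustration, the snippet below runs a single inference through the OpenVINO Runtime Python API. The model path and the 1x3x224x224 input shape are placeholders; any IR model exported for OpenVINO could be substituted, and device selection is left to the runtime's device list.

import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU']

model = core.read_model("model.xml")        # placeholder IR model
compiled = core.compile_model(model, "CPU")

# Dummy input matching an assumed 1x3x224x224 layout.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print("Top class:", int(np.argmax(result)))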

There are some potential bottlenecks that could throttle back performance, but the platform addresses them. For example, Ethernet technology can now scale to 100 Gbits or more, and COM-HPC can handle that with its Server-type modules. Along these same lines, TSN simplifies a design as it slices up the bandwidth and reduces the number of required cables, aiding in the design’s real-time communication, which is vital in many automation and robotics use cases.

In many applications, real-time communication is executed over a 5G medium. Enhanced with network slicing, a platform can operate wirelessly in Industry 4.0 real-time deployments. Regardless, the networking stack will need to be managed by an operating system, and that operating system cannot be subjected to inter-process disturbances or resource constraints if controlling something like a robotic automation system. In these cases, the combination of 12th Gen processors with hardware virtualization support and the RTS Hypervisor ensures that ample system resources are available to ongoing tasks.

“The latest revision of the Real-Time Hypervisor already takes the Performance and Efficiency cores into account. Designers can define the best-suited core type for their real-time application, for example, define a P-Core for performance-hungry real-time applications or assign the efficient E-Core to the RTOS for smaller workloads. The clock speed should also be fixed in order to always guarantee predictable schedules for real-time operation,” Eder says.

Catching Up the Software Stack

As you can see, combining Intel’s 12th Gen Core processors with a strategy that incorporates workload consolidation results in systems equipped with true IoT hyperconvergence for mission-critical edge and enterprise applications. This even applies to mission-critical automation systems that reduce the risk of equipment causing harm to people or damage to property due to malfunction or incorrect operation.

“Of course, having one system instead of multiple systems also helps in terms of reliability; just think about MTBF,” Eder says. “The more components you have, the more components can fail. If it’s two individual systems, the MTBF is half of having just one system.”
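Eder's rule of thumb checks out with a line of arithmetic. Assuming two independent, identical units that must both work (a simple series-reliability model), their failure rates add, which halves the effective MTBF compared with consolidating onto a single system:

mtbf_single = 100_000                  # hours; illustrative value, not a datasheet figure
failure_rate = 1 / mtbf_single
mtbf_two_units = 1 / (2 * failure_rate)
print(mtbf_two_units)                  # 50000.0 -> half the MTBF of one consolidated system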

The 12th Gen Intel Core processor allows for efficient workload handling, meaning that dynamic clocking and core assignment can be employed. Any real-time threads are hosted on a separated virtual machine on a core with a fixed frequency as required by the application, while the less mission-critical non-real-time threads can be handled on an “as needed” basis.

By managing this with technologies like Intel Thread Director and a rigid hypervisor, developers looking to make the most out of their systems with workload consolidation can do things like more effectively manage system power while providing full real-time capabilities. By maximizing the capabilities of modern hardware, the software team may have finally caught up.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Powers Auto Parts Factory Automation

With more than 300 million vehicles on the road, China is the world’s largest automotive market—a market that will only continue to grow thanks to the country’s booming electric vehicle sector. For auto parts manufacturers, this rapid growth presents a tremendous opportunity: a chance to carve out market share in an increasingly crowded competitive landscape.

But it’s surprisingly difficult for manufacturers to increase factory automation and production capacity. This is because they often rely on labor-intensive production processes, making it difficult to boost capacity without increasing headcount. The result is a frustrating capacity bottleneck that hampers growth and jeopardizes companies’ ability to gain competitive advantage.

Without Factory Automation, Manual Processes Don’t Scale

The manufacture of wheel hubs is a representative example of this phenomenon.

The main issue is that dynamic balancing inspection, an important quality assurance test, is specific to each wheel model. Wheel hubs from the production line must therefore be sorted before proceeding to the inspection area. But traditionally, this step in the manufacturing process is entirely manual.

Clearly, that’s not efficient. And because new wheel hub models are introduced frequently, companies are forced to retrain workers continually so that they can recognize new products.

“Manufacturers have attempted to automate wheel hub sorting and segmentation. But results have been mixed,” says Qinggaoe, Industrial Control and Vision R&D Director at Xinje Electric, a maker of AI manufacturing solutions for the automotive industry. “Solutions based on earlier machine vision applications have trouble recognizing wheels with complex structures—or differentiating between wheels with similar structures.”

But recent advances in AI and edge computing, together with next-generation processors, enable factory automation solutions that will help manufacturers operate more efficiently—reducing costs and increasing profits.

Recent advances in #AI and #EdgeComputing, together with next-gen processors, enable factory automation solutions that help #manufacturers operate more efficiently—reducing costs and increasing profits. Wuxi Xinje Electric via @insightdottech

Manufacturing Process Automation for Sorting and Segmentation

Xinje’s AI-based Wheel Hub Sorting and Segmentation solution is one example of how this works in practice.

The solution is implemented on the production line itself. Finished wheel hubs reach an initial inspection point, where an image acquisition device detects each hub and transmits the data to a nearby edge server for processing.

A computer vision application sorts the wheel hubs by type and marks them accordingly. The AI model is based on advanced deep-learning technology, which allows it to achieve a degree of accuracy far superior to older machine vision solutions. Moreover, because processing is carried out on the edge, network latency is greatly reduced, which speeds up inferencing.

Hubs are then moved down the line for segmentation—also handled by computer vision inferencing on the edge—so that they can be sent to the appropriate testing area for dynamic balancing inspection.
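A sketch of that capture-classify-route loop might look like the following. The camera index, model file, hub-type labels, and station mapping are all hypothetical stand-ins; Xinje’s production pipeline is not public, so this only illustrates the shape of an edge-inference sorting step.

import cv2
import numpy as np
from openvino.runtime import Core

LABELS = ["hub_type_a", "hub_type_b", "hub_type_c"]             # placeholder classes
STATIONS = {"hub_type_a": 1, "hub_type_b": 2, "hub_type_c": 3}  # placeholder routing

core = Core()
compiled = core.compile_model(core.read_model("hub_classifier.xml"), "CPU")
output = compiled.output(0)

cap = cv2.VideoCapture(0)  # placeholder image-acquisition device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess to the model's assumed 224x224, channels-first input layout.
    blob = cv2.resize(frame, (224, 224)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    scores = compiled([blob])[output]
    hub_type = LABELS[int(np.argmax(scores))]
    print(f"Detected {hub_type}; route to balancing station {STATIONS[hub_type]}")
cap.release()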

The solution represents a fully automated version of the traditional sorting and segmentation process. This accomplishment was facilitated by Xinje’s technology partnership with Intel, Qinggaoe says. “Intel processors are very well suited to computer vision and edge computing tasks, and their software development tools are a great help when building and training AI models.”

Xinje leverages several different Intel technologies in its solution:

  • 11th Gen Intel® Core processors offer a strong foundation for high-performance processing—especially where graphical and AI computing tasks are required.
  • The built-in Intel® Gaussian Neural Accelerator 2.0 supports AI applications and offers inferencing and training support for deep-learning models.
  • The Intel® OpenVINO Toolkit and Intel® oneAPI libraries help simplify development of computer vision applications and facilitate acceleration and optimization of AI models.

AI Factory Automation: A Case Study

Xinje’s implementation at a wheel hub manufacturer in China is a case in point. The customer was looking to increase production to meet soaring demand. But like many other manufacturers, it still relied on manual wheel sorting, creating a capacity bottleneck.

Hiring more workers wasn’t feasible—and not just because of the high cost. “Even if you have the labor budget, hiring isn’t easy in China at the moment,” explains Qinggaoe. “The manufacturing sector here is experiencing severe labor shortages that threaten productivity and profitability.”

Deploying the Xinje solution led to great results. Wheel hub classification became up to 18 times more efficient, helping the company to dramatically increase its production capacity. And the manufacturer was actually able to reduce headcount: cutting the number of workers required to supervise production resulted in labor cost savings of approximately 75 percent.

Best of all, there was no need to sacrifice quality for speed, because the AI model reliably achieved an accuracy rate of 99 percent.

A Smarter, Safer Future

Automating inefficient processes will help China’s auto industry meet growing demand from domestic and overseas buyers.

But the benefits of AI factory automation won’t be limited to one industry, says Qinggaoe: “There are many potential applications for AI in manufacturing. Our solution, for example, supports material classification, testing, and the manufacture of computers and consumer electronics.”

In addition to benefiting companies and consumers, AI factory automation will help workers as well. Qinggaoe says, “AI complements and extends human intelligence—and it frees workers from the need to perform repetitive and dangerous tasks in the factory.”

By improving productivity, profitability, and employee health and safety, AI will drive the digital transformation of manufacturing around the world.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AR Guided Surgery Transforms the OR: With Novarad

Healthcare is in the midst of a technological revolution. Just look at the past couple of years as telehealth, remote monitoring, and digital data access have taken patient care to previously unimagined levels. And now augmented reality (AR) promises to further transform the healthcare experience by making it possible to simulate diseases, visualize surgeries, and strengthen both diagnoses and treatment.

In this podcast, we dive into the promise of AR, and examine how deploying this interactive technology is a win-win-win for the healthcare industry. Specifically, we learn how AR helps put patients at ease by keeping them informed and empowered. We also hear how AR improves and streamlines physicians’ training. And we detail how AR guided surgery makes a huge difference for surgeons and physicians in the operating room.

Listen Here


Apple Podcasts      Spotify      Google Podcasts      Amazon Music

Our Guest: Novarad

Our guests this episode are Shae Fetters, Vice President of Sales at Novarad, a provider of advanced healthcare solutions; and Dr. Michael Karsy, Assistant Professor of Neurosurgery, Neuro-oncology, and Skull Base at the University of Utah.

At Novarad, Shae works to help physicians feel comfortable using advanced technology like its augmented reality surgical guidance solution VisAR, and provides additional support as needed.

At the University of Utah, Dr. Karsy helps train the next generation of physicians and works with partners like Novarad to introduce new solutions for both physicians and patients.

Podcast Topics

Shae and Dr. Karsy answer our questions about:

  • (2:51) How augmented reality transforms the healthcare space
  • (5:52) Introducing AR in operating rooms
  • (9:47) Educating staff and patients on the use of AR technology
  • (12:10) Hardware and technology components of AR-guided surgery
  • (16:27) Partnerships enabling this technology in hospitals
  • (21:01) AR compared to traditional OR methods
  • (23:05) The future of AR in healthcare
  • (24:12) Additional opportunities for AR in the future

Related Content

To learn more about augmented reality in healthcare, read AR and AI Change the Game in Medical Imaging. For the latest innovations from Novarad, follow them on Twitter at @NovaradCorp and on LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Associate Editorial Director of insight.tech. And today we’ll be talking about how augmented reality is transforming the healthcare space with Shae Fetters from Novarad and Dr. Michael Karsy from the University of Utah. But before we dive into our topic, let’s get to know our guests. Shae, welcome to the podcast. What can you tell us about yourself and your role at Novarad?

Shae Fetters: Hi, thanks for having me. Currently I’m VP of Sales over the Western United States, but really I work across the whole nation, and internationally as well, helping physicians like Dr. Karsy feel comfortable using this technology and providing support to them as needed. So, that’s a major part of what we do here, is helping these doctors be comfortable with this new technology and helping it benefit their patients and results.

Christina Cardoza: Absolutely, and of course the conversation is just strengthened by having Dr. Karsy here so that we not only will learn about what this technology is and how it’s being implemented in hospitals, but we can actually learn about the benefits directly from doctors. So, Dr. Karsy, welcome to the show, and please tell us more about yourself.

Dr. Karsy: Thank you so much for having me, Christina and Shae. So, my name’s Mike Karsy, I’m an Assistant Professor of Neurosurgery, Neuro-Oncology, and Skull Base at the University of Utah, and got really interested in augmented reality as I was looking at different imaging modalities for surgical treatment, and heard about Novarad, which is just an incredible company, homegrown out here in Utah, out in American Fork. And we’ve been collaborating together on various research projects, and sort of moved augmented reality towards the clinical space and found where the applications lie and tried to develop the technology together. And it’s been just a great partnership with the company, and to learn from them and be able to help us get implemented for treating our patients.

Christina Cardoza: Can’t wait to hear more about that. I think this is going to be an exciting conversation. Because we’ve seen over the last couple of years huge transformations in the healthcare space already, to mainstream telehealth adoption and remote monitoring. But advanced technologies like augmented reality are enabling even more opportunities, and it’s benefiting both patients and providers. So, Shae, I’m wondering if you could set the stage for us and kick off the conversation. How is augmented reality transforming this space, as well as what are some of those new opportunities now available with this technology?

Shae Fetters: I think we’re just really on the forefront of this evolution. We really don’t understand the full depth of what the possibilities are at this point. Dr. Karsy has a vision of how this can be used; we’ve got a different vision as well that helps merge everything and make it possible.

Currently we’re looking at—we take medical imaging—simple X-rays and CT scans and MRIs that patients get that doctors use to help diagnose what’s going on. And we’ve been using that technology for a lot of years at Novarad. We’re now taking that and merging that with AR in order to let those doctors, those surgeons and proceduralists, have that technology at their fingertips doing these procedures. We’re not looking to replace what they do, we’re just looking to give them little bits of extra information to make those procedures go smoother and more seamlessly, to get better results for those patients and get them back to their lives quicker. So really we’re looking at it in huge—a lot of different realms—whether it’s a highly complex surgery, whether it’s something routine that would be made safer by having that imaging on hand and ready to use.

We look at it also as patient education. We find that a lot of patients, they’re still—they’re worried about what’s happening during a procedure or what the doctor’s trying to explain to them. And I think sometimes there’s this disconnect, and doctors do the best they can to try to explain what’s in their head and what they understand of this world of our complex bodies to the patients. And augmented reality helps give the patients a vision and see exactly what it is that surgeons are talking about, and it helps them feel more comfortable and more part of their care. And that helps get better results all around if patients are understanding those things and feel comfortable with that.

We also see it in training new physicians, both in surgery and their clinics, streamlining that. And we’ve done some studies with Dr. Karsy that show augmented reality in training—it substantially increases the effectiveness and the quickness of the learning. And that’s really just really touching the surface of what the possibilities for AR are in the healthcare space currently.

Christina Cardoza: Yeah, it’s amazing to hear, because when I think of augmented reality, it’s typically consumer-focused or gaming and, of course, technology has just advanced so much and it’s bringing all these new possibilities, like you just mentioned. And Dr. Karsy, you mentioned this a little bit in your intro, but I would love to learn more about how you heard about the use of augmented reality in operating rooms. What made you come to Novarad, and what are the benefits you’ve been seeing so far?

Dr. Karsy: Yeah, so, augmented reality is pretty new and a lot of times hard to even just explain what it is to a patient or another physician even—what is it that you’re actually seeing? And image guidance in neurosurgery has been around for quite a long time. It started at a very rudimentary level and it’s essentially now standard of care, where we constantly use a guidance system to allow patient DICOM images, MRIs, or CT scans to be used intraoperatively to help with surgery.

And so, as I was hearing about augmented reality, almost to see if it was—where it was at in the healthcare space—you hear about it in gaming and in commerce and other industries—as I was looking around the healthcare space I came across Novarad and reached out to them. And being in Utah as well, I thought it’d be interesting to work with this with the group. And they were way ahead of everybody else that I could see. I mean, they had a technology that was FDA cleared for preoperative planning, as well as intraoperative use for spine instrumentation. We worked on a number of cadaver projects; we published a paper in The Spine Journal on accuracy of medical instrumentation with augmented reality. And then we published another paper looking at accuracy of ventriculostomy drain placement in the head using augmented reality, and just had some really impressive results from both of those papers and our discussions with other physicians.

And so, for me, I thought this could be kind of the next step in imaging for patient care. I mean, I could see already the benefits of navigation for surgical care in spine in making things minimally invasive, as well as in cranial applications where you can really hone in on complex cranial anatomy, and I thought that this is just going to be the next step.

And I think Shae alluded to something which was quite interesting is that most surgeons as they get more developed, they develop essentially a mind’s eye kind of X-ray vision of patient anatomy and what they expect to see in surgery. But that takes years to develop—you know our residency training is seven years long. If you do additional fellowships, that’s an additional year. That’s just neurosurgery. And if you think about every other surgical specialty has all this other anatomy and nuanced instrumentation and procedures to learn, that each field becomes more and more complex. And what I thought with augmented reality was you could essentially take these holograms and DICOM images, exactly what you see in two-dimensional space, and apply it directly to a patient right in front of you.

And so we’ve shown patients their images, we have used it for presurgical planning and trying to kind of manipulate the hologram in a position that a patient would be positioned in surgery and kind of see if your mind’s eye is able to better recognize the features you’re looking for. And we’ve also used it in surgery to help identify lesions. And what Shae mentioned is that we really are at the beginning of how augmented reality is going to be used in healthcare. Like we don’t know the best applications for it, we don’t know exactly where it adds the specific value for specific cases because this technology is so new. All those things are yet to be discovered, and it’s really, really exciting to be able to work on it and see where it’s going to go.

Christina Cardoza: I’m sure, and I’m sure it’s also important to have a partner like Novarad that, when you think about these new applications and possibilities, having a partner that you can work with to really make those dreams a reality. And so you mentioned that sometimes it’s even hard to describe this to patients and let them understand what it is you’re actually doing and how it works. So I’m wondering, how do you get patients as well as your staff comfortable with using this technology? How do you implement it in the current learning or operating structure you have?

Dr. Karsy: Yeah, I mean you’re basically asking how does it fit into the current workflow. It’s developed. Basically my partnership with Novarad has been over a year, and we can see this technology continuously change and develop in how—basically ease of use, the verbal commands you use to use the system and be able to record videos that you could just show patients and kind of demonstrate things that way. There’s a lot of features that have been added over the last year that I’ve been working with them.

In terms of their clinical workflow, the Novarad company has the—it’s basically a cloud-based system. So, there’s software that they have allowed me to install here at the U to be able to upload images to their cloud-based system, generate these hologram images, and then you can download them directly to our headset just by Wi-Fi—like you don’t actually need a terminal system or computer sitting at your site, you basically just need internet access and you can get this thing up and running.

So that’s been kind of exciting, and I’d say our staff is definitely not yet comfortable with this. I think it’s so new that they don’t—we basically—I set it up for them, and if a surgeon’s telling you that they’re setting up imaging, that means that it’s going to be easy to set up, because surgeons are notoriously short of patience and don’t want to work on getting things set up. But if I’m setting it up and able to do it, and I basically do that and show our staff like, “This is what the hologram looks like. This is what it looks like on a patient.” This is kind of the first time many of them, many of our staff have worked on patients for decades, and they can finally kind of see in three dimensions sometimes what these tumors will look like, and it’s kind of incredible what they—we’ve always had our staff kind of be wowed whenever it’s the first time they see what augmented reality actually looks like.

Christina Cardoza: Yeah, and it sounds like it may be a little less invasive to the patients also. So, I want to nail down a little bit into the technology components that actually go into making this possible. Dr. Karsy mentioned cloud-based software, among other things. So, Shae, I’m wondering if you can expand on what are some of the hardware or technology components that go into this, and how Novarad is making AR-guided surgery possible.

Shae Fetters: Really it comes with a core foundation of the company. The company as a whole has been dealing with medical images and providing these picture archival systems for hospitals for about 30-plus years. So that’s been a huge part of the foundation of knowing how to use these images in the first place. We’ve been doing that on computers with doctors for a lot of years. We’ve been able to change what the monitor is now for these doctors, where we’ve been able to condense it down to these headsets.

That’s been a really tricky task that’s come with a lot of trial and error over the last—we’ve been working on this particular project for about seven to eight years now. Part of that has come from some of the amazing talent that we have in house—some of the computer programmers, the AI developers, and things like that. Some of it’s come from just the vision of the CEO of our company; he’s done an amazing job as well. Being able to condense it down so that all that you need in the operating room is this headset that Microsoft makes has been pretty amazing.

Really it has come from years of working with that technology. Like Dr. Karsy talked about, all you really need once you get into your surgery is a Wi-Fi connection. We’ve got it to the point where you can look at a simple QR code, and it downloads your patient’s study onto the headset in about 20 seconds or so. And then to link the technology, to link that to the patient—if that patient has special codes put on them with a scan—again, that whole process to download, to calibrate that lens, to let it learn the space that it’s working in that day, and then match the images to the patient’s anatomy accurately takes a total of about two and a half minutes at this point.

And Dr. Karsy talked about this a year ago—that was not the case. We’ve continued to develop with a great partnership with him on figuring out better workflows so that this isn’t as scary and complex for surgeons and different hospital staff, but trying to get the right mix of flow so that everybody’s comfortable using these. And so it’s not a big deal when the doctor says partway through a case or before a case, “Hey, grab my headset, we’re going to guide this with VisAR.”

So that’s been really great to continue those things, and we’re continuing to refine that. It’s not quite to the point where it’s absolutely amazing, but it’s pretty mind-blowing at this point that we are able to do that without having multiple pieces of huge equipment in the OR. I think Dr. Karsy talked about that: currently with traditional navigation at this point this becomes standard in the operating room. It actually takes several pieces of equipment to put into the OR suite. And those OR rooms, they seem to become smaller and smaller with all the equipment that needs to be packed in in order to provide the right care for these patients.

Christina Cardoza: So, you mentioned VisAR and that, of course, is the Novarad solution that Dr. Karsy and his team are leveraging to make this possible. And you also mentioned working with Microsoft and this headset; the power of the partnership with Dr. Karsy and his team; as well as maybe some of the other equipment and manufacturers you’re working with. You know, the power of partnerships is something big on insight.tech. We’re seeing that to make all of these things that we envision for the future possible it really takes the partnerships that you’re talking about. And I should mention insight.tech and the IoT Chat as a whole, we are Intel® publications. And so I’m wondering if you can tell me a little bit more about how you work with partners like Intel as well as Microsoft, and what really is the value from Novarad’s view working with them?

Shae Fetters: I mean, it’s huge for us to be able to do everything in house to develop the complex—really the power of the microchips and the power of the HoloLens—that whole hardware end of things, there’s no way Novarad as a company—we’re only about 150 people right now; we’re a fairly small company by industry standards, which is pretty amazing that we’ve been able to produce this type of technology with that, but that’s only been possible through different partnerships.

I mean, Microsoft has an amazing headset that they continue to develop as well. We started with their first version several years ago, the HoloLens 1. And thanks to innovations in that, we’ve been able to do additional things once they released the HoloLens 2. Different possibilities that the first version couldn’t handle. But for us to really spend the resources and the time to develop the headset on top of all the software, I don’t think we could ever keep up with Microsoft. The inside parts of that headset, the chips by Intel, I mean we don’t produce microprocessors here, that’s not what we do. So it’s great to have a collaboration with companies that also have some great innovative minds and some amazing talent.

We do work with other companies because we’re not looking to change what these surgeons do, necessarily. Dr. Karsy might have a certain set of instrumentation that he likes to use in his procedures, whether it’s cranial, where his focus is, or spine, where he also does procedures. We’re not looking to lock surgeons into only what we want them to use. We just don’t think that’s very fair, and it really doesn’t lead to the best patient care overall. And so we try to work with most of these companies that come to us and say, “Hey, we’d love to partner with you so that we have a collaboration.” And that’s been amazing for us.

And really we couldn’t have done it without brilliant minds like Dr. Karsy. And there’s a lot of doctors across the country that are excited about this and share their different ideas. And it’s very much an open and collaborative network that we’ve got that we’re working with. There’s not competition between us, and there’s not competition between the surgeons even; they want what’s best for patient care and moving healthcare forward. It’s not a selfish venture.

Dr. Karsy: I was going to definitely agree with what Shae said. You know, from our perspective working with industry has been a great partnership, as there’s no way one single academic center is going to develop 30 years of experience with PACS and imaging and DICOMs like Novarad has, and then develop augmented reality on top of it.

I think that we’ve, from a clinical standpoint, really focused on what we do well, which is the surgery part and the clinical part and that management, and partnering with the group that really has expertise in radiology and augmented reality—that’s kind of the way to innovate. I think we’ve had really good success by doing it that way.

Christina Cardoza: And of course surgery is such a delicate process, I can see the importance of working with big technology giants like Microsoft and Intel, making sure that the technology that goes into this, the performance is fast and optimized and you’re getting all this information in real time because that’s really going to count. And I just know that Intel chips and processors are getting better and faster every year.

And so, Dr. Karsy, we mentioned a couple of other different transformations in the introduction, like telehealth and remote monitoring. Of course this does not compare to the augmented reality portion that you can bring into surgery and operations, but I’m wondering how this technology compares to some of the advancements that you’re seeing in the healthcare space, or how it even compares to the traditional way of things that you guys have done.

Dr. Karsy: Yeah, thank you. I mean, I think what’s interesting is the technology as it develops—and we talked about the Intel chips—and as it becomes stronger, better processing speed, and is able to handle more operations, it generates newer technology that can do more. Well, that’s kind of what we’re seeing in healthcare, is as soon as we have this new technology—a better HoloLens, a better augmented reality system, software that works faster—it changes everything you can do. You have now the ability to go back to the procedures that we used to do the same old way for decades, and you can now do them differently; you can do them minimally invasively with smaller incisions; you can get more accurate.

I think AR overall, if you look at the different spaces, and I’m by no means an expert in AR, but when I’m looking at different spaces like gaming or commerce or healthcare, healthcare has this incredible impact right away. And it’s been using images for decades; it’s like the perfect setup for seeing this technology be implemented, it really is. The other technologies in healthcare, at least in the neurosurgical space, really focus around becoming less invasive. So, they have to do with spine instrumentation that you can do minimally invasively, technology to minimize the size of craniotomies or even eliminate craniotomies.

If there is a day that I didn’t have to do another brain surgery that would be a great day, because it would mean that we have developed enough medical therapy and radiation, and other things that you can avoid needing surgery, and it will take a long time to do that. But that’s the ultimate goal of this, is to reduce the harm to patients as much as possible, and technology drives the way forward, one hundred percent.

Christina Cardoza: And of course we mentioned that this is only scratching the surface; this is just the beginning of what this technology can do and what we hope it will do in the future. So, I’m wondering what hopes do you have for AR-guided surgery or advancements in technology in the healthcare space in the future and going forward?

Dr. Karsy: From my standpoint I think that imaging technology has been widely used in neurosurgery and we can see how navigation has changed the way we do things. And a lot of times when we collaborate with other surgeons and they come into our wards and they see our navigation systems, they’re wowed by it because such a thing doesn’t exist in many of these other surgical fields. It hasn’t—we haven’t had the ability to do that. I think AR will change all that.

I think you’re going to start to see image guidance and minimally invasive procedures enter into other surgical subspecialties and spaces in healthcare that it didn’t exist before because we just didn’t have a way or need to do it. And now we have a technology that can potentially aid those other fields. So I think that will change everything, lead to new procedures, techniques, all sorts of things. And this will be a many-decade kind of development; it’s not going to be all at once, but that’s what I see happening.

Christina Cardoza: Of course, and Shae, how do you hope VisAR and Novarad in general will be a part of that future, and how will you continue to bring more opportunities and possibilities to the healthcare space?

Shae Fetters: Yeah, we’ve got a similar vision to Dr. Karsy in that. We think of the new technologies that come about in healthcare and how it changes things. I mean, I just think a lot, just alone of X-ray and CT and MRI, and how we can put people in this little machine for a couple minutes and then we can tell from these black and white pictures what’s going on inside them without needing to open them up and look at those things physically. I mean, that was a pretty revolutionary thing, much less invasive.

We see AR as a similar move, part of this new wave in how we treat patients and in the information available to surgeons and patients alike, so that there's greater understanding. And Dr. Karsy will probably speak to this as well: there's a huge collaboration between surgeons and their patients that we don't really talk about much, but the patient ultimately has as much responsibility for their care as the doctor does. A patient understands more when it's not this foreign black-and-white MRI image; honestly, I don't remember how many years it took me to learn how to read CTs and MRIs, because it's such a foreign language. AR makes it much more comprehensible, much more understandable, not only for patients but also for new physicians coming out of training.

We really do see this, in the next 10 years or so, becoming a standard of care, where every surgeon, every doctor has their own headset and it's a core part of how they treat patients and how they interact with them.

Christina Cardoza: Yeah, that's a great point. Making this technology, and healthcare in general, more accessible and consumable for patients is going to go a long way toward helping it really take off and achieve mainstream adoption. Unfortunately, we are running out of time today, but this has been a great conversation. Before we go, I just want to throw it back to each of you for any final thoughts or key takeaways you want to leave our listeners with today. So, Dr. Karsy, I'll start with you.

Dr. Karsy: Thank you again, Christina and Shae, for the opportunity to speak with you both. I'd say with AR technology, it's one of those things where the more you learn about it, the more you start to envision where else it could be applied. And I think in healthcare that's really exciting, because there's still so much that's unknown. We're learning so much every day in every field. And to have new technology that allows you to continue to come up with new discoveries and implement them and, as Shae said, teach our patients better and make it more intuitive, that's great. I think that's the way things should be going forward, and I'm really excited to see this continue to develop over my career.

Christina Cardoza: Absolutely. And Shae, any final thoughts or key takeaways you want to leave the attendees with today?

Shae Fetters: Yeah, I just think this really is an amazing technology that we've come across, and in trying to explain it to people, it's difficult to find the right words. I always watch their faces when I put a headset on somebody's head, when I talk to new doctors who have never used this technology at all. When I walk them through some of its basic functions, every time, within 30 seconds to a minute, I see their eyes open wide and they understand; they start to see the possibilities out there. So I encourage any of these physicians, any of these healthcare systems that are looking to improve their patient care, to reach out. It's worth a conversation on how we can collaborate to provide better care for all the patients in our communities.

Christina Cardoza: It's great seeing how this technology is being used today, and I can't wait to see what else you guys come up with in the future. I just want to thank you both again for the insightful conversation and for joining the podcast today. And thanks to our listeners for tuning in. If you liked this episode, please like, subscribe, rate, review, all of the above on your favorite streaming platform. And, until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Smart Retail Solutions Transform the Shopping Experience

It’s Friday night, and you’re invited to a friend’s house for dinner. You want to impress them with a nice bottle of wine, but there are hundreds of options. Imagine if you could walk into a wine store and be virtually transported to the vineyard where the bottle of Cabernet you’re contemplating was produced. You “meet” the vintner and discover that the varietal is a perfect accompaniment to the main course. Confidence in your selection soars, you purchase the bottle, and you’re on your way to a relaxing evening.

A wine chain in Germany is providing customers with this exact type of in-store experience. Founded in 1811, Rotkäppchen-Mumm is the largest producer of sparkling wine in Germany, with stores operating under the brand Ludwig von Kapff.

Its CEO had a clear objective: build the most modern wine store in Germany and do it with “invisible” technology that takes shoppers on a journey. The company created a unique interactive shopping experience by deploying the Digital Retail Media solution from digimago GmbH, a smart-signage solutions manufacturer.

“Wine store operators can give customers a small glass to taste, but what if they could bring the winery to them?” asks Andre Bartscher, CEO of digimago. “Wine is all about storytelling—stories about the winery and the makers. People who produce wine are often strong and interesting characters.”

Digital Signage Transports Wine Store Customers to the Vineyard

Customers who shop at Ludwig von Kapff will find digital signs running promotional content inside the store. Real-time digital signage software drives the output, choosing the best products for the audience, the time of day, and the weather conditions. For example, a crisp Sauvignon Blanc might be promoted on a warm summer evening, while sparkling wine might be featured during the holidays. Digital signage is also synced with POS data and retail analytics: if a product is low in inventory or out of stock, for example, it is no longer advertised on the screen.
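To make that selection logic concrete, here is a minimal, hypothetical Python sketch of how a signage engine might pick a promotion from POS-backed inventory plus time-of-day and weather context. The product fields, thresholds, and scoring rules are illustrative assumptions, not digimago's actual implementation.

# Hypothetical sketch of context-aware promotion selection.
# Field names, thresholds, and scoring rules are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Product:
    name: str
    style: str   # e.g., "crisp_white", "sparkling", "red"
    stock: int   # live inventory figure synced from the POS system

def pick_promotion(products, now: datetime, temperature_c: float):
    """Return the best product to promote, or None if nothing is in stock."""
    candidates = [p for p in products if p.stock > 0]  # out-of-stock items are never advertised
    if not candidates:
        return None

    def score(p: Product) -> float:
        s = 0.0
        if temperature_c >= 24 and p.style == "crisp_white":
            s += 2.0                     # warm evening: favor a crisp white
        if now.month == 12 and p.style == "sparkling":
            s += 2.0                     # holiday season: favor sparkling wine
        s += min(p.stock, 50) / 50.0     # gently favor well-stocked items
        return s

    return max(candidates, key=score)

# Example: a warm summer evening
catalog = [Product("Sauvignon Blanc", "crisp_white", 40),
           Product("Riesling Sekt", "sparkling", 12),
           Product("Spätburgunder", "red", 0)]
choice = pick_promotion(catalog, datetime(2024, 7, 19, 19, 30), temperature_c=27.0)
print(choice.name if choice else "No promotion available")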

The solution uses near-field object recognition, RFID tags, and an invisible sensor surface to create an interactive experience. If a shopper wants more information about a wine, they can place the bottle near the screen. The passive promotional loop automatically shifts to display information about that specific wine, such as price and tasting notes. Shoppers can learn more about the winery and the maker by tapping on the touchscreen. In addition, customers can place a second and third bottle near the screen to compare wines (Figure 1). And when the bottles are put back on the shelf, the screen automatically returns to the promotional content loop.

Figure 1. Shoppers can learn more about wines by placing bottles near invisible sensors that automatically display interactive content. (Source: Rotkäppchen Mumm Sektkellereien GmbH)
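As a rough illustration of that interaction flow, the hypothetical Python sketch below models a sensor surface that reports which RFID-tagged bottles are currently near the screen and switches the display between the promotional loop, a product-detail view, and a comparison view. The tag IDs, wine data, and display logic are assumptions made for illustration, not digimago's actual API.

# Hypothetical sketch of the place-a-bottle interaction flow.
# Tag IDs, wine data, and the display strings are illustrative assumptions.
WINE_DB = {
    "tag-001": {"name": "Cabernet Sauvignon 2019", "price": "24.90 EUR", "notes": "dark berries, oak"},
    "tag-002": {"name": "Sauvignon Blanc 2022", "price": "13.50 EUR", "notes": "citrus, gooseberry"},
}

def render_display(tags_on_sensor):
    """Decide what the screen shows based on which tagged bottles are present."""
    wines = [WINE_DB[t] for t in tags_on_sensor if t in WINE_DB]
    if not wines:
        return "Promotional content loop"      # no bottle placed: back to the loop
    if len(wines) == 1:
        w = wines[0]
        return f"Detail view: {w['name']} | {w['price']} | {w['notes']}"
    names = ", ".join(w["name"] for w in wines)
    return f"Comparison view: {names}"          # two or three bottles side by side

# Simulated sequence of sensor readings (sets of RFID tags detected near the screen)
for reading in [set(), {"tag-001"}, {"tag-001", "tag-002"}, set()]:
    print(render_display(reading))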


digimago Digital Retail Media runs high-quality graphics that require high-performance computing power. The solution incorporates Intel® technology, including processors and the Intel® OpenVINO™ toolkit.

“The interactive content is not video,” says Bartscher. “It’s really a real-time rendering, and Intel processing power makes it possible.”
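The article doesn't spell out where the OpenVINO toolkit sits in digimago's pipeline, but as a rough, hypothetical sketch of how such a solution might run an AI workload (say, recognizing a bottle in a camera frame) on an Intel CPU with OpenVINO's Python API, it could look something like this. The model file, input shape, and class index are assumptions for illustration only.

# Hypothetical sketch: running a product-recognition model with the OpenVINO
# toolkit on an Intel CPU. The model path, input shape, and class labels are
# illustrative assumptions, not details of digimago's actual pipeline.
# Assumes a recent OpenVINO release (pip install openvino).
import numpy as np
from openvino import Core

core = Core()
model = core.read_model("models/product_recognition.xml")  # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")    # target the Intel CPU

# Placeholder standing in for a preprocessed camera frame of a bottle label
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

result = compiled([frame])[compiled.output(0)]             # run inference
print("Predicted product class index:", int(np.argmax(result)))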

Since it was deployed, the digimago solution has helped Ludwig von Kapff provide outstanding customer experiences while increasing sales. “Visitors describe the atmosphere as being engaging,” says Bartscher. “The technology helps start conversations between the shoppers and the staff. It provides a great ambiance and gets customers in the mood to taste and buy the wine.”

In addition to providing the hardware and software, digimago offers retailers peace of mind with monitoring services. If there is a technical or software problem, digimago receives a notification. In some cases, the situation can be resolved remotely, even before the customer knows there's a problem. The technology is also equipped with AI-powered predictive analytics, providing alerts when maintenance is required and flagging which components will need to be replaced soon.
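How those alerts are generated isn't described, but a minimal, hypothetical sketch of threshold-based maintenance checks on display telemetry might look like the following; the field names, thresholds, and alert text are assumptions, not digimago's predictive models.

# Hypothetical sketch of simple maintenance-alert checks on display telemetry.
# Field names, thresholds, and alert messages are illustrative assumptions.
from statistics import mean

def maintenance_alerts(telemetry):
    """telemetry: list of readings like {'panel_hours': int, 'temp_c': float, 'restarts': int}"""
    alerts = []
    latest = telemetry[-1]
    if latest["panel_hours"] > 30_000:                      # panel nearing its rated lifetime
        alerts.append("Display panel approaching end of rated life; plan a replacement.")
    if mean(r["temp_c"] for r in telemetry[-24:]) > 55.0:   # sustained high temperature
        alerts.append("Sustained high temperature; check fans and ventilation.")
    if sum(r["restarts"] for r in telemetry[-24:]) >= 3:    # repeated unexpected restarts
        alerts.append("Repeated restarts detected; schedule a service visit.")
    return alerts

# Example: hourly readings from a screen whose panel is close to its rated lifetime
readings = [{"panel_hours": 29_990 + h, "temp_c": 52.0, "restarts": 0} for h in range(24)]
for msg in maintenance_alerts(readings):
    print(msg)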

Smart Technology Drives Omnichannel Retail

Rotkäppchen-Mumm's use of smart retail solutions not only engages customers; it has become a central part of its omnichannel strategy. To deliver omnichannel experiences, Bartscher recommends that retailers identify what type of content will be most relevant to their potential customers, both inside and outside of their stores.

“What information could influence them to buy your product?” he asks. “What experience could create a better time in your store for the customer? Think about the types of content and digital assets you already have. Then determine how you can create content for the different resolutions and screen formats.”

To make this possible, digimago’s rendering engine can generate animation and content in real time, using merchandise data as well as content from a company’s website and social media channels.

Bartscher predicts that as digital-signage technology evolves, it will become more closely integrated into retail analytics, continuing to put a strong focus on the customer.

“Studies have found that up to 70% of buying choices are done at the point of sale,” he says. “With digital signage, you can have a good channel to influence customers. Instead of having static content, you can have moving animated visuals that catch attention. When visitors stay longer, they will probably look at more products and buy more things.”


This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.