Digital Tech Gives Financial Institutions a Competitive Edge

Financial institutions (FIs), with their gleaming floors, plush chairs, and coffee stations, are welcoming places, yet there is a sense of change in the air. Tellers stand ready to assist customers as soon as they walk in, but many people now choose to handle financial tasks remotely, depositing paychecks from a mobile phone or applying online for a loan. Others line up at the drive-through, where staffers count out cash through a window.

Influenced by the ease of online shopping, today’s consumers seek more digital-first access points, greater convenience, and better personalization—and that’s making bank executives reconsider the way they deliver services, says Andrew Aceto, Executive Director of Marketing for NCR Banking. NCR Corporation, a leading enterprise technology provider, helps financial organizations transform, connect, and run their business to meet the needs of today’s customers and the bankers who serve them.

As FIs mull their options for change, fintechs have responded with stand-alone services for key financial products including loans, credit cards, mortgages, and investments—all available to consumers without a bank visit.

“Traditional banking services are being disaggregated, and all lines of business are seeing change and new players,” Aceto says. “In the face of these macro forces, financial institutions need to ask, ‘How can I drive profitable growth? How can I deliver an exceptional experience that differentiates my brand?’”

While there is no one-size-fits-all answer, FIs that create the right mix of innovative technologies and in-person services can improve the experience of both customers and employees while making their operations more efficient.

Like #ATMs, ITMs are stand-alone self-service #kiosks, allowing customers to make deposits and withdrawals. But unlike traditional ATMs, ITMs also offer #remote consumer assistance at the touch of a button. @NCRCorporation via @insightdottech

Improving Efficiency and Convenience with Interactive Teller Machines

With modern technology, FIs can give customers the personalized attention they want while deploying staff more effectively. Interactive teller machines (ITMs) are a great example.

Like ATMs, ITMs are stand-alone self-service kiosks, allowing customers to make deposits and withdrawals. But unlike traditional ATMs, ITMs also offer remote consumer assistance at the touch of a button. That’s a great convenience for customers who don’t want to be confined to branch hours—and for tellers, who can serve customers from home or the call center, driving operational efficiency while extending hours of service.

In the traditional model, every branch requires staff to handle everyday transactions for its customers. Using ITMs, a financial institution can serve customers in multiple locations from a central office, allowing branches to shift staff focus to advisory and consultancy services. Powered by enterprise software and Intel processors, personalized ITM interactions help FIs deliver consistent customer service and uphold their brand reputation in remote locations.

“You’ve just gone from a model where many of your transactions require the banker to handle cash, coins, and checks to one where they don’t do it at all,” Aceto says. “They don’t have to balance a cash drawer, so they can open their shift and close at the end of the day quicker. Transactions are much faster, and tellers can have a better, more intimate conversation with the customer because they’re not spending time counting money.”

With the efficiency they gain through ITMs, FIs can transform and modernize their branch network to refocus their workforce and shift the branch format to one that meets the needs of the local consumer. That may be a city center landmark branch offering every service and specialist, or a small digital-only branch in a residential area. Broadening the type of branches can help optimize the branch strategy and enable capital to be invested in transforming other consumer touchpoints with the brand.

Instead of managing routine transactions, in-person staff can engage in more complex interactions and offer new services. For example, a bank in Connecticut is hosting weekly training sessions on QuickBooks to draw local business owners.

“Small business owners have much higher deposits than others and are a very important customer for most banks,” Aceto says.

Simplifying Operations with a Cloud-based Infrastructure

Managing ITM transactions and other digital services in the cloud allows the FI to unify procedures across locations, delivering further efficiencies and streamlining the customer experience.

“Traditionally, FIs build their technology specific to the channel. There’s a budget for ATMs, a budget for branches, a budget for the contact center, etc. With the cloud, you don’t have to build and code services separately—you can develop a service centrally, then connect it to any channel through APIs and microservices,” Aceto explains.

That saves time and money for FIs. And switching to the cloud eliminates the expense and hassle of managing infrastructure. “You don’t have to worry about buying racks, blade servers, storage space, network connectivity, and security. Someone else takes care of that so you can focus on the business,” Aceto says.

For customers, unified cloud-based procedures make interactions with the FI smoother and easier.

“With our platform, a customer can start opening an account online, stop and call the contact center, then go to the branch with their business partner to finalize it. Everyone uses the same application,” Aceto says.
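
To make that concrete, here is a minimal sketch of the pattern Aceto describes: one centrally developed service exposed through an HTTP API that every channel calls. The endpoints, field names, and in-memory store below are hypothetical illustrations, not NCR's actual platform.

  # Hypothetical sketch: one account-opening service shared by all channels.
  import uuid

  from flask import Flask, jsonify, request

  service = Flask(__name__)
  applications = {}  # stand-in for shared, channel-independent state

  @service.post("/applications")
  def start_application():
      # Any channel (web, contact center, branch tablet) starts an application
      app_id = str(uuid.uuid4())
      applications[app_id] = {
          "status": "in_progress",
          "started_via": (request.json or {}).get("channel", "unknown"),
      }
      return jsonify(id=app_id), 201

  @service.post("/applications/<app_id>/finalize")
  def finalize_application(app_id):
      # A different channel can resume and complete the same application
      applications[app_id]["status"] = "complete"
      return jsonify(applications[app_id])

Because every channel reads and writes the same application record through the same endpoints, a customer who starts online can finish at the branch without re-entering anything.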

The Future of Digital Banking Technologies

Transferring existing services to the cloud is just the beginning. With an edge-to-cloud infrastructure, FIs can easily tweak their processes or create new ones as technology and consumer expectations change.

“We have a customer who removed the traditional teller system from their branches, which was a great deal of significant and expensive infrastructure,” Aceto says. “They put tablets in bankers’ hands. Bankers walk around and help people check a balance, do a transfer, make a payment, or change an address.”

In addition to making interactions more personal, tablets are intuitive and reflect the technology people use daily, reducing staff training time. Employees prefer the popular devices, giving the bank a competitive advantage in a time of labor shortages, Aceto says.

Aceto sees ITMs and tablets as part of an endless chain of improvement stretching across banking’s digital and physical realms.

“The future is not digital-only, it’s digital everywhere,” he says. “It’s a consumer-led dialog about reshaping services to deliver a better experience, and at the same time, driving efficiencies.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Level Up Learning with Gamified AI in the Classroom

With AI’s rapid evolution, we’re witnessing a monumental societal shift as it influences our day-to-day lives. In many ways, school-age children experience this the most. So, how can we nurture innovation in the classroom and teach them new skills? How can we equip them with an essential foundation for responsible AI use and provide them with ethical guidelines for developing AI that benefits society?

meldCX, a provider of AI and edge computing solutions, tackles these challenges. “When AI is used appropriately and with the correct strategy, it augments existing resources to assist students in a one-to-one fashion and tailored to individual needs,” says Stephen Borg, Group Chief Executive Officer of meldCX.

But there are barriers to adopting AI, such as concerns about student privacy, limited budgets, and a lack of the skills needed to integrate AI into the classroom. That’s why Borg and his team developed the AI Playground, a cost-effective and safe environment for students to experience and explore AI.

Building AI-Based Solutions in the Classroom

The AI environment is presented to students as a Lego-based competitive game. Students work with brick kits that come in a range of levels—from a beginner pack of six to an advanced 100-piece unit—to create an AI model. A camera guides students on where to place the parts. And to protect their privacy and model ethical use of AI, the camera is positioned to focus only on the students’ hands.

In one example, students created a Mars Rover model. Once it was complete, they could use the playground’s software to “launch” it into space, land it on Mars, and explore surrounding areas (Video 1).

Video 1. With meldCX’s AI Playground, students created a Mars Rover model and launched it into virtual space. (Source: meldCX)

The AI software can stream educational information about the Red Planet, subtly turning the game into a science lesson. Students can also compete against one another in timed events to see who can launch their Rover fastest.

AI Playground was built in collaboration with Intel and the University of South Australia, and leverages meldCX’s vision analytics platform Viana to provide insights into the lesson. In the Mars Rover example, the game uses AI-based object detection programs to recognize the Lego bricks and various stages of the Rover creation process.

Schools can get started with a basic web camera, a display screen, an Xbox controller, Lego bricks, and Intel® Core™ processors.

“When #AI is used appropriately and with the correct strategy, it amplifies existing resources to be able to assist students in a one-to-one fashion, where the pace is tailored to individual needs” – Stephen Borg, meldCX via @insightdottech

Intel® OpenVINO™ Toolkit Optimizes AI Development

The team at meldCX overcame several challenges to design the AI Playground. “The playground needed to detect similar objects of varying size, so we had to create and train models for each individual part,” says Borg. “And our partnership with Intel was pivotal to navigating the development process.”

The Intel® OpenVINO™ toolkit plays a significant role in developing innovative object detection and vision solutions in several ways:

  • Hardware-agnostic design enables developers to choose what best suits their specific application by supporting a wide range of accelerators, including CPUs and GPUs.
  • Optimization tools help developers reduce model size and memory footprint, accelerating inference for deep learning models in object detection and vision solutions.
  • Inference at the edge brings more object detection and vision solution deployment flexibility at the edge, where latency and bandwidth constraints often demand local processing.
  • Multiple front-ends enable integration with popular deep-learning frameworks like TensorFlow and PyTorch. Models trained with these frameworks can take advantage of OpenVINO optimization and deployment capabilities.
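
As a minimal sketch of how those pieces fit together (the model file and input shape here are hypothetical placeholders, not meldCX's production models), a detector exported to ONNX from TensorFlow or PyTorch can be loaded once and retargeted across hardware with a one-line change:

  # Hypothetical sketch: load an ONNX detector and run it with OpenVINO.
  import numpy as np
  import openvino as ov

  core = ov.Core()
  print(core.available_devices)  # e.g., ['CPU', 'GPU']

  model = core.read_model("brick_detector.onnx")           # placeholder model file
  compiled = core.compile_model(model, device_name="CPU")  # swap in "GPU" to retarget

  frame = np.zeros((1, 3, 640, 640), dtype=np.float32)     # dummy camera frame
  results = compiled(frame)                                # dict-like output tensors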

“OpenVINO allows us to deploy multiple concurrent models on a single edge device, optimize those models without compromising performance, and produce smooth, real-time previews of the detections,” says Borg. Its optimization and accelerated inference capabilities also allow solutions like meldCX’s to run models on CPUs instead of expensive GPUs, putting quality AI education solutions within reach of school programs with limited budgets.
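
A sketch of that multi-model pattern, assuming two hypothetical models compiled to the same CPU, might use OpenVINO's asynchronous infer requests so neither model blocks the other:

  # Hypothetical sketch: two models served concurrently on one CPU.
  import numpy as np
  import openvino as ov

  core = ov.Core()
  bricks = core.compile_model(core.read_model("bricks.xml"), "CPU")  # placeholder
  stages = core.compile_model(core.read_model("stages.xml"), "CPU")  # placeholder

  frame = np.zeros((1, 3, 640, 640), dtype=np.float32)

  # Each model gets its own infer request; both start without blocking
  req_bricks = bricks.create_infer_request()
  req_stages = stages.create_infer_request()
  req_bricks.start_async(frame)
  req_stages.start_async(frame)
  req_bricks.wait()
  req_stages.wait()

  brick_boxes = req_bricks.get_output_tensor(0).data
  stage_label = req_stages.get_output_tensor(0).data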

The Future of AI in the Classroom

“Our mission is to participate in the ethical practice of AI and bring it to life. We are committed to use AI as a tool to augment human capabilities,” says Joy Chua, Executive Vice President of Strategy and Development at meldCX. “As we’re moving toward a more digital future, it is important for us to empower our students to be first-class innovators, supported by first-class AI.”

AI Playground has the potential to shape the future of education and build foundational knowledge early on for the next generation of developers. For instance, AI Playground gives students a peek into how AI works while they also learn the subject of the game. “It’s important to expose students to AI and inspire them to see possibilities,” says Raymond Lo, AI Software Evangelist at Intel. “Computer vision is especially exciting because it helps kids make sense of a scene and then detect and extract information.”

And that’s a lesson educators take to heart.

 

This article was originally published April 20, 2023. 

This article was edited by Leila Escandar, Editorial Strategist for insight.tech.

Revolutionizing Cancer Research with Healthcare AI Tools

The medical field has a tough job when it comes to cancer—two tough jobs, actually: treating patients and advancing research that will prevent and combat illness. On top of that, longer lifespans mean an ever-growing stream of patients to treat. Fortunately, advancements in healthcare AI are aiding the evolution of cancer research and treatment, and those advancements are the result of some amazing partnerships between the healthcare and technology industries.

Let’s take a closer look at one of those partnerships, represented by Dr. Johannes Uhlig, Assistant Professor of Radiology and Head of the AI Research Group at UMG (University Medical Center Göttingen) in Germany; and Dr. André Aichert, Research Scientist at the Artificial Intelligence Germany Department of Digital Technology and Innovation at Siemens Healthineers, who are collaborating on a project called Cancer Scout. Together, they’re bringing technology to the clinic and clinical experience back into the tech, for the benefit of us all (Video 1).

Video 1. Inside the challenges and opportunities for healthcare AI with Siemens Healthineers and UMG Göttingen. (Source: insight.tech)

Which advancements in AI have transformed cancer research?

Dr. André Aichert: The impact of AI on the entire healthcare space cannot be overstated. As a company, Siemens Healthineers has been around pretty much since the first X-rays were taken; we’ve been improving things, like detail and number of images, ever since. At the moment, the use of AI gives clinicians the opportunity to deal with the vast amount of images and data available to them, and to get the most out of them. Clinicians who start using AI solutions today are at the state of the art of the technology, and they’ll have all the advantages of that.

For the Cancer Scout project with UMG, we’re analyzing a host of different data sources—not just images—trying to define and detect subtypes of cancer. For the radiology subproject, which is where Dr. Uhlig comes in, the question is: Can we recognize certain subtypes of cancer based on radiology images? If we could, then an invasive biopsy might not be required. In general, we’re trying to optimize workflows, prevent unnecessary invasive procedures, and do data analysis of all sorts.

Dr. Johannes Uhlig: There are several current challenges in clinical cancer imaging. For one, we have an aging population, and so we see an associated increase in healthcare needs. Additionally, radiology imaging has become more broadly available in the last few decades and is more frequently used. For cancer imaging particularly, we face a massive increase in case numbers, and I believe those images can’t all be assessed by radiologists in the traditional way.

For example, for breast cancer screening in Germany, we use X-ray-based mammography. The most recent data, from 2018, reports that 2.8 million women got mammograms, and 97% of those scans were negative. The current setup is that every mammogram has to be independently assessed by two experienced radiologists. But we have literature suggesting that AI-based methods for mammography assessment are at least comparable in diagnostic accuracy to radiologists. So could we use AI algorithms instead of that second radiologist? From both an ethical and an economic standpoint, is it even imperative to do so?

And breast cancer screening is only the tip of the iceberg. For cancer research, I believe AI is the buzzword of the last decade. For example, our research group has been focused on extracting additional information from CT and MRI images to guide clinical decision-making in patients with suspected kidney or prostate cancer. In the Cancer Scout project with Siemens Healthineers, we use AI algorithms and the syngo.via software to correlate radiology CT imaging of lung cancer patients with pathology analysis in a large-scale cohort. And we hope that one day these AI algorithms will advance the role of radiology imaging in guiding lung cancer treatment.

How does software aid your research, particularly compared to traditional approaches?

Dr. André Aichert: I should explain some of the practical problems of doing research in the clinical environment. First, we are dealing with personal information, so you have to be very careful because of GDPR in Europe or HIPAA in the US. Just accessing data and getting to the point where you have the basis for AI algorithms is a much bigger process than you might imagine.

Then, most of the successful algorithms are supervised, which means you need to collaborate with clinicians to give you annotations and give you an idea of what you’re actually looking at in order for the algorithm to reproduce the findings. Therefore, it’s critical to actually get access to this data. But the clinical IT landscape has become scattered between different vendors and departments or sites over time, and sometimes these systems do not communicate. Collecting and harmonizing the data from these systems is actually a lot of work and can be very painful at times.

For example, you have your favorite free program on GitHub, and you just want to run it over some data. You have to make sure you’re even able or allowed to use that software. Then you have to make sure that the data you’re using it with is anonymized. You anonymize it, export it, and copy it over to a different computer where you’re running the software. Then you have to make sure that it actually is anonymized. Then you get your results. But then you have to go back to the original system and reintegrate those results, potentially even with additional information from other IT systems. This is all very different from what I’m used to as a researcher, or as an actual user of IT.

Then, even if you’ve trained your first models and you want to test them on real-world data, that can also be a problem. You run the risk of having your development team work on a clinical use case and develop beautiful software before they’ve gone through the effort of actually releasing it to the clinicians. Then they try it in the real world and all of a sudden realize that there was a very basic assumption that was wrong.

Then you’ve got a problem, and what you want to do is—like in Silicon Valley—to fail fast. You want to be able to have an early prototype that probably doesn’t exactly solve the problem yet, but it’s something you can bring to the clinician, get feedback, and then shorten this feedback loop. And that is one place where syngo.via Frontier certainly helps.

Basically, the syngo.via Frontier research platform tries to help in all of these steps along the way that I just explained. It’s an end-to-end integrated solution. If you have a syngo.via installation running at the clinic, you can run it on the data that’s available in your PACS and download applications from a marketplace that exists in that system. And that is a very, very big advantage over just getting your own software and trying to integrate it somehow with the process that I described.

“The use of #AI gives clinicians the opportunity to deal with the vast amount of images and #data available to them, and to get the most out of them” – Dr. André Aichert, @SiemensHealth via @insightdottech

How are you working with Siemens Healthineers and its research platform?

Dr. Johannes Uhlig: The syngo.via software is deeply embedded in the clinical workflow in our department. For example, we use it for all cardiac CT scans, for coronary-vessel identification, or as an on-the-fly image viewer and reconstruction software in trauma patients. It really performs robustly in all these scenarios. We also had four researchers working full time for several months on the Cancer Scout project—where we had several thousand patients and the project had to run smoothly—and we used it for data accrual, annotation, and supervision.

For me, it’s crucial to have a one-stop shop; I want to use as few software tools as possible for my whole data pipeline. With syngo.via we have one piece of software, and we can extract data from our imaging database. We can annotate cross-sectional images, and we can anonymize these cases in a way that is compliant with the strict German regulations.

What is the importance of working with the university on this project?

Dr. André Aichert: Cooperation is absolutely essential. If we didn’t have clinical researchers, like Dr. Uhlig, who are willing to cooperate with us and share their knowledge—and also to explain the problems that they have—then it would be very hard for us to make any progress in this field. As an AI researcher I also have to understand a little bit about the clinical problem at hand.

But it’s just as important that the usage of the software corresponds to what clinicians do in their routine, and that the presentation of the data is done in such a way that a physician can actually tell types apart, or tell locations and geometry apart. So you have to come together and define an annotation protocol that actually makes sense.

What do you envision for the future of this project?

Dr. Johannes Uhlig: It’s crucial that these AI algorithms we build are assessed and trained in a clinical setting. They have to work on suboptimal scans; they have to work with different scanner types, different patients. But, as André said, there’s also the acceptance by radiologists and clinicians. What is the best way to present the AI results? How should they be visualized? How are outliers reported? But given the mutual trust, I believe that UMG and Siemens Healthineers as partners will find ways to address these challenges.

Dr. André Aichert: One of the essential next steps for the models would certainly be to look at other sites, and scalability is key in this regard. We’ve already used a solution called teamplay to collect the data from UMG, and it could also be used to collect data from other sites that has been produced in a similar manner. That would allow us to integrate or support different IT infrastructures in different locations that may be very different from those at UMG.

What final thoughts would you like to leave us with?

Dr. André Aichert: The medical domain is a really exciting field for AI researchers. You have this very diverse set of problems, this very diverse set of modalities and images. You also want to be able to share knowledge and data and drive collaboration across all sorts of medical disciplines, supporting this iterative process to ultimately develop and deploy applications.

Dr. Johannes Uhlig: For me as a clinician, I believe AI really is the future. I guess there’s no way that we can work without the application of AI algorithms within the next 10 years, just given the caseloads. And also I have to underline that AI research really is a team effort. We need these collaborations between academic institutions like UMG and manufacturers like Siemens Healthineers to advance healthcare, especially given what is at stake in cancer imaging. And only through this ongoing mutual feedback, adjustment, and fine tuning will we create AI tools that are not only accurate but also accepted by healthcare professionals.

Related Content

To learn more about healthcare AI transformations, listen to Healthcare AI for Cancer Research: With Siemens Healthineers and read The Doctor Will View You Now. For the latest innovations from Siemens Healthineers, follow them on Twitter and LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Healthcare AI for Cancer Research: With Siemens Healthineers

Healthcare AI has become a powerful tool in the fight against cancer, allowing researchers to analyze vast amounts of data and make new discoveries faster than ever before.

Join us as we learn about the innovative tools and technologies making cancer research and treatment improvements possible. We look at how AI is being used to identify new cancer treatments and predict patient outcomes, and how these systems can keep personal information safe. We also discuss the challenges and opportunities of using AI in cancer research, and how this technology is transforming the way we approach cancer treatment and prevention.


Our Guests: Siemens Healthineers and UMG Göttingen

Our guests this episode are André Aichert, Research Scientist at the Artificial Intelligence Germany Department of Digital Technology and Innovation at Siemens Healthineers, and Dr. Johannes Uhlig, Assistant Professor of Radiology at the University Medical Center Göttingen.

Prior to joining Siemens Healthineers, André already had a strong interest in medical imaging, image processing, computer vision, and artificial intelligence. And since the company is known for providing medical imaging devices worldwide, he joined after earning his PhD so he could work to not only find problems in this area but also solve them.

In addition to being an Assistant Professor at UMG Göttingen, Dr. Uhlig was a Visiting Research Scholar at the Yale University School of Medicine from 2017 to 2022. Dr. Uhlig’s main interests are in clinical cancer research and radiology, which he hopes will help him improve clinical patient care in the future.

Podcast Topics

André and Dr. Uhlig answer our questions about:

  • (5:15) Recent AI advancements transforming healthcare
  • (8:47) Importance of technology and healthcare provider partnerships
  • (13:04) Advanced research platforms versus traditional platforms
  • (19:01) Using technology for cancer research and clinical purposes
  • (21:58) Additional industry partners powering AI cancer research
  • (25:35) The future of technology collaborations and cancer research

Related Content

To learn more about healthcare AI transformations, read The Doctor Will View You Now. For the latest innovations from Siemens Healthineers, follow them on Twitter and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re talking about the evolution of cancer research and treatment thanks to advancements with AI with Dr. Johannes Uhlig from UMG in Göttingen, Germany, and Dr. André Aichert from Siemens Healthineers. So, excited to jump into this conversation today, but would love to learn a little bit more about our guests. So, André, I’ll start with you. Please tell us more about yourself and your role at Siemens Healthineers.

André Aichert: Yeah, it’s a pleasure to be here. So, my name’s André Aichert. So, I’m a research scientist at the Artificial Intelligence Germany Department of Digital Technology and Innovation at Siemens Healthineers, and my background is computer science with a focus on, let’s say, medical imaging, image processing, computer vision, and artificial intelligence.

So, I was excited about images and graphics for a long time, even as a kid. And during my studies I found that actually the analysis of images has a lot more relevant and challenging issues than just the generation of graphics, so that’s why I went into this direction. And as I was looking for an application for the things I was learning at university, the medical field just seemed very attractive because it has a lot to offer. It’s a great choice because it has a lot of data, a lot of problems to be solved, and it also seems generally beneficial to society. So that’s what motivated me.

And Siemens Healthineers as a company was very attractive because it builds about a quarter of the medical-imaging devices worldwide. So it’s a really great opportunity to find problems to solve them. And that’s why I joined DTI as a researcher after my PhD. And we just have a wide range of researchers for all sorts of fields, modalities in medical imaging and artificial intelligence, and it’s an exciting environment to be in. We bring together a lot of the technology, hardware processes, and all for AI. So, it’s what we call the AI factory, and that’s why it’s absolutely the right place to be if you want to do research in this direction.

Christina Cardoza: Excited to dig more into that. But, Dr. Uhlig, let’s get you in here. Please tell us more about yourself. What is UMG, and what do you do there? And also what your relation is to Siemens Healthineers?

Dr. Johannes Uhlig: Absolutely, Christina, thank you so much for having me. It’s a great pleasure for me. So, my name is Johannes Uhlig, and I’m Assistant Professor of Radiology at the University Medical Center in Germany, in Central Germany. So, throughout my medical studies in Germany I have been really interested in clinical cancer research. I’ve worked side jobs at the German Cochrane Centre, even the thoracic surgery department at my university. And following this real passion I completed a master of public health program with concentration in biostatistics and epidemiology at Harvard University after my med studies.

So, I started my residency in radiology in 2016, because this, for me, really is the medical subspecialty that served as most crucial gatekeeper for all relevant clinical decision-making—being emergency medicine, trauma care, cancer diagnosis, treatment-response assessment, or minimally invasive cancer treatment. And what really drives me is employing my background in biostatistics and epidemiology, and combining this with my passion for radiology to ultimately improve clinical patient care. And I’m not a full-time researcher, but I’m rather a clinical radiologist with a really strong research background.

And so, since 2019 I’m heading the AI research group at our Department of Radiology at UMG. And since my wife, and co-head of the research group, is a urologist, we mainly focus on urology cancer research: including kidney cancer, bladder cancer, prostate cancer. So far we have employed AI models for cancer detection and assessment in various settings. And we recruit several hundred patients each year for our ongoing studies. We currently also apply our research expertise to other cancer types.

So, our Department of Radiology has a very strong and lasting relationship with the Siemens Healthineers. We mainly rely on Siemens scanners in our whole department, and we have completed several research projects in the past.

Christina Cardoza: Great. It sounds like this is something that’s very passionate to both of you. And I think that that’s very important when you’re dealing, especially, with this type of research, and really to make the—find the opportunities and make it beneficial to doctors and patients out there. So, I want to start off the conversation just looking—getting the state of cancer research and cancer care today. And, André, I’ll pose this question to you first. What are the advancements you’ve seen in AI that have really been transforming this space for both doctors, researchers, and patients?

André Aichert: Yeah, I mean, the impact of AI to the entire healthcare space cannot be understated. So I’m sure there are lots of examples in this domain. I mean, as a company we’ve been around pretty much since the first X-rays were taken, and we’ve been improving detail and also the amount of images ever since. So, I believe that for, at the moment, the most important use of AI would be to give clinicians and our customers the opportunity to actually deal with this vast amount of images and vast amount of data, and to support them in getting the most out of it.

But certainly, like everywhere else in the healthcare space where data is available—which is basically everything—there would be AI solutions that could be added to this. So, I’m looking at this from an imaging direction, but this certainly doesn’t mean that it would be limited to that. So, I think that generally I say AI solutions, they’re essential; and I guess clinicians that start using them today and are the state of the art of the technology—they’ll have the advantages, and it’ll give them a lot of power in this sense.

As for the Cancer Scout project—which is why we’ve been cooperating with UMG Göttingen now in my particular project—with there we’re actually analyzing a host of different data sources. So it’s not just images actually. We work with the pathology department there to analyze sections of tissue under the microscope, but also we have “omics” data, so genomics and also proteomics data. And in all of that we try to define subtypes of cancer and also to detect them, and figure out which subtypes we can actually see in the images so that we perhaps do not need to look at the proteomics data, for example, which is a lot more unusual and difficult to obtain.

So this project is actually a €10 million project that’s mostly placed at Göttingen, and so we work on a relatively small part of that, for the data analysis, and also the image analysis specifically. And we’re trying to just define subgroups that make sense. That’s basically the goal of this.

So, for the radiology subproject of it, which is where Dr. Uhlig comes in, we actually have a slightly different setup, because there the radiology images are supposed to be the original source of images, and there you can detect nodules already in CT images. And then the question is, can we also recognize certain subtypes already based on these radiology images, which then would not require, for example, a biopsy, like an invasive biopsy to actually be done. And that’s I think where this is going. So, trying to optimize workflows, trying to prevent unnecessary invasive procedures, and data analysis of all sorts.

Christina Cardoza: Absolutely. I love hearing projects like that, where you’re teaming up with a bunch of experts from different areas to really make a change and make an impact, to see the challenges and start creating solutions around them. And so, Dr. Uhlig, I’d love to hear from a cancer research and a clinical perspective what the benefit for you guys of being part of this project has been, as well as the opportunities that you have been seeing with AI-based healthcare solutions in this space. 

Dr. Johannes Uhlig: Absolutely. So, from my perspective there is several current challenges in clinical cancer imaging. For example, we see demographic changes with an aging population in most countries and associated increase in healthcare needs. Additionally, radiology imaging has seen technical advancements over the last decades, and it’s more broadly available and more frequently used than maybe 20 years ago. For cancer imaging particularly we face a massive increase in case numbers. And I believe that can’t be assessed by a radiologist in a traditional way. Let’s call it that way.

For example, breast cancer screening in Germany—we used X-ray based mammography, which is recommended biannually for women aged 50 to 69. And the most recent data—it’s coming from 2018—reports that 2.8 million women got a mammography, and 97% of these scans were negative scans. And the current setup is that every mammography has to be independently assessed by two experienced radiologists. And at the same time we have literature suggesting that AI-based methods for mammography assessment are at least comparable in their diagnostic accuracy to radiologists. So these AI algorithms really provide a standardized and reproducible second reading for mammographies.

Now, the question that comes into mind is, can we use these AI algorithms instead of the second radiologists? Or is it even imperative to implement these AI algorithms in a setting, from both an ethical and an economical standpoint? And I believe breast cancer screening is only the tip of the iceberg. At least in Germany we will probably see other cancer screening programs, like low-dose chest CT, and these will put further strains on our clinical-imaging community. And I strongly believe that AI algorithms will support radiologists within this decade—for example for standardized cancer imaging.

As going for cancer research, I believe AI is kind of the buzzword for the last decade, and a lot of excellent research has been published. In particular, AI algorithms are used for assessment of cross-sectional imaging such as CT or MRI. For example, our research group has so far focused on extracting additional information from CT and MRI images to guide clinical decision-making in patients with suspected kidney or prostate cancer.

And now, in the cooperation project with Siemens Healthineers and the Department of Pathology here at UMG, we use AI algorithms to correlate radiology CT imaging of lung cancer patients with a pathology analysis in a large-scale cohort, and for that we are using the syngo.via software—as aforementioned, this large Cancer Scout project. And we hope that one day these AI algorithms will advance the role of radiology imaging in guiding the lung cancer treatment.

Christina Cardoza: Absolutely. I know a lot of people in the medical field—they are overworked, there’s a lot going on, and across the entire IoT space I see just a lack of skills out there. So bringing AI in, making it stable, and having the confidence that it can actually help make diagnosis or advance this research—it sounds great, because it’s going to take some of the strain off of the medical industry, but also give more confidence in the patients when they get and receive their care.

André, I’m wondering—I saw on Siemens that you guys have a research platform. And I know this is probably just one small piece of the puzzle to this larger cancer research project and efforts out there, but I would love to hear a little bit more of the syngo.via Frontier research platform—how this fits in in your efforts, what are the pain points this is trying to solve, and how research platforms like this and how bringing AI in really compares to the traditional way of doing things.

André Aichert: Yeah. I think to answer the question I would have to explain some of the specific, practical problems if you want to do research in the clinical environment first of all. So, we are dealing with personal information before all, right? So you have to be very careful about, for example, GDPR in Europe or HIPAA in the US. So, special care has to be taken about that. And that also means that just accessing data and getting to the point where you have the basis for AI algorithms is a much bigger process, or sometimes a bit more difficult than you’d imagine.

And there are practical problems associated to that as well. So, still to this day most of the successful algorithms are actually supervised. That means you need also to collaborate with clinicians to actually give you annotations and give you an idea of what you’re actually looking at for an algorithm to then reproduce these findings, so to say. And therefore it’s critical to actually get the access to this data, and to also to create those annotations.

And then if you look at the clinical landscape, the IT landscape is actually scattered. It’s not just between different vendors and departments or even sites if you want to do large-scale research, but also it has organically grown over decades. And sometimes these systems, they do not communicate in the way that maybe you’d expect from today—your phone. And in order to collect and harmonize the data from these systems is actually a lot of work and can be very painful at times.

So, for example, if you have your favorite program on GitHub that is free to use and you just want to run it over some data, that can actually be a process at the clinic. You want to make sure that, first of all, the data that you’re using it with is anonymized. To access the data you typically go to—say, if it’s image data—to your PACS. You anonymize, export it, copy it over to a different computer where you’re running the software. Then you have to make sure that it actually is anonymized. Otherwise, if there is a risk of re-identification you have to make sure that you’re even able or allowed to use that software. Then you get your results, but if you want to take them back you also have to go back to the original system and reintegrate them, potentially even with additional information from other IT systems.

So this is very different from what I’m used to as a researcher or as an actual user of IT in my daily business, so to say. It’s a challenge sometimes, and then even if you’ve trained your first models and you want to test it on real-world data, that can also be a problem because then you have to find a way to get the initial prototypes back to the clinic and run them. And that’s actually a challenge that I’m currently facing also in other projects—that there is a very long process of actually releasing such a prototype to make sure it’s safe. You have all sorts of licensing issues. For example, a modern web application—it’s a very complex toolchain associated to building a website, for example. If you’re using anything like that on your platform you have to make sure that cybersecurity is okay, and all of these things.

So, basically in all of the things that I just explained the syngo.via Frontier research platform tries to help in all of these steps along the way. It’s an end-to-end integrated solution. So, basically it is running at the clinic, if you have a syngo.via installation, basically it is free to use. And it uses all the existing infrastructure, so to say—that you can run it on the data that’s available in your PACS, you can download applications from a marketplace that exists in that system—that has third party and also Siemens applications that you can use, research applications. For example, that’s what we did for Cancer Scout for the annotation process.

And because it is using all the infrastructure that’s already in place, also the installation of it basically means a similar effort to updating the existing software. You just have to briefly connect to the internet, download the application—you can use it. And that is a very, very big advantage over just getting your own software and trying to integrate it somehow with the process that I described earlier.

And, thinking further into the deployment of the first prototypes, it becomes even more important because, as I described, the process of releasing such a prototype to the clinic is actually a long one, and you’re running the risk of having your development team working on a clinical use case, developing beautiful software before they go through the effort of actually releasing it to the clinicians, and then trying it in the real world and then all of a sudden you start realizing that there was a very basic assumption that was not exactly what it should be in practice. And then you’ve got a problem, and what you want to do is what—well, in the Silicon Valley, guys should be familiar with it—is to fail fast. And that is actually meaning that you want to be able to have an early prototype that probably doesn’t exactly solve the problem yet, but bring it to the clinician, get feedback, and shorten this feedback loop. And that is also one thing where syngo.via Frontier certainly helps.

Christina Cardoza: Yeah, I love that idea of fail fast. I think when people are doing things like this it’s always a worry that you are going to fail, but that helps you improve and continuously learn, and that’s part of the process, and that’s important. I also love that you brought up data privacy in the beginning, because everybody wants to know how their data is being protected—especially when it comes to medical and healthcare data it becomes even more sensitive.

So it’s great to see that that’s something top of mind for Siemens Healthineers in all of this. Dr. Uhlig, I want to hear from your perspective—just being part of this Cancer Scout program, and also the importance of this syngo.via Frontier platform. How did you hear about Siemens Healthineers? How did you guys come together to start to working on this, and how have you been utilizing their research platform for your own research and clinical purposes?

Dr. Johannes Uhlig: Absolutely. So, at our Department of Radiology the syngo.via software, the clinical software, is deeply embedded in our clinical workflow. For example, we use the syngo.via software for all cardiac CT scans, for coronary-vessel identification, or as an on-the-fly image viewer and reconstruction software in trauma patients. So, based on my personal experience over the last several years, the syngo.via software really robustly performs in these scenarios. There’s rarely a case where I have to redefine coronary vessels, and even in a time-critical setting, such as trauma care, the software robustly supports my clinical needs.

And so, when we had the chance to collaborate with Siemens Healthineers in the Department of Pathology here at UMG there was really a low threshold for us to use syngo.via software for data accrual, annotation, and supervision. Just to give you perspective, we had four researchers working on this project on the Cancer Scout project for several months in full time. We had several thousand patients, and the project had to run smoothly with all these people and all these patients. And for me as a radiologist, it’s crucial to have something I want to call a one-stop shop. I want to use as few software tools as possible for my whole data pipeline. And indeed the syngo.via Research Frontier, the plugin for the clinical software we’re using, is an excellent choice to my end.

To go a little bit into detail: with the syngo.via we have one software and we can extract data from our imaging database. It’s easy because it’s already clinically embedded and we can use it as a research tool. We can annotate cross-sectional images, and we can anonymize these cases in a setting that is compliant with the strict German regulations and laws. Surely there are some areas where syngo.via Research Frontier software could be optimized—such as the graphic user interface or the ease of data management in larger studies—but from our experience in this project I really believe that syngo.via is an excellent software for at least our specific research needs, and we will continue to use this in other projects in the near future.

Christina Cardoza: So, one thing that seems clear throughout this conversation is there are many opportunities out there, and it’s not something that one single organization can do alone. It really takes an ecosystem and partnerships to find those opportunities and really start solving these real-world challenges and use cases. And I should mention the IoT chat as a whole and insight.tech, we’re sponsored by Intel®. So, André, I’d love to learn, from a Siemens Healthineers perspective, what the importance has been about working with the university on this project, as well as what the value of your partnership with Intel and that technology is in all of this.

André Aichert: This is completely right—exactly, Christina. I cannot agree more. So, cooperation is absolutely essential. So, if we didn’t have clinical researchers like Dr. Uhlig who are willing to cooperate with us and share their knowledge and their understanding, also to explain the problems that they have, then it would be very hard for us to do any progress in this field. And this doesn’t go just to one department at the clinic. Actually you have to look at pathology; you have to look at, certainly, the oncology department; you have to look at radiology; you have to integrate all sorts of other departments from the clinic. Say, if you were doing prostate cancer then certainly you’d want to have a urologist available.

And communication is absolutely key here. And as an AI researcher, certainly, the learning that we take for all the use cases and the clinical routine is absolutely essential for us to address the right problems in the right way. So, to produce an actual technical solution, first of all, as an AI researcher I also have to understand a little bit of the clinical problem at hand. So this is absolutely essential.

And also a lot of the questions cannot be answered by one party alone. For example, if we are discussing with Dr. Uhlig which software do we want to use for the annotation, then it doesn’t suffice for us to understand what sort of data format we need out of this process so that we can work with it with our models, say.

But it’s just as important that the, for example, the usage of the software corresponds to what they do in the clinical routine, that the presentation of the data is in such a way that a physician can actually tell types apart, or tell locations and geometry apart, and then you have to come together and actually define a annotation protocol that makes sense. And for us the cooperation with UMG in particular was very, very pleasant in that regard. And, yeah, I just cannot stress enough how important it is to have this cooperation and to have this conversation in order to prevent a disconnect of technology and application in this domain.

Christina Cardoza: Absolutely. And when you’re talking about making sure you have the data available and you’re able to analyze it, all of the training and model development that you have to do, and this requires a lot of speed and performance—I assume that’s where Intel is coming in with their software kit, like the OpenVINO software kit and also the processors that they have available. Has that been beneficial to the university and to this project at Siemens?

Dr. Johannes Uhlig: Arguably I’ve only talked about the creation and the research of the AI models, but in order to actually create value obviously you’d have to use it in the clinic on real patients. And at latest at that point the typical clinical workstation would be running on an Intel processor, and that’s really where then finally people would have a benefit from the hard work that we’ve been doing before. So that’s definitely where we’d come in.

Christina Cardoza: Great. And, Dr. Uhlig, you mentioned this is really just hitting the tip of the iceberg. There are many different types of cancer out there, many different facets that go into cancer research, many different people and organizations and just fields that you can break off into this. So, I’m wondering, what do you envision for the future of this project, your collaborations and cancer research?

Dr. Johannes Uhlig: Yeah, absolutely. So, the Cancer Scout project is ongoing as we speak, and I think there’s really much more to be done. In particular, given my background as a clinical radiologist, I strongly believe that it’s crucial to assess these AI algorithms we build and train in a clinical setting. So, basically, it has to work. It has to work on suboptimal scans where the quality is not perfect. It has to really work robustly with different scanner types, different patients. And this is not only, this testing process, is not only including the accuracy of the AI algorithm, but also the acceptance by radiologists and clinicians. For example, how do we best present the AI results? How do we visualize these, and how do we confer the associated uncertainties or report outliers? But I guess, given the mutual trust that was earned throughout the productive cooperation within the scope of this project, I believe that UMG and Siemens Healthineers as partners will find ways to address these challenges and also opportunities.

Christina Cardoza: Absolutely. Lots still to be done. So, André, from a Siemens Healthineers perspective, what do you envision for the future of cancer research, and your collaboration and ongoing projects?

André Aichert: Yeah, definitely. I mean, I can only agree with Dr. Uhlig, and one of the essential next steps for the models that we produce there would certainly be to look at other sites. So, scalability is key in this regard. And, yeah, we’ve already used a solution called teamplay to collect the data from UMG, and that could also be used to collect data from other sites that have been produced in a similar manner. And that is basically what allows us to also integrate or to also support different IT infrastructures in different locations, which may be very different from what UMG is doing.

And the other thing is, basically, that I would like to mention for AI—you should be careful with overreached expectations, because actually there’s a lot of work to be done in this domain. If you want to optimize the last few percents of performance, which is very relevant in the medical domain—because it basically goes down to, well, patients who are these few percents, and who then get a correct or not correct diagnosis—then you have to really put a lot of work in, and you’re in for the long haul of improving these models and having this feedback loop that we discussed earlier. So, certainly platforms like Frontier also support us in this iterative process, and also integrate other centers in this.

Christina Cardoza: That’s a great point. AI, it’s great. There’s been many advancements, and there’s many opportunities and benefits that come along with it, but, especially in a research project like cancer research, be aware of what AI can actually do, and, like you said, set your expectations. I can’t wait to see what else comes out of these ongoing collaborations. Unfortunately, we are running out of time, but before we go I just want to throw it back to each of you one more time for any final thoughts or key takeaways you want to leave our listeners with today. So, André, I’ll start with you again.

André Aichert: Basically the takeaway messages for me would be that the medical domain is a really exciting field for AI researchers that I can only recommend because you have this very diverse set of problems, very diverse set of modalities and images, and also other electronic health records that you can work with, and it actually is very beneficial to you to look at this field. And also it’s very exciting to share knowledge and to also see other people work, for example, at the clinic. It’s a very, very exciting field, and you always keep learning new things.

And then obviously, for the practical part, you need research platforms that support this lifecycle that, basically, we’ve discussed over this podcast. And that you want to be able to share knowledge and data, but also drive collaboration in all sorts of medical disciplines supporting this iterative process, and ultimately develop and deploy applications. So, yeah, that would be my takeaway.

Christina Cardoza: Great. Dr. Uhlig, anything else you’d like to add?

Dr. Johannes Uhlig: Yeah, absolutely. I can agree with André on almost all these talking points, and I just want to stress that for me, as a clinician, I believe AI really is the future. I guess there’s no way that we can work without application of AI algorithms within the next 10 years just given the caseload, given the economical stress we have. And also I have to underline that AI research really is a team effort. We really need these collaborations between academic institutions like UMG and manufacturers like Siemens Healthineers to advance healthcare, especially given what is at stake in cancer imaging. And only through this ongoing mutual feedback, adjustment, and fine tuning I believe we will create AI tools that are not only accurate, but also accepted by healthcare professionals.

Christina Cardoza: Absolutely. You know, I love that this takes a team, and so I can’t wait to see what else comes out of all of this and I’m excited to learn more. So, thank you again for the insightful conversation and joining us today. I would invite all of our listeners to visit the Siemens Healthineers website, as well as the UMG website where Dr. Uhlig is from so that you can learn more and keep up to date with their progress on the research project. So, thank you both for joining us today, and thank you to our listeners for tuning in. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Sustainability Initiatives Fuel EV-Charging Infrastructure

Think about the last time you filled your car with gas: you pulled into a station, likely entered your credit card into a self-checkout terminal, topped off the tank, and drove away. Fueling a gas vehicle on the road follows a well-oiled routine, supported by infrastructure that has been in place for decades.

Electric vehicle (EV) owners are looking for the same level of convenience: easy-to-use options that are safe and always available. At the same time, businesses and communities are looking for ways to offer this convenience to create new service and revenue opportunities. But that’s easier said than done.

The Road to Sustainability Initiatives and EV Charging

Thus far, the EV market has been chugging along in fits and starts. Mass adoption depends in part on a robust EV-charging network. EOS Linx, an EV-charging company, promises to deliver that network through its EOS Charge solution, a series of well-lit charging stations integrated with Digital-Out-Of-Home (DOOH) advertising displays.

EOS addresses many of the challenges in today’s EV infrastructure. The company is working to build an intelligent network of EV-charging locations that seamlessly integrate with the EV driver’s lifestyle while easing range anxiety. By installing and managing the charging infrastructure at a location, EOS mitigates the roadblocks of cost, expertise, and business distraction for location operators. “Customers will find themselves at a safe and easy-to-use solution for an overall better experience,” says Jeff Hutchins, President and CTO at EOS.

The company's approach is based on knowledge earned through a long history of deployments, plus the expertise of its partner ecosystem. "We can acquire the real estate, we can build the network, and we can design a flexible architecture that really enhances the customer experience," Hutchins says. "We wanted to offer a more comprehensive and intelligent approach rather than simply build a better widget."

Charge units can integrate power delivery and management, digital advertising, #AI analytics, and proactive troubleshooting into one fix-it-and-forget-it #EV charging solution. EOS Linx via @insightdottech

Opportunities for Retailers and Communities

EOS bakes this all-things-considered approach into what is essentially an edge infrastructure platform. EOS Charge units can integrate power delivery and management, digital advertising, AI analytics, and proactive troubleshooting into one fix-it-and-forget-it EV-charging solution.

The company’s revenue comes from a percentage of the fee for every vehicle charge, and from DOOH advertising that runs on the digital display integrated into the charging station.

Consumers benefit from accessible, clean, and well-lit facilities on the road. Residential installations from EOS are also available, and customers can schedule charging windows during off-peak hours—helping them lower net costs by aligning with their utility provider programs.
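To see why off-peak scheduling lowers net costs, consider a back-of-the-envelope calculation. The battery size and utility rates below are hypothetical placeholders rather than EOS or utility figures; this is a minimal sketch of the arithmetic involved.

```python
# Hypothetical figures illustrating off-peak charging savings.
BATTERY_KWH = 60        # typical mid-size EV battery, assumed
PEAK_RATE = 0.32        # $/kWh during peak hours, assumed
OFF_PEAK_RATE = 0.11    # $/kWh overnight, assumed

peak_cost = BATTERY_KWH * PEAK_RATE
off_peak_cost = BATTERY_KWH * OFF_PEAK_RATE
print(f"Full charge at peak: ${peak_cost:.2f}")
print(f"Full charge off-peak: ${off_peak_cost:.2f}")
print(f"Savings per full charge: ${peak_cost - off_peak_cost:.2f}")
```

Under these assumed rates, shifting a nightly charge out of peak hours saves more than ten dollars per full battery, which is exactly the incentive utility programs are designed to create.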

The EOS charging stations benefit retailers in many ways, Hutchins points out. They can channel targeted advertising through the digital display, and drive traffic to their primary points of business. The location owner can also sell the digital advertising inventory to their partners and vendors. “I’m excited about the potential for cross-pollination of marketing messages between the driver and our commercial loyalty-program participants, such as ‘dine at a restaurant, get a free charge.’”

In addition, location operators appreciate the hands-off nature of the setup. “We bring our own power, which means we’re not going to impact your operational infrastructure,” Hutchins explains. “We bring our own network-connected infrastructure, our own hardware. We own, we operate, and we maintain.”

Such an arrangement is music to the ears of operators, who are only too happy to have EOS do all the heavy lifting. As mandates for EV-charging stations in communities increase, governments and property owners are eager to have an expert attend to the details, leaving themselves free to realize the additional benefits of EV adoption.

The company is an active supporter of communities, including a partnership with the National Center for Missing & Exploited Children. Through the displays, EOS brings added functionality like weather warnings, community messaging, and alerts for missing children.

Edge Data Analytics Add Value

The EOS solution has the capacity to implement AI visual analytics, which can deliver anonymous insights about how long customers linger at the charging station. “This dwell time and demographic data is extremely valuable information for location owners,” Hutchins states.

As public safety is also vital, AI video analytics and safety integrations can help prevent potential issues. For example, the system can track suspicious activity patterns and flag when someone is heading to the back of the shop. These capabilities are still being refined, and must be developed in conjunction with public entities in a way that ensures safety while respecting individual privacy rights.

Because the EOS solution is modular, it can adapt to changing market needs and supply chain constraints, Hutchins says. "Architecting with Intel compute at the edge has allowed us to efficiently manage content and enable programmatic advertising. And the remotely managed system makes maintenance a lot easier and less expensive, while providing remote survivability features that would otherwise be impossible."

Intel's edge computing capabilities have helped EOS comply with data-handling requirements. "It allows us to keep that arm's-length relationship from personally identifiable information," Hutchins says. "All data is processed anonymously. EOS does not store or transmit images or video, nor violate existing privacy protocols."

On the Road to a Sustainable Future

According to Hutchins, EVs are going to become ubiquitous, something for everybody. And when they do, consumers will expect on-the-road charging as a given. "There was a time when you couldn't charge your phone on the plane, but now consumers get mad when there's no plug-in onboard," he says. "That will become the charging behavior for vehicles."

“Certainly, on the interstates, and if you’re a business that profits from traffic, you’ll be expected to have a charging station,” says Hutchins. “Having a partner like EOS to pave the way for easier adoption makes it a smoother ride.”

“EV is more than a cool new thing. It’s about infrastructure with a point,” Hutchins says. “It’s meant to be something that brings us to a better state, as people doing better things for the planet.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Supply Chain Transformations Take Center Stage

The pandemic was probably the first time the layperson got intimately familiar with supply chain issues. (Remember the TP crisis of 2020?) But empty store shelves are just the tip of the supply chain iceberg. For businesses whose concern is the movement of goods from A to B, the Covid crisis only served to throw systemic efficiency issues into high relief. Fortunately, the past few years have also seen the ascendance of data, and of advanced technologies like AI and machine learning. These tools are making all the difference.

John Dwinell, Founder and CEO of supply chain AI and image-recognition solution provider Siena Analytics, joins us to discuss challenges and opportunities in the supply chain space (Video 1). He'll talk about the importance of real-time data to smart logistics and tracking, the far-reaching benefits of system visibility, and how a no-code solution can bring the esoteric art of AI right to the level of the domain users who really understand the issues at stake.

Video 1. Explore how companies are taking supply chain transformations to the next level with John Dwinell, Founder and CEO of Siena Analytics. (Source: insight.tech)

What is the state of supply chain today, and what are the current challenges?

As e-commerce has grown, supply chain organizations have come under enormous pressure to achieve higher throughput and better efficiency, and to be able to scale. Visibility has been critical to understanding where the bottlenecks are and how to deal with them so that businesses can realize greater performance and precision, and can have better quality overall. Quality and visibility are really big pressure points in supply chain today.

Another common challenge in supply chain is vendor compliance—the incoming quality of product. So having a real, deep understanding of the supply chain is important—again, having the visibility to identify at scale which packages are compliant and why, which packages are not compliant and what's wrong, and being able to provide that feedback to the suppliers so they can make improvements.

We started out with IoT capturing data and images in the supply chain, and along the way came the capability to bring AI and AI vision into that IoT solution, which has really helped transform visibility.

Tell us more about those recent technology advancements addressing these challenges.

IoT has really flipped the problem on its head in a lot of ways. Traditionally there’s enterprise data telling us, for example, “This is the size of a case, and so X number of cases is going to fill a trailer.” And IoT is looking at the cases and saying, “Well, actually this is the size of the case.” It’s real data flowing up. The accuracy and precision of the information is critical to being able to make those adjustments at a reasonable cost.

IoT is feeding back very precise information about the good and the bad as product comes in. And that’s critical to being able to quickly adjust to changes in volume in the supply chain, and to still have the capacity and the throughput to move those through. That real data in real time allows you to make adjustments so that you can allocate resources correctly. There’s a lot of benefit and sustainability there: obviously getting those numbers exactly right allows you to plan your supply chain more efficiently.
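A toy calculation shows why that precision matters. The case dimensions and trailer volume below are made-up numbers, used only to illustrate the gap between declared and measured data that Dwinell describes:

```python
# Illustrative only: declared vs. IoT-measured case sizes change trailer plans.
DECLARED_CASE_FT3 = 1.50   # case volume per the enterprise system (assumed)
MEASURED_CASE_FT3 = 1.72   # case volume observed by IoT sensors (assumed)
TRAILER_FT3 = 3500         # usable trailer volume (assumed)

planned = TRAILER_FT3 // DECLARED_CASE_FT3
actual = TRAILER_FT3 // MEASURED_CASE_FT3
print(f"Planned: {planned:.0f} cases per trailer")
print(f"Actual fit: {actual:.0f} cases per trailer")
print(f"{planned - actual:.0f} cases per trailer must be re-planned at the dock")
```

Even a fraction of a cubic foot per case, multiplied across a fleet of trailers, is the difference between a plan that works and one that strands freight on the dock.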

How are you using artificial intelligence to make efficiencies happen?

AI is a big, big factor here. The volumes are very high, and the speeds are also very high. We are looking at over 50 million cases every day. That’s just a tremendous amount of effort. AI changes the formula for that completely, because we can literally look at all six sides of every case flowing into and out of a warehouse. We can see what condition a case is in, how it’s packaged, how it’s labeled, what’s there and what’s not there. And we can answer the question of how does it meet the standards? How does it meet the supplier requirements? And doing that at scale in real time was just not possible in the past. AI and the platforms that we work on really made it possible.

“We started out with #IoT capturing #data and images in the #SupplyChain, and along the way came the capability to bring #AI and #AIVision into that IoT solution” – John Dwinell, @SienaAnalytics via @insightdottech

What are some best practices for implementing these complex technologies like AI?

It’s true that there’s a certain intimidation factor with AI. If you go back only a few years, it was kind of a dark art; you needed a real specialist. There have been a lot of advancements there.

We have a very friendly, no-code environment that takes away the mystique of the training. We’ve simplified things so that we can capture the images, label that data, train new models using the platform, and engage the customer’s domain experts to help with that themselves. They really see these models come together, which is very exciting. And we also train them to recognize that what’s really critical are small variations from one customer to another—that’s exactly what they need to see. The AI model is very adaptable to that, but you need the platform, and you need the tools to make it approachable.

And we talk a lot about the tools, but connecting the domain knowledge with the technology is also really important. So one thing I want to make sure I point out is that Siena is now part of the Peak Technologies family. Peak has really broad experience in supply chain, and really understands customers’ challenges in that realm. So it’s not just the tools, but the breadth of experience that Peak has that we can bring to the customer base to help solve their problems.

How can businesses in this space ensure the privacy and the security of their customers?

Security is really important, especially with IoT. You’re capturing data in real time right there at the edge, but it needs to be brought to the enterprise, or sometimes to the cloud. And those connections from edge to cloud or edge to enterprise need to be secure. So we work very closely with information-security teams. We leverage the technologies and platforms from partners like Intel and Red Hat to be sure that we have a very secure environment.

What are some other partnerships Siena Analytics has, and what has their value been to you?

I think IoT, as exciting as it is, is still evolving. So getting the right solutions, the right technology pulled together is extremely important to us. We work very closely with Intel, we work very closely with Red Hat. We work closely with other partners like Lenovo on the hardware. Splunk is an important partner for us in terms of analytics.

We’ve been able to watch the technology as it evolves, but also to be a part of the conversation to help guide the technology that’s needed. And I can’t thank our partners enough. They’re really critical to making this all work.

What comes next for the supply chain space?

I've been in this business for a long time, and I see this as the very beginning. AI in supply chain—really intelligent supply chain—is just beginning, and there are tremendous opportunities for growth. Edge-to-cloud is also bursting onto the scene, and it still has tremendous room to grow.

Any sophisticated supply chain organization needs real-time visibility, and I think that will continue to grow, too. I think we see a lot happening in standards and collaboration as well. Companies work very closely with a vast array of suppliers, so standards are really critical to making the whole supply chain work together and work efficiently.

Are there any final thoughts or key takeaways you’d like to leave us with?

I’d say, be open to the technology. It’s moving quickly, but it can bring a lot of efficiencies. Find partners who understand supply chain and understand the technology—that’s really critical. Someone who can work closely with you on this journey and help bring in the best solution, so that you can have the most intelligent supply chain possible.

Related Content

To learn more about AI-powered supply chain logistics, listen to AI-Powered Supply Chain Logistics: With Siena Analytics and read AI Unlocks Supply Chain Logistics. For the latest innovations from Siena Analytics, follow them on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

CDN Servers Bring Remote Coursework to Rural Schools

Connectivity affords access and opportunity in today’s data-centric world. But schools in rural areas often struggle to establish and maintain stable internet connections to acquire that data. As of 2022, only 46% of rural inhabitants worldwide had access to the internet—a number that dips to just 26% in landlocked developing countries. This compares to 82% access for inhabitants of urban centers.

The last couple of years have exposed the connectivity shortfall for rural school districts that were forced to transition to remote online-learning curriculums overnight. But addressing the issue wasn't as simple as network operators flipping a switch, or issuing grants that subsidize internet access for rural citizens. In many cases the network in those areas just couldn't support the streaming video and other rich media that teachers wanted to deliver over the internet—if the infrastructure was present at all—according to Brian Hsu, Product Line Manager for NEXCOM International, a leading supplier of network appliances.

So why is there such a disparity between the ubiquitous connectivity of cities and the sparse coverage found in the country? It comes down to cost and bandwidth. Low population density has restricted the rollout of robust connectivity infrastructure in rural areas, because the limited number of subscribers makes it difficult for internet service providers (ISPs) to recoup investments in high-bandwidth network infrastructure. This means that for internet connectivity to be accessible to all rural learners, the cost and performance of rural networking and communications equipment must change.

Today, network-equipment manufacturers are working to solve this challenge using content delivery networks (CDNs) and off-the-shelf rackmount servers.

“A CDN is very important, especially in rural areas, to being able to spread information across multiple locations around the world,” Hsu explains.

Save Cost by Scaling Down Rural Content Delivery

But before considering equipment that can alleviate the bandwidth bottleneck for rural schools, it’s important to understand how data moves from servers to clients, and where current architectures fall short (Figure 1).

Two images of how content flows with and without a content delivery network.
Figure 1. Traditional client/server architectures deliver data from its origin to users on a one-to-one basis, which requires much more bandwidth than a content delivery network (CDN) topology. (Source: ServerGuy.com)

Each of the two images in the figure contains a cloud labeled “content,” positioned above a group of user icons. You can think of the content cloud as a server that delivers content, data, and/or services to the users, each of which represents a client.

The orange lines in the figure represent data flows, which you’ll notice are much longer and more prevalent in the image on the left, without a CDN. That’s because it depicts a traditional client-server topology, where an origin server (that is, the server where the data originates) provides clients with the data they request on a one-to-one basis.

The image on the right of the figure shows this same data exchange, only optimized with a CDN. A CDN is an architecture that caches data on intermediary servers located close to end users, which means that content is delivered from the origin server once, and then made accessible to multiple users on a local CDN server, which they could even connect to over a local area network (LAN).
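In software terms, an edge CDN node behaves like a cache sitting between clients and the origin. The sketch below is a minimal, hypothetical illustration of that cache-aside pattern; the origin URL is a placeholder, and a production CDN adds eviction, validation, and much more:

```python
import time
import urllib.request

# Hypothetical origin server; a real deployment would point at the
# school district's content host.
ORIGIN = "https://origin.example-district.edu"
TTL_SECONDS = 3600                           # how long cached lessons stay fresh
CACHE: dict[str, tuple[float, bytes]] = {}   # path -> (expiry time, body)

def serve(path: str) -> bytes:
    """Serve content locally when possible; fetch from the origin once."""
    now = time.time()
    entry = CACHE.get(path)
    if entry and entry[0] > now:
        return entry[1]                      # cache hit: no upstream traffic
    with urllib.request.urlopen(ORIGIN + path) as resp:   # single WAN fetch
        body = resp.read()
    CACHE[path] = (now + TTL_SECONDS, body)
    return body                              # later requests served locally
```

The key property is visible in the code: the expensive origin fetch happens once per TTL window, while every subsequent request is served from local memory over the LAN.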

In all, the CDN topology reduces latency and networking costs because data can be sent from the origin server once and serve multiple users, says Hsu. It can also help prevent the origin server from being overwhelmed with requests during unforeseen periods of high utilization. For network operators and users, CDNs serve the same purpose as a logistics company’s warehouse or distribution center: it’s faster and cheaper to transport goods (data) to consumers (clients) who are nearby.

“One of the key benefits of having a #CDN in your #architecture is the remarkable flexibility that it provides you in terms of scaling to meet unforeseen spikes in consumption” – Brian Hsu, @NEXCOMUSA via @insightdottech

Flexibility First for Rural CDNs

CDN servers can be built from almost any rackmount server hardware, but there are special considerations when deploying one on a network in a rural area:

  • How many users will it support?
  • On average, how much data will be cached on it?
  • How will it connect to the origin server?
  • And what network will clients use to connect to it?

These are just a few of the questions network operators and school IT departments should ask themselves before implementing a CDN in a rural area, as the answers will help guide the selection of CDN server hardware. And while there are many options, flexibility is the one feature CDN operators can’t afford to skimp on.
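To make those questions concrete, here is a back-of-the-envelope sizing sketch. Every input is a hypothetical stand-in; a real deployment would substitute survey data from the school and the ISP:

```python
# Hypothetical inputs for sizing a rural school CDN node.
USERS = 400               # students on the campus LAN (assumed)
PEAK_CONCURRENCY = 0.5    # fraction streaming at the same time (assumed)
STREAM_MBPS = 5           # one HD lesson stream (assumed)
CATALOG_GB = 250          # coursework video cached locally (assumed)

lan_demand_gbps = USERS * PEAK_CONCURRENCY * STREAM_MBPS / 1000
print(f"Peak LAN demand: {lan_demand_gbps:.1f} Gbps")
print(f"Minimum cache storage: {CATALOG_GB} GB")
# The origin link only needs to refresh the catalog, not carry every
# individual view: that is the bandwidth relief a CDN provides.
```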

That’s why NEXCOM has developed the NSA 7160, a multipurpose 2U rackmount appliance based on two 4th generation Intel® Xeon® Scalable processors. The NSA 7160 features eight PCIe Gen 5 LAN slots that provide up to 2.6 Tbps of Ethernet connectivity. The LAN expansion slots are key to helping end users configure CDNs to their exact needs, while also handling unforeseen traffic spikes, Hsu explains.

“One of the key benefits of having a CDN in your architecture is the remarkable flexibility that it provides you in terms of scaling to meet unforeseen spikes in consumption. With the NEXCOM NSA 7160, it can be customized and configured according to our customers’ needs,” says Hsu.

But just as important as that massive Ethernet throughput is the installed wireless-adapter module that can be used to add Wi-Fi 6 or 5G connectivity to the system. These WAN options, combined with the aforementioned LAN configuration options, give network engineers nearly unlimited possibilities when it comes to connecting clients to the CDN.

Aside from the performance of the latest-generation Intel Xeon processors, which streamline network routing, switching, and even multimedia processing, the chipsets integrate critical functionality for a rural CDN: cryptographic workload acceleration courtesy of Intel® QuickAssist Technology (Intel® QAT), support for a TPM, and a RunBMC baseboard management controller on the LAN modules that helps keep content flowing by letting operators shift traffic away from a failing server in the event of a network shutdown.

And to ensure that there’s plenty of capacity for all types of coursework and curriculums, the NEXCOM NSA 7160 integrates 16 DDR5 memory DIMMs, a PCIe Gen4 x16 FHFL expansion slot with CXL 1.1 support, and up to seven NEXCOM proprietary NVMe storage adapters for additional data storage.

Class is in Session, Everywhere

At a time when more remote learning is happening than ever, CDNs play a pivotal role in democratizing education in connectivity deserts. And by lowering barriers to network access, they are ensuring that students in rural areas have access to the same opportunities as their city-dwelling counterparts.

High-throughput rackmount appliances like the NEXCOM NSA 7160 are also bringing down those barriers, as they can be transported, plugged in, and configured to the requirements of any end client or deployment environment without compromising performance or latency.

Now, no matter where you are, class is in session.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Modular Machine Vision for the Industrial Edge

The AI landscape is changing, fast. Too fast, in most cases, for industrial vision systems like automated quality-inspection systems and autonomous robots that will be deployed for years if not decades.

If you’re a systems integrator, OEM, or factory operator trying to get the most out of a machine vision system, how do you future-proof your platform and overcome the anxiety associated with launching a design just months or weeks before the next game-changing AI algorithm or architecture might be introduced?

To answer this question, let’s deconstruct a typical machine vision system and find out.

The Anatomy of a Machine Vision System

Historically, industrial machine vision systems consist of a camera or optical sensor, lighting to illuminate the capture area, a host PC and/or controller, and a frame grabber. The frame grabber is of particular interest: it is a device that captures individual still frames from the camera’s output—often at higher resolution and fidelity than the camera could otherwise deliver—to simplify analysis by AI or computer vision algorithms.

The cameras or optical sensors connect directly to a frame grabber over interfaces such as CoaXPress, GigE Vision, or MIPI. The frame grabber itself is usually a slot card that plugs into a vision platform or PC and communicates with the host over PCI Express.

Besides being able to capture higher-resolution images, the benefits of a frame grabber include the ability to synchronize and trigger on multiple cameras at once, and to perform local image processing (like color correction) as soon as a still shot is captured. Not only does this eliminate the latency—and potentially the cost—of transmitting images somewhere else for preprocessing, but it also frees the host processor to run inferencing algorithms, execute corresponding control functions (like turning off a conveyor belt), and other tasks.
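Vendor frame-grabber SDKs differ, but the pattern itself can be sketched with generic tools. The snippet below uses OpenCV as a rough stand-in, with assumed camera indices, to show the synchronize-trigger-preprocess flow described above; it is illustrative, not a substitute for a real frame-grabber driver:

```python
import cv2

CAMERA_IDS = [0, 1]   # two attached cameras; indices are assumed
caps = [cv2.VideoCapture(i) for i in CAMERA_IDS]

def grab_synchronized() -> list:
    """Latch all sensors first (grab), then decode (retrieve)."""
    for cap in caps:
        cap.grab()                        # near-simultaneous trigger
    frames = []
    for cap in caps:
        ok, frame = cap.retrieve()
        if ok:
            # Local preprocessing stand-in for color correction:
            frame = cv2.convertScaleAbs(frame, alpha=1.1, beta=5)
            frames.append(frame)
    return frames                          # host stays free for inference
```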

In some ways, this architecture is more complex than newer ones that integrate different subsystems in this chain. However, it is much more scalable, and provides a higher degree of design flexibility because the amount of image-processing performance you can achieve is only limited by the number of slots you have available in the host PC or controller.

Well, that and the amount of bandwidth available between the host processor and the frame grabber.

Seeing 20/20 with PCIe 4.0

For machine vision systems, especially those that rely on multiple cameras and high-resolution image sensors, system bandwidth can become an issue quickly. For example, a 4MP camera requires about 24 Mbps of throughput, which on its own barely puts a dent in the roughly 1 GB/s per lane data rates provided by PCIe 3.0 interconnects.
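A quick budget calculation, using the figures quoted here (about 24 Mbps per 4MP stream, roughly 1 GB/s, or about 8 Gbps, per PCIe 3.0 lane), shows where the headroom goes. The camera counts are hypothetical:

```python
# Bandwidth budget sketch using the figures quoted in the text.
MBPS_PER_CAMERA = 24        # ~24 Mbps per 4MP camera stream
PCIE3_GBPS_PER_LANE = 8     # ~1 GB/s per PCIe 3.0 lane
PCIE4_GBPS_PER_LANE = 16    # ~2 GB/s per PCIe 4.0 lane

def lane_utilization(cameras: int, lanes: int, gbps_per_lane: float) -> float:
    """Fraction of the PCIe link consumed by camera traffic alone."""
    return (cameras * MBPS_PER_CAMERA / 1000) / (lanes * gbps_per_lane)

print(f"1 camera, 1x Gen3 lane:   {lane_utilization(1, 1, PCIE3_GBPS_PER_LANE):.1%}")
print(f"32 cameras, 1x Gen3 lane: {lane_utilization(32, 1, PCIE3_GBPS_PER_LANE):.1%}")
print(f"32 cameras, 1x Gen4 lane: {lane_utilization(32, 1, PCIE4_GBPS_PER_LANE):.1%}")
```

Cameras alone rarely saturate a link; it's the GPU or FPGA accelerator cards sharing those lanes, as the next paragraph notes, that create the squeeze.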

Gen4 PCIe interfaces double the #bandwidth of their PCIe 3.0 counterparts to almost 2 GB/s per lane, essentially yielding twice as many video channels on your #MachineVision platform without any other sacrifices. @SECO_spa via @insightdottech

However, most machine vision systems accept inputs from multiple cameras and therefore ingest multiple streams, which starts eating up bandwidth quickly. Add in a GPU or FPGA acceleration card, or two, for high-accuracy, low-latency AI or computer vision algorithm execution, and, between the peripherals and the host processor, you’ve got a potential bandwidth bottleneck on your hands.

At this point, many industrial machine vision integrators have had to start making tradeoffs. Either you add more host CPUs to accommodate the bandwidth shortage, opt for a backplane-based system and make the acceleration cards a bigger part of your design, or select a host PC or controller with accelerators already integrated. Regardless, you’re adding significant cost, thermal-dissipation requirements, power consumption, and a host of other obstacles embedded systems engineers are all too familiar with.

Or you could opt for a platform with next-generation PCIe interfaces, such as the CALLISTO COM Express 3.1 Type 6 module from the IoT-solution developer SECO (Figure 1).

Figure 1. The CALLISTO COM Express 3.1 module from SECO provides a PCI Express Graphics (PEG) Gen4 x8, up to two PEG Gen4 x4, and up to 8x PCIe 3.0 x1 interfaces for demanding machine vision workloads. (Source: SECO)

With a 13th Gen Intel® Core processor at its center, the SECO CALLISTO COM Express module supports a PCI Express Graphics (PEG) Gen4 x8 interface, up to two PEG Gen4 x4 interfaces, and up to 8x PCIe 3.0 x1 interfaces, according to Maurizio Caporali, Chief Product Officer at SECO. Gen4 PCIe interfaces double the bandwidth of their PCIe 3.0 counterparts to almost 2 GB/s per lane, essentially yielding twice as many video channels on your machine vision platform without any other sacrifices.

Caporali explains that the 13th Gen Intel® Core processor brings further advantages to machine vision, including up to 14 Performance and Efficient (“P” and “E”) cores, and as many as 96 Intel® Iris® Xe graphics execution units that can be leveraged on a workload-by-workload basis to optimize system performance, power consumption, and heat dissipation. All of this is available at a 15 W or 45 W TDP, depending on SKU, and in an industrial-grade, standards-based SECO module that measures just 95 mm x 125 mm.

To make matters simpler, the platform is compatible with the OpenVINO toolkit, which optimizes computer vision algorithms for deployment on any of the aforementioned core architectures for maximum performance. CALLISTO users may also utilize SECO’s CLEA AI-as-a-Service (AIaaS) software platform—a scalable, API-based data orchestration, device-lifecycle management, and AI model deployment edge/cloud solution that allows machine vision users to improve AI model performance over time and update their endpoints over the air.

“CLEA is fundamental to manage AI applications and models to be deployed remotely in your fleet of devices. When the customer has thousands or hundreds of devices in the field, CLEA provides the opportunity for easily scalable remote management,” Caporali says.
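As a rough illustration of what OpenVINO compatibility buys, the sketch below uses OpenVINO's Python API to load a converted model and let the runtime pick a target device. The model path and input shape are hypothetical placeholders, and API details vary by release:

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # hypothetical IR model path
# "AUTO" lets the runtime choose among the module's CPU cores and
# Iris Xe GPU on a workload-by-workload basis.
compiled = core.compile_model(model, "AUTO")
request = compiled.create_infer_request()

frame = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
request.infer({0: frame})
output = request.get_output_tensor().data            # inference results
print(output.shape)
```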

Modular Machine Vision for the Industrial Edge

Creating an industrial machine vision solution is a significant undertaking in time, cost, and resources. Not only does it require the assembly of niche technologies like AI, high-speed cameras, high-resolution lenses, and specialized video processors, but these complex systems must deliver maximum value over extended periods to justify the investment.

One way to safeguard against this is by modularizing your system architecture so that elements can be upgraded over time. Not only would a machine vision platform architecture built around frame grabbers allow machine vision OEMs, integrators, and users to scale their video processing and camera support as needed, but the modular architecture of COM modules—which plug into a custom carrier card—allows the same for the host PC or controller itself. So, with some careful thought, you need only upgrade the carrier board design for CALLISTO to meet the machine vision demands of the future, all thanks to a fully modular approach.

In short: no more anxiety for machine vision engineers.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Integration Platform Orchestrates Factory Applications

Pick up your smartphone and notice the apps you have downloaded. Now if you look at another person’s device, you might notice an entirely different set. You chose the ones you did because you saw some use for them.

We take for granted that everyone can assemble the software on their smartphone as they wish. Just as we might have one app to check the weather and another to play today’s word game, manufacturers, too, might need one application to check inventory, another for quality control, and so on. Unfortunately, the software for these routine industry functions does not play well together.

If manufacturers want to use software from different vendors, they must take care of the integration themselves. “This is one of the biggest hurdles in smart manufacturing today,” says Matthias March, Director, Product Management, at MPDV Mikrolab GmbH, a supplier of Manufacturing Execution Systems (MES).

The smart factory needs many different solutions stitched together, but March points out that often each has its own data and its own data model. To develop a true smart factory, manufacturers need to integrate their solutions to gather the insights they need. “Specialty software addressing various needs in factories is plentiful, but companies struggle with the integration between incompatible systems,” he says.

The solution, says Bernd Berres, Principal in Product Management at MPDV, is to start with one integration platform on which to host the various software solutions, or manufacturing applications (mApps). “With such an integration platform it’s possible to bring in all solutions and they can communicate with each other [in a plug-and-play fashion] without designing multiple, separate interfaces,” Berres says.

To develop a true #SmartFactory, #manufacturers need to integrate their solutions to gather the insights they need. @MPDV_gmbh via @insightdottech

The Importance of MES for Manufacturing Applications

A well-functioning MES is the beating heart of the smart factory. It is the central system connecting all the operating elements on the shop floor. An MES connects operational data with business data from the Enterprise Resource Planning (ERP) software, Berres points out.

HYDRA X is MPDV’s take on the industry MES. It includes a wide range of functions that cover the management of orders, resources, materials, assemblies, quality, and human resources. In a nod to the industry’s need for systems that talk to one another, the company hosts the HYDRA X solution on its central Manufacturing Integration Platform (MIP). Doing so drives home the point that manufacturers need one unified foundation for their software environment. “You could say that it’s the backbone for all the IT systems that manufacturers have for their production,” March says.

Understanding that manufacturers might not need all the functionality that HYDRA X provides, MPDV has broken each function out into an mApp. Going a step further, the MIP hosts a wide range of third-party mApps as well. The one condition is that all apps must follow the MIP’s rules of integration.

Just as a smartphone has an operating system that sets rules for apps, the MIP does so for the manufacturing industry. This allows manufacturers to pick the specific best-of-breed solutions they need from a smorgasbord of options. “In the past, customers selected a system and were then tied to the vendor due to the investments made,” Berres says. “Thanks to the MIP, customers do not have to opt for one provider; they can combine solutions from different vendors as necessary.”
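MPDV's actual interfaces are proprietary, so the sketch below is purely illustrative: a generic plug-in contract of the kind an integration platform enforces, where every app codes against one shared interface and data model instead of building pairwise integrations. All names here are hypothetical:

```python
from abc import ABC, abstractmethod

class MApp(ABC):
    """Hypothetical app contract: one interface instead of N integrations."""

    @abstractmethod
    def handles(self, topic: str) -> bool: ...

    @abstractmethod
    def on_event(self, topic: str, payload: dict) -> None: ...

class IntegrationPlatform:
    """Routes shared-model events to every installed app that wants them."""

    def __init__(self) -> None:
        self.apps: list[MApp] = []

    def register(self, app: MApp) -> None:
        self.apps.append(app)            # "install" an app from the store

    def publish(self, topic: str, payload: dict) -> None:
        for app in self.apps:
            if app.handles(topic):
                app.on_event(topic, payload)   # common data model throughout
```

Because apps only ever see the platform's shared data model, an inventory mApp from one vendor and a quality-control mApp from another can exchange events without either knowing the other exists, which is the plug-and-play property Berres describes.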

As is the case with our smartphones, each user has specific requirements and compiles their individual solution by adding required apps from the store.

An Ecosystem of Manufacturing Applications

Mixing and matching mApps also changes the way systems integrators work, Berres says. “They no longer look for a solution from one vendor that meets most of the requirements. Instead they can select the best solution for each individual case and these can be from different providers.”

For example, systems integrator MEGLA uses the MIP from MPDV to ensure interoperability for the various software applications that it recommends to its clients. For a hypothetical plastics manufacturer, the SI can recommend a stand-alone mApp for sample inspection that still works in the MPDV ecosystem. The SI evaluates manufacturers on a case-by-case basis to understand their bottlenecks and what potential solutions might work for them.

Manufacturers also work directly with MPDV, picking a set of solutions that they need for specific operating environments. The German glassmaker Schott AG was looking for a universal platform for its 42 production sites around the world. The challenge was to integrate global IT and OT operations for analysis and insight while still providing custom solutions to each site. Schott now uses the MPDV integration platform and mApps from both MPDV and other partners. “In addition, they create their own as needed,” March says. “They use our business logic, and with this solution it’s very flexible to bring things together.”

No matter which applications customers choose, they are “mission-critical” for companies. “We deliver software and need a reliable system for it to run on, which is what Intel provides for us. Intel is all about high availability,” Berres says.

The Future of a Smart Factory

March expects the future of manufacturing to be driven heavily by artificial intelligence (AI) because it facilitates a “self-regulating” factory, where problems that have been seen before fix themselves.

MPDV is working on delivering standardized AI solutions to the marketplace so that smaller and midsize companies can also avail themselves of AI’s abilities. Companies increasingly will look for bespoke solutions for their challenges. And being able to mix and match solutions that all speak a common language will continue to be the backbone of a profitable smart factory.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Smart Operating Rooms Cut Cost with Intelligent Box PCs

Very few places can make better use of accurate, high-fidelity, real-time information than a hospital operating room (OR). From endoscopic camera video to vital-sign monitoring to electronic medical records (EMRs) containing recent scans—OR staff needs instant access to as much relevant data as possible to make quick, life-saving decisions during surgery. And with today’s technology, integrating all of that on a single display inside the operating room shouldn’t be much of a challenge.

But unfortunately, it is. Technology advancement in hospital ORs is limited by a supplier ecosystem driven by a few large OEMs and their proprietary technologies. Since these manufacturers are responsible for the vast majority of OR equipment today—everything from endoscopic cameras and operating tables to HVAC and lighting systems—they can package highly integrated, total OR solutions that are expensive and don’t work well with other vendors’ systems.

This vendor lock-in makes adding technology that’s commonplace in other markets resource intensive, time consuming, and cost prohibitive for hospital ORs. As a result, hospital administrators are often forced to upgrade one or two of their ORs at a time through one of the leading vendors or, in many cases, not upgrade any of them at all.

Al Moosa Specialist Hospital, a leading health center in Al-Ahsa, Saudi Arabia, faced a similar compromise when trying to implement new, entry-level, “smart operating room” infrastructure. It looked outside traditional channels for help.

A #SmartOperating room integrates #OR equipment and hospital information systems (HIS) in a single pane of glass to give surgical teams instant access to all the #data they need. iMedtac via @insightdottech

A Smart Operating Room on a Cart

The staff at Al Moosa Specialist Hospital performs operations ranging from neurosurgery and vascular surgery to plastic and burn surgery. Its breadth of healthcare services and 12 operating rooms make for a dynamic environment, and delivering high-quality patient care means continually increasing the precision and efficiency of its ORs.

A smart operating room enables this by integrating OR equipment and hospital information systems (HIS) in a single pane of glass to give surgical teams instant access to all the data they need. At the same time, information from inside the OR—such as endoscopic camera feeds, video of the OR table, vital signs, etc.—can be relayed to nurses and hospital admins for coordinating post-op care, or to remote physicians who can offer guidance or feedback over the web in real time.

The smart operating room challenge is more of a data-integration challenge than anything else. With that in mind, Al Moosa Specialist Hospital turned to medical WORX, a design consultancy and systems integrator that serves healthcare customers in the Middle East. It, in turn, selected the iMOR-SDB OR integration system from Internet of Medical Things technology provider iMedtac as the foundation of its smart OR design (Video 1).

Video 1. The iMedtac iMOR-SDB is a smart operating room integration system that combines surgical, EMR, and other data in a single dashboard for OR staff. (Source: iMedtac)

The iMedtac iMOR-SDB ingests video and data from HIS and electronic equipment over APIs, then manages and routes it to displays in the OR, nurses’ stations, or wherever else it’s needed. In addition to data integration and video routing, the system can be used to record safety checks during the operation, display important reminders, stream to remote parties, and even recognize gestures.

Best of all, the software stack is packaged with the Axiomtek mBOX600, a medical-grade box PC built around 8th Generation Intel® Core processor technology that measures in at just 250 mm x 240 mm x 90 mm. In other words, small enough to be transported from OR to OR on a medical cart.

Critical for the iMOR-SDB are a half-size, 16-lane PCIe Gen 3 slot and a full-size PCIe Mini Card slot on the mBOX600, which are used to support either full HD (1080p) or ultra-HD (4K) video-capture cards. They also provide channels for video, endoscopy/microscope camera feeds, EMR or HIS information, and data from other systems. Multiple USB 3.1 ports, an HDMI 1.4 port, and dual DisplayPort 1.2 outputs allow for quick integration with modern monitors and displays.

The entire stack is compliant with Health Level Seven (HL7) application-layer clinical data transfer, and the Fast Healthcare Interoperability Resources (FHIR) electronic healthcare data exchange standards.
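As a hedged illustration of what HL7/FHIR compliance enables, the snippet below performs a standard FHIR REST search for recent vital-sign observations, the kind of data an OR dashboard aggregates. The server URL and patient ID are hypothetical; iMedtac's actual integration details are not shown here:

```python
import requests

# Hypothetical HIS endpoint exposing a standard FHIR REST API.
FHIR_BASE = "https://his.example-hospital.org/fhir"

def latest_vitals(patient_id: str) -> list[dict]:
    """Fetch the ten most recent vital-sign Observations for a patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": patient_id,        # standard FHIR search parameters
            "category": "vital-signs",
            "_sort": "-date",
            "_count": 10,
        },
        headers={"Accept": "application/fhir+json"},
        timeout=5,
    )
    resp.raise_for_status()
    bundle = resp.json()                  # results arrive as a FHIR Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Example: vitals = latest_vitals("12345")
```

Because the exchange rides on a published standard rather than a vendor API, any compliant HIS can answer the same query, which is exactly the interoperability the paragraph above describes.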

“At Al Moosa Specialist Hospital, it’s on a nursing cart with a monitor; and the PC connects to the MRI, CT scanner, endoscopy machines, and internet to gather data from EMRs,” says Jason Miao, Business Director at iMedtac. “We also have a different module called the iMOR-SDB-CMS, central management system, which provides a standard FHIR- and HL7-compliant protocol that makes it easy for local installers to integrate the platform with HIS systems.”

The Smart Operating Room of the Future

One portable iMOR-SDB was deployed at Al Moosa Specialist Hospital in December of 2022, and it has already been so successful that administrators plan to deploy one in each of the facility’s 11 other operating rooms. These will be installed in a more permanent, wall-mount configuration, and integrate with the hospital’s EMR/HIS infrastructure via the central management system mentioned previously.

But regardless of how it’s installed, the iMOR-SDB is a standalone unit that doesn’t need to be physically integrated with other equipment beyond plugging in a monitor. Compared to the alternative, this saves time, effort, and cost for hospitals looking to operate as efficiently as possible, and to deliver the highest level of care via operating rooms of the future.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.