Christina Cardoza: Hello, and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network-technology trends and innovations. As always, I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be discussing AI in healthcare with our good friend Ian Fogg from CCS Insight and Alex Flores from Intel. Hey, guys, thanks for joining us.
Alex Flores: Thank you for having me.
Christina Cardoza: Before we get started, we’d love to learn a little bit more about you and the companies that you work at. So, Alex, I’ll start with you. What can you tell us about yourself and what you do at Intel?
Alex Flores: Sure. So, my name is Alex Flores. I’m the Director of the Health Solutions Vertical at Intel. At Intel, we’re not clinicians or former clinicians; we’re engineers and we’re technologists, and we’re driven by the intersection of technology and healthcare. And what we do is we work with the ecosystem to look at how some of the biggest challenges can be solved with technology.
Christina Cardoza: Great. Looking forward to diving into that in just a bit. But before we get there, Ian, you’ve been a friend and guest of the show numerous times now, but for those of our listeners who have not listened to those other great conversations we’ve had on the “insight.tech Talk,” what can you tell us about yourself and CCS Insight?
Ian Fogg: So, CCS Insight, we’re essentially a technology-research firm, an industry-analyst firm. I’m a Research Director here. Our role, really working with insight.tech, is to interview key players in the market, research some of the changes—like, for example, AI arriving in healthcare—and then communicate those and write those up in a form that’s easy to pick up and use.
Christina Cardoza: Of course. And on that note, CCS Insight has a research paper coming out on this very topic, which is why we have you joining us today. So, what can you tell us about that report, and if there were any particular findings in this space that stood out to you?
Ian Fogg: I think just how extensive AI usage in healthcare already is. It’s something that has really arrived in the popular mindset, in the mainstream media, only in the last couple of years. But it’s clear that in healthcare it is well embedded, it’s widely used, and it’s growing across all categories. One of the striking statistics we found in the research for the report is that as of August 2024 there were 950 AI-enabled medical devices that had been approved by the FDA across categories. That’s an enormous number, and of course it’s growing all the time.
I think the other thing that’s really striking about what’s happening here is how much it’s moving from diagnostics and imaging and research into other parts of the healthcare ecosystem. So, organizational tasks, room management, tying together disparate systems, and also things like multimodal input—so, transcribing conversations that would otherwise never be recorded. There’s just this enormous, burgeoning range of activities right across the sector.
Christina Cardoza: It certainly is interesting, and you bring up a good point: It’s not only the devices, and not only things directly related to healthcare; there are places in hospitals, organizations, offices, and buildings where we can add AI within the healthcare space to really improve operations and reduce inefficiencies. And, like you said, a lot of this has been ongoing; we’re only hearing a bit about it in the media. I think that’s some of the best implementation of the technology: it’s happening, but as a consumer or a user you don’t see it happening right up front.
Alex, I’m curious, based on some of the findings Ian just mentioned, is that what you’re seeing in the space from an Intel and engineering perspective?
Alex Flores: Absolutely. We are seeing AI being rapidly adopted in healthcare, and it really goes across the board. Whether a patient is registering or checking in, for example, there’s AI analytics going on in the background—pre-filling your forms, gathering data from past visits, and so forth. Or in the actual clinical workflow, whether a patient is receiving some type of care or the doctor is transcribing notes. So it goes across multiple workflows.
And I think what’s interesting is a lot of it is really behind the scenes, which is where we want it to be, because ultimately what it’s doing behind the scenes is impacting the clinician’s workflow, allowing them to do their job faster, better, easier, so they can spend more time with the patient.
Christina Cardoza: That’s great. And those benefits are things I’ve seen other industries—manufacturing, retail—trying to get from AI as I write for insight.tech. I think healthcare is an interesting space; it presents a lot of interesting challenges and complexities just because you’re dealing with a different environment, regulations, and patient-sensitive data.
So, can you talk a little bit about how the healthcare space is able to adopt these technologies in a safe, secure, efficient way?
Alex Flores: Yeah. Before I jump into that, there are a couple of data points I wanted to mention. What’s unique about healthcare—and a lot of people don’t realize this—is that roughly one third of the world’s data is generated by healthcare. And there’s evidence that maybe only about 5% of that data is actually turned into actionable insights. So there’s this tremendous opportunity to use AI to unlock the insights in that data.
The second thing is you layer in some of the macro trends that are happening globally across healthcare. Those include an aging population; they also include people getting sicker, with more and more people being diagnosed with multiple chronic diseases. And then you layer on the fact that there’s a global shortage of clinicians—both doctors and nurses, for example.
So with that, the need for AI becomes even more important, and its rapid adoption becomes more important, because it’s allowing clinicians to increase their efficiency—whether that’s in their workflows, in being able to triage patients, and so forth. For me, that’s where AI is going to be crucial in helping alleviate some of those issues. And if it’s implemented correctly, you don’t have to worry as much about some of the regulatory concerns. Again, it’s really there to benefit the clinicians so that they can focus on the patient and patient outcomes.
Christina Cardoza: That’s an interesting perspective on it. I want to go back a little bit to the data points that you were talking about—all the data that’s coming from the healthcare sector. And then I imagine with devices coming online, or more devices being AI-enabled, that’s just giving us even more data, which I’m sure AI is helpful for, being able to sort through some of that data.
But Ian, I’m curious if you can touch on this growing amount of healthcare data that we have and how AI and AI at the edge is really going to come into play—if there’s anything from that report you found in this space.
Ian Fogg: I think there are a few things here. Just touching on something that Alex mentioned there: what AI is doing in many areas is not replacing clinicians; it’s making the clinicians more efficient, it’s taking load off them. And you can see that in the way that data is being analyzed. You can see data volumes going up enormously.
One study talked about how the size of a CT scan could be 250 megabytes for an abdomen, a staggering one gigabyte for the heart; digital pathology could be even greater—perhaps 2.5 gigabytes if you’re looking at cells. Those are enormous, enormous amounts of data for a single scan. If you compare that with a smartphone camera, that might be a five-megabyte image.
And one of the other things that’s striking, though, is that you can’t use the same techniques to compress medical-imaging data that you can use for a photograph, because the tools used to compress a photograph are lossy tools; they lose data, and they lose it based on what the human eye doesn’t perceive. They’re perceptual-compression algorithms. You can’t do that for medical imaging; you have to keep the full image, because you need all that detail so you can spot irregularities in the scan. And that just makes the challenge even harder.
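To make that lossless-versus-lossy distinction concrete, here is a minimal Python sketch. The synthetic array is only a stand-in for real scan data, and zlib is just one example of a lossless codec; the point is that a lossless round trip preserves every bit, which a perceptual codec like JPEG would not.

```python
import zlib
import numpy as np

# Synthetic 16-bit "slice" with smooth structure, standing in for scan data.
slice_px = np.add.outer(np.arange(512), np.arange(512)).astype(np.uint16)
raw = slice_px.tobytes()

# Lossless compression: every bit is recoverable, unlike a perceptual
# codec such as JPEG, which discards detail the eye won't notice.
compressed = zlib.compress(raw, level=9)
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint16).reshape(512, 512)

assert np.array_equal(slice_px, restored)  # nothing lost, so nothing diagnostic is missed
print(f"{len(raw)} bytes -> {len(compressed)} bytes, fully reversible")
```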
Then you’ve got this enormous amount of data, and AI has two slightly competing implications there. One is that it means you can analyze that data quicker, so you get an efficiency-of-speed benefit. But one of the companies we talked to framed it the other way around and said, “Look, actually, because you’ve got this AI tool that can analyze more data, what you can actually do is analyze a greater part of a biopsy.” Which means that if there are just a few cells that are irregular in a cancer scan, you are more likely to spot them, because you’ve scanned a bigger sample. And that means your scan is more accurate, which means you’ll identify problems and healthcare issues earlier, and you’ll save costs and load on the healthcare system down the line. So there are some interesting dynamics there that are striking.
The other piece is that when you want that to be a very responsive experience for the clinician, if you can do it at the edge rather than in the cloud, you can make it faster, and it’s easier to make it private, because the data can stay closer to the patient, closer to where it’s being captured.
And that’s a trend we’ve seen in many areas with AI, where we’ve seen things start in the cloud, and then as edge devices get more capable you move those things onto the edge devices for that performance benefit. So there’s all kinds of interesting dynamics here when you start looking at the data. Do you make it a fast experience? Or do you use AI to analyze a greater amount of the data to improve the quality of what you’re doing?
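As a rough illustration of that edge-versus-cloud trade-off, here is a toy routing policy. The fields and the latency threshold are illustrative assumptions, not anyone’s production logic.

```python
from dataclasses import dataclass

@dataclass
class InferenceJob:
    max_latency_ms: int   # how quickly the clinician needs a result
    contains_phi: bool    # does the payload include patient data?

def choose_target(job: InferenceJob) -> str:
    """Toy policy: privacy and responsiveness favor the edge."""
    if job.contains_phi:
        return "edge"   # keep patient data close to where it's captured
    if job.max_latency_ms < 100:
        return "edge"   # interactive, clinician-facing experience
    return "cloud"      # batch or retrospective work can tolerate latency

# A triage read on patient data should stay fast and local.
print(choose_target(InferenceJob(max_latency_ms=50, contains_phi=True)))  # -> edge
```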
Alex Flores: Ian, you bring up some really interesting and valid points too. I think one of the things that I did want to emphasize is what you were talking about—speed, quality, and so forth—and I think that’s where a lot of new compute technology comes into play, as well, that’s working in the background.
So, for example, latency. When a radiologist brings up an image, if they’re triaging something, they want to be able to see that image in real time or near real time, because every second counts for a lot of these healthcare providers. Technologies like compression and decompression are all working in the background. And as we work with a lot of the different ecosystem players, a lot of the leading medical-device manufacturers, that’s what Intel is doing in the background: looking at how we can optimize their technology, their workflows, their algorithms, and so forth, to give the clinician the real-time or near-real-time experience they need. And if it’s done correctly, it’s seamless, so they can go about their job as quickly as possible.
Christina Cardoza: And I’m sure when you’re thinking of cloud versus edge, it depends on the device or depends on the outcome that you’re trying to get. Do you need the real-time metrics and insights to have it on the edge? Or is it quality and being able to go through all the data and that being on the cloud?
So I know, Ian, you were talking about different approaches to dealing with things especially in healthcare—and so we have the edge, we have the cloud—but are there any other strategies that healthcare providers or people in the healthcare space or even patients can implement to bring AI into the healthcare industry and any best practices there?
Ian Fogg: So, two things jump out. One is just this broader point, that it isn’t just about imaging and scans and that computer-vision piece. We’ve seen a lot of examples now of AI being used to make the organizational aspects of healthcare, of the hospital, more efficient. Things like operating theaters are incredibly costly assets, and if you can schedule the cleaning and sanitization teams efficiently, you can reduce downtime between operations. That came up in one of the interviews we did for the report.
The other thing that came up very strongly was what’s called federated learning—the idea that when you have a machine-learning model, you want to maintain privacy, but you also want to use a diverse and broad data set to improve the quality of the AI model. A federated-learning approach means you can have multiple hospitals or multiple healthcare facilities contributing to the model, but the data that’s used to improve the model remains within each facility.
And that’s something which enables the AI model to become much more capable, much more sophisticated, but still works within the environment you need around privacy and management. We’ve seen that in some other areas, but it’s particularly relevant in the healthcare space.
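A minimal federated-averaging sketch of that idea follows, assuming each facility trains on its own data and shares only weight updates; the local “training” step here is a placeholder, not a real learner.

```python
import numpy as np

def train_locally(global_weights: np.ndarray, local_data: np.ndarray) -> np.ndarray:
    # Placeholder update: nudge the weights toward a statistic of the
    # local data. A real site would run gradient steps on private records.
    return global_weights + 0.1 * (local_data.mean(axis=0) - global_weights)

rng = np.random.default_rng(0)
hospitals = [rng.random((100, 4)) for _ in range(3)]  # raw data never leaves each site
global_weights = np.zeros(4)

for _ in range(5):
    # Each facility computes an update from its own private data...
    updates = [train_locally(global_weights, data) for data in hospitals]
    # ...and only those updates are averaged into the shared model.
    global_weights = np.mean(updates, axis=0)

print("aggregated model weights:", global_weights)
```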
Christina Cardoza: It’s interesting, as you’re talking about these approaches and strategies and the benefits that the healthcare space gets with this, I’m brought back to what Alex was saying in the beginning: how you aren’t healthcare providers; you’re engineers. And then we have these healthcare providers implementing this technology.
So, Alex, from an engineering perspective, how can hospitals and healthcare providers deploy AI? What challenges or opportunities do they face in this space, and in working with an engineering company like Intel to make it happen?
Alex Flores: Yeah, I think, oftentimes, when we’re working with customers in the ecosystem, it really starts with giving them choice, giving them options when they’re deploying AI. As Ian mentioned, a lot of organizations are deploying in the cloud, and that’s great; it’s a tried method, innovation is happening all the time, and the cloud has obviously been in use for decades.
There are other organizations that are taking a hybrid approach, right? They want the benefits of the cloud, but they also want to be able to access data in real time or near real time at the edge. And then there are other customers looking at an edge-only approach where, as Ian was saying, maybe it’s cost, maybe it’s security and privacy concerns, and so forth.
So for Intel, what we really want to do is walk them through the different options, specifically when we get to the edge. What makes the edge so attractive is that access to patient data in real time or near real time, so clinicians can take advantage of it—especially if they’re trying to triage a situation for a patient. Having access to that data, and being able to run the correct analytics on it at that moment, becomes very crucial.
And then at that point they can determine, “Okay, do I need to save this data? Does it need to be stored in the cloud? Can I send it to maybe a local data center?” and so forth. So for us it’s really showing the customers the ecosystems, the choices, some of the benefits, and then seeing what’s best for their particular implementation.
Christina Cardoza: To paint a picture for the audience here, do you have any examples or case studies you can share with us? And you don’t have to name names, but anyone that came to you, you gave them these options, what the options were, what they chose, and what the result was of that?
Alex Flores: Yeah, we have many different examples, which makes my job really exciting, because I get to see some of these technologies being implemented. And I’ve actually had the benefit of seeing some of them in play. One that comes to mind is patient positioning. So, for example, when a patient is getting a scan and they lie on the table, what often happened in the past is that the patient wasn’t positioned correctly, so the technician would have to redo the scans. Now everything takes longer, because the patient has to get rescanned, and the patient may be exposed to additional radiation that they shouldn’t have been exposed to.
So, having AI-based algorithms that help the technician position the patient correctly before the scan—that’s one example. A second one is around accurate contouring of organs at risk. One of the major bottlenecks for radiation therapy is doing this contouring of the organs, and depending on the image quality there can be a lot of error in it. So AI-based contouring is another area that can really help clinicians speed up their process; it can help automate some of these tasks.
The last example I have is ultrasound, and this is a real story. My wife had a procedure a couple of years ago, and I remember we were driving back and I asked her how the procedure went. And she started describing a situation where she said, “The anesthesiologist came in and they used an ultrasound machine to identify the vein where the anesthesia would be administered.” And I got really excited, because I said, “Oh, I know exactly what algorithm that was, because we were working with the ultrasound manufacturer to optimize it.” Essentially, that algorithm helped identify the vein, so the clinician doesn’t have to do multiple insertions of a needle before finding it.
So those are just three examples; the list goes on and on. That’s why, again, I get so excited about my job: seeing the practicality of the technology being implemented in a solution.
Christina Cardoza: That’s great, seeing it out in the wild, and having a personal experience with some of the technology that you’re working on. That’s definitely something I could have used, and I can’t wait to see it in the real world. I’ve had three children, and with my second one they poked and prodded my arm—it was black and blue—because they couldn’t get an IV line. So with my third one, I was like, “I don’t even want one; don’t even put it near me.” So I really can’t wait to see some of this stuff be more widely adopted.
And we’re talking about diagnostics and imaging and other areas, but like you said, there’s so many options and so many different places AI and healthcare could go. You guys mentioned a couple times dictating notes for doctors and things like that. So, Ian, I’m curious, from a research perspective, where is this space going? Do you have any future-looking ideas on how AI usage is going to continue to evolve in healthcare?
Ian Fogg: I think this could evolve in many areas. I mean, that ultrasound example is fascinating, because ultrasound is a very cost-effective, very accessible type of scanning. And what you are doing with that is making a tool that’s been around for decades more effective; it’s augmenting an existing tool.
I think some things we’re clearly going to see: cloud-based AI will continue, but we’re going to see increasing use of AI on the edge, too, for that responsiveness piece. The other thing I think we’ll see is that, alongside these very large AI models, more smaller, focused models will come to market for a particular task or use. They’re more portable, and they’re even easier to put onto edge devices. And we’ve seen that in other fields outside of healthcare too.
I think we’ll see this multimodal element. So, multimodal means audio-based, video-based, still-image-based, and text-based. And that means both a way of interacting with the model and also what the model is able to understand and perceive about the world. So it might use a camera to identify whether there are queues or crowds forming in certain parts of the hospital.
The transcription piece is interesting. That means you are capturing information that may otherwise never have been captured, maybe a patient-doctor conversation. And then you can summarize that conversation so you can add things into the medical record that maybe aren’t being captured at the moment, but also make it accessible and surfaceable and findable later.
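The shape of that transcribe-then-summarize flow might look like the sketch below; transcribe_audio and summarize are hypothetical stand-ins for whatever speech-to-text and summarization models a deployment actually uses.

```python
def transcribe_audio(audio_path: str) -> str:
    # Hypothetical stand-in: a speech-to-text model would run here.
    return "Patient reports mild chest pain for two days. Doctor orders an ECG."

def summarize(transcript: str) -> list[str]:
    # Hypothetical stand-in: a summarization model would extract key points.
    return [s.strip() for s in transcript.split(".") if s.strip()]

def update_record(record: dict, audio_path: str) -> dict:
    transcript = transcribe_audio(audio_path)        # capture what was said
    record["raw_transcript"] = transcript            # findable later
    record["visit_summary"] = summarize(transcript)  # surfaced in the record
    return record

print(update_record({}, "visit_recording.wav"))
```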
I think there are other things beyond that. AI is very good at correlating trends across different data sets, and this could be used more in a public-health context. AI models can’t establish causation, so when you find those correlations, you’ll still need to put them in front of a researcher or a clinician to validate that it’s a real effect, not just one of those random correlations. But it will probably uncover underlying causes for conditions, new ways of approaching healthcare that we haven’t thought about before. And then there are just so many uses. You can look at this right across the board—everywhere that technology’s being used in a hospital facility, I think.
Christina Cardoza: Yeah, so many opportunities. And in this conversation we stuck a little bit to the devices and the implementation and the data aspect of it, but I’m sure we could go off in many different directions. We’re talking about the AI models, the size of the AI models, what they can do, but at risk of opening up a can of worms—because I’m sure we’ve only scratched the surface, and we could keep going on and on—I’m going to end the conversation here. But what I’d love to do is throw it back to each of you for any final thoughts or key takeaways you want to leave our listeners with today as they prepare for the AI evolution in healthcare and what they can expect to see. So, Ian, I’ll start with you on this one.
Ian Fogg: I think one of the big things here is that we’ve seen a lot of hype around internet-based LLMs. I would say, don’t be discouraged by the quality issues of things like ChatGPT or Gemini or Claude. When you start looking at these medical AI models, they’re typically trained on pre-validated data sets, not the internet, so the accuracy level is much greater.
I think additionally we’ve seen approaches where you can use one AI model to validate the output of another AI model, and that can raise the quality of the output too. So the quality issues you might see when you’re playing with stuff online aren’t applicable here; this is a different kind of space. And in some cases people are using in-house, open source–based models, so they have greater control and ownership of them too. So, don’t be discouraged by what you might see in other areas—on your phone or on your computer or on the internet. This is a different space. The quality here is much, much higher.
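One way to picture that cross-checking pattern is the sketch below; both models are hypothetical stand-ins, and the threshold is an illustrative assumption.

```python
def generator_model(question: str) -> str:
    # Hypothetical stand-in for the primary medical model's answer.
    return "Findings consistent with a benign nodule."

def validator_model(question: str, answer: str) -> float:
    # Hypothetical stand-in: a second model scores the first one's answer.
    return 0.92

def answer_with_validation(question: str, threshold: float = 0.8) -> str:
    draft = generator_model(question)
    if validator_model(question, draft) < threshold:
        return "Flagged for human review"  # fail closed: route to a clinician
    return draft

print(answer_with_validation("Assess the nodule in this scan"))
```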
Christina Cardoza: Awesome. And, Ian, I’ll make sure to provide a link to that report, so that those listening can dig into some of the things we talked about even further.
Alex, before we go, any final thoughts or key takeaways you want to leave our listeners with?
Alex Flores: Yeah. I think Ian mentioned a really good thing, and that’s the miniaturization of AI. Essentially, we’re going to continue to see that pattern, and we’re going to learn what the right AI is, at the right time, on the right device.
And the other thing that I wanted to mention is that when you’re doing AI at the edge, on the device, power becomes a really important consideration. Because if you think about it, it’s kind of a snowball effect: more power means bigger fans, a bigger device, a new form factor, and so forth. Oftentimes you don’t need that; you can run the right amount of AI at the edge without needing to redesign or reconfigure your device. There’s new technology, new compute, that allows you to do that.
So, as we continue to evolve, as more and more artificial intelligence goes to the edge, it’s going to be easier and easier to run at the edge.
Christina Cardoza: And easier to deploy AI at the edge also. Like you said, these devices and this technology are getting smaller. It’s amazing what you can do with the infrastructure you already have, without a lot of new hardware or equipment.
I can’t wait to see where else the space goes—other innovations and technologies from Intel. So, I just want to thank you both again for joining us today and for talking about this topic. Thank you to our listeners also for coming in and listening. Until next time, this has been “insight.tech Talk.”
The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.
This transcript was edited by Erin Noble, copy editor.