AI, IoT, and Edge Computing: The Future of Smart Retail

Retailers have always valued data—collected, analyzed, and implemented it to keep abreast of their customers’ wants and needs. But in times past, they had to wait until the end of the quarter or the end of the year to crunch those numbers and see which way the wind was blowing. It’s pretty clear that consumer trends move much too fast nowadays for that kind of cycle to be useful.

AI, computer vision, and edge computing—these technologies have stepped in to make data processing much more efficient. And that’s not all: They are also involved in retail solutions that can affect everything from controlling inventory to turning mundane jobs into more valuable and fulfilling positions.

We explore some of these solutions in the world of smart retail with Silvia Kuo, Business Development Director for the EMEA region at enterprise computer hardware company ASUS. She guides us through the challenges retailers face, the importance of IoT partnerships to these solutions, and the role that technologies like AI play in changing the way we shop (Video 1).

Video 1. Silvia Kuo from ASUS talks about the latest smart retail opportunities and technologies. (Source: insight.tech)

What challenges do businesses face in getting to smart retail?

Smart retail has been around for a while, and everyone’s been talking about it. Of course, as with any change, there’s also a bit of resistance. There’s a lot of innovation and adoption happening, but I think the industry has probably been looking for a direction. I think that a few years from now, retail might look a bit different from how we know it today.

For example, logistics—there’s a lot that we can do with logistics. Sometimes today the process is still very manual, things like tracking the stock when it comes in. So instead of sending someone to count it, you can have a system that can just do it for you very quickly. You can also have an alerting system that lets you restock quickly without having to send someone to see if the shelves are empty.

Or space optimization. Now with computer vision we can look at the whole store and see a heat map of which areas people visit the most—because of the layout or maybe because of the brand you put there. And with this knowledge you can, for example, move the best brands during the high season in Q4, or you can adjust the rent or the fees for a vendor.
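
As a rough illustration of how such a heat map can be built, here is a minimal sketch that accumulates visit counts over a store grid from detected shopper positions. The detector interface, grid size, and coordinates are assumptions made for this example, not details from ASUS.

```python
import numpy as np

# Hypothetical store floor discretized into 1 m x 1 m cells (20 m x 30 m).
GRID_H, GRID_W = 20, 30
heatmap = np.zeros((GRID_H, GRID_W))

def accumulate(detections):
    """Add one frame of detected shopper positions (in meters) to the heat map."""
    for x, y in detections:
        row, col = int(y), int(x)
        if 0 <= row < GRID_H and 0 <= col < GRID_W:
            heatmap[row, col] += 1

# Example: positions a person detector might emit for a single frame.
accumulate([(4.2, 7.9), (12.5, 3.1), (4.6, 8.0)])
busiest = np.unravel_index(heatmap.argmax(), heatmap.shape)
print(f"Most-visited cell so far: {busiest}")
```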

One of the main challenges retailers are dealing with is a lack of personnel—people not wanting to do these kinds of operational, routine jobs. Retailers are quite desperate to find solutions that are not too costly and that will create jobs that are more meaningful. So instead of filling up the shelves, people will be managing systems that will fill up the shelves for them. Or instead of asking, “What do I need to buy for the next quarter?” and completing an Excel form, they can analyze it through a computer and make a final decision or just review it.

This kind of management of systems is where we see that the industry is going. A lot of people feel that AI is going to be a threat and replace all of our jobs. But I think that it’s a shift, and it can improve people’s lives.

What is the role of advanced technologies like AI in this transformation?

In retail the things we do always fall into two categories: Either it is gathering data to analyze it for making data-driven decisions, or it is automating processes. AI, computer vision, edge computing—they are all technologies behind something; it’s more of a horizontal. For example, AI can help with the engagement of customers, because nowadays we see that stores are not really a place just to purchase but more of a customer-experience or brand-experience space.

We have seen other things like digital signage targeting an audience. You can instantly show a group of people something that is targeted to them, or you can even be interactive and ask them questions. This is what AI is doing now: interacting with customers, analyzing a situation in real time behind the scenes, and giving feedback. In the past you had data, but you didn’t always know what to do with it. Now you can understand what the behavior of your audience is in one district in the country as opposed to another, and adjust, say, the stock according to that analysis.
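
The selection logic behind that kind of targeted signage can be as simple as mapping detected audience attributes to content, as in the sketch below. The attribute names and clips are hypothetical stand-ins, not part of any ASUS product.

```python
# Hypothetical mapping from audience attributes, as estimated by a
# computer vision model, to digital-signage content.
PLAYLIST = {
    ("18-30", "morning"): "energy_drink_promo.mp4",
    ("30-50", "evening"): "dinner_deals.mp4",
}
DEFAULT_CLIP = "brand_story.mp4"

def pick_clip(age_band: str, daypart: str) -> str:
    """Choose the clip most targeted to the detected group, else a default."""
    return PLAYLIST.get((age_band, daypart), DEFAULT_CLIP)

# Example: a vision model estimates who is in front of the display.
print(pick_clip("18-30", "morning"))   # energy_drink_promo.mp4
print(pick_clip("50+", "morning"))     # brand_story.mp4
```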

Computer vision is also a horizontal technology—for example, for recycling. A lot of retail grocery stores have recycling machines, and the technology can determine what kind of empty product is being put into the machine. In some cases, in Europe, they give money back, so you can spend it in the store.

How do ASUS solutions avoid siloing?

ASUS has a comprehensive, holistic type of approach, because that’s the nature of a computer company, right? We are, let’s say, the brain behind all of this innovation; it’s where everything runs. So when we are looking into a problem, we have to look at the solution as a whole and then see what components to put into that solution to solve the problem.

What type of infrastructure or investment is necessary to start?

We always try to use what is there, like the cameras that are already doing security. But we adjust them to use that same video stream for analysis on the edge computer afterward. So we reuse these kinds of things. Of course there is some investment involved, because it’s a technology that wasn’t there before—some sensors or some cameras that have certain different angles or some signage to communicate with the customer—and these are investments that have to be made.

A new technology such as AI needs a lot of compute, but many times stores have their own little data centers. We can collect the data on the edge, in the stores, and pull it back to these little data centers, where all the analysis and decision-making can happen. As much as we can, we try to reuse.

Can you share some use cases of businesses leveraging these solutions?

I have two examples: one that is more about problem-solving and another one that is more of an enhancement. The first one was this grocery retailer that was looking for an automated way to alert them of empty shelves—especially in the fresh-produce area, because it was a very manual process there. They also wanted to combine that with a way to adjust pricing throughout the day depending on the performance of each product.

We used computer vision to first identify the produce that was there. Throughout the day at certain intervals it would take different images and do some analysis to determine the level of stock. Then this would create alerts in the central system, and also all of the operators would see those alerts and know: “I have to refill the apples and the oranges now.”

Based on the same AI technology of recognition of the product, we also automated the pricing. So if, for example, the apples were not selling very well on one particular day and at 4:00 they wanted to start clearing them, they could automatically update the e-tags below the apples with the new information and price.
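
Here is a minimal sketch of how that capture, analyze, alert, and reprice loop might be wired together. Every interface in it (camera, vision model, sales tracker, alerting, e-tag service) is a hypothetical stand-in invented for illustration; this is not ASUS code.

```python
import time

RESTOCK_THRESHOLD = 0.25    # alert when a bin looks less than 25% full
CLEARANCE_HOUR = 16         # start discounting slow sellers at 4:00 p.m.
CLEARANCE_DISCOUNT = 0.30

def run_shelf_loop(camera, vision_model, sales, alerts, etags, interval_s=900):
    """Periodically estimate shelf stock from images, raise restock alerts,
    and cut prices on slow sellers late in the day (interfaces hypothetical)."""
    while True:
        image = camera.capture()
        stock = vision_model.estimate_stock(image)   # e.g. {"apples": 0.15, ...}
        for product, fill in stock.items():
            if fill < RESTOCK_THRESHOLD:
                alerts.send(f"Refill {product} now (level {fill:.0%})")
        if time.localtime().tm_hour >= CLEARANCE_HOUR:
            for product in sales.slowest_today():    # e.g. ["apples"]
                etags.set_price(product, discount=CLEARANCE_DISCOUNT)
        time.sleep(interval_s)
```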

The second example is more about enhancement and improving understanding of the customer. This was a technology that we developed together with a software vendor of ours in France. It involved putting a sensor and a camera out with certain products to understand how people interacted with them.

This solution was looking at things like: How long do customers stand in front of the brand? Which products do they pick up? How long do they interact with the product? Did they buy it or put it back? So there was a lot of data accumulated there, and the vendor got a lot of interest from the brands themselves, because brands want to understand, when they launch a new product, how people like it.

What is the value to ASUS of its technology partnerships?

Intel has been a long-term partner to ASUS. This relationship was really crucial when the IoT department was created. Intel is also very supportive of its partners, for example, engaging us with end customers. Many of the customers in the examples I gave earlier were actually people that Intel introduced us to. And when Intel is developing something for AI, it is using OpenVINO to implement these new features, and ASUS is asked to be a tester of those features. In many cases, we are one of the first to try out new Intel technology, so we’re able to implement it in a lot of the new products that we launch.

We also market the features and solutions out there. There are lots of choices of technology, and we offer different kinds, but when we see a company like Intel coming into this space and trying to optimize and democratize it—because it’s not just about selling more computers but about how you make it accessible to people—we want to support that.

Regarding the partner program that we run ourselves at ASUS, I always say that in IoT it’s very difficult to do something on your own. There are so many components, right? We see camera makers; they are optimizing their software to make AI more possible or trying to put chips on their cameras so that it’s easier for the edge computer to analyze more data. Everybody understands this: It’s hard to do everything alone, so you need partnerships.

Where do you think smart retail is going from here?

It’s a broad question, but I will try to guess. I think one thing is that, as I said before, we will see a lot more automation of operations and also people having more meaningful roles, more interesting jobs. Another thing is a lot of interactive devices and kiosks, and AI will help with this and with the problem of having enough staff to attend to all of the guests. We are seeing a lot of voice AI that is very accurate and that even has accents and slang.

Also, I think that a lot of retail spaces will become showrooms, really; they won’t just be places to buy things. And I would even dare to say that in some of these spaces you would just place orders and have them shipped to your home; you won’t even have to wait for the clerk. They will be more of a showroom, a tryout room.

Sometimes in Asia there’s this obsession for making things faster and more seamless, right? And I think that will be something that will expand across the world, making experiences more seamless and having a nice experience instead of having to wait.

Related Content

To learn more about the evolution of retail, listen to Smart Retail’s End-to-End Transformation and read POC Shows What’s In Store for Retail Analytics. For the latest innovations from ASUS, follow them on Twitter/X at @asus and LinkedIn.


This article was edited by Erin Noble, copy editor.

Transforming Healthcare Operations and Patient Care with AI

When your organization delivers patient care, every process behind the scenes must be optimized to keep the focus where it belongs—on the patient. But does your infrastructure support that goal effectively?

AI has become central to driving healthcare efficiencies and improving operations—both in patient-facing care and within the complexities of medical infrastructure. As highlighted in a recent report from CCS Insight, this trend continues to grow, reshaping how healthcare systems operate at every level.

We recently caught up with Ian Fogg, Research Director of Network Innovation from the research firm CCS Insight, to talk about some of his recent findings in the healthcare space. He was joined by Alex Flores, Director of the Global Health Solutions Vertical at Intel, to discuss the impact of AI on the healthcare space, how data figures into the equation—and what we can figure out from the data—and where healthcare stands on the question of cloud vs. edge (Video 1).

Video 1. Industry thought leaders talk about the role of AI in healthcare. (Source: insight.tech)

What can you tell us about this latest CCS Insight report and its findings?

Ian Fogg: One finding, I think, is just how extensive AI usage in healthcare already is. In the last couple of years, it has really arrived in the popular mindset and mainstream media, but it’s clear that in healthcare it was already well embedded and widely used. As of August 2024, the FDA had approved 950 AI-enabled medical devices across all categories. That’s an enormous number, and of course it’s growing all the time.

I think the other thing that’s really striking is how much AI is moving from diagnostics and imaging and research into other parts of the healthcare ecosystem—organizational tasks, room management, the tying together of disparate systems, multimodal-input transcription. There’s just this enormous, burgeoning range of activities right across the sector.

And it’s not only devices, and not only things directly related to healthcare; there are systems in hospitals or offices or buildings that improve operations and efficiencies, but as a consumer you don’t see them happening right up front.

What are you seeing in the healthcare space from an engineering perspective?

Alex Flores: As Ian mentioned, we are absolutely seeing AI being rapidly adopted in healthcare; a lot of new compute technology is coming into play. And, again, what’s interesting is that a lot of that is analytics going on behind the scenes—which is where we want it to be. But it does impact the clinician’s workflow, hopefully allowing them to do their job faster, better, and more easily, so that they can spend more time with the patient.

Or think of the importance of latency. When a radiologist in a triaging situation brings up an image, they want to be able to see that image in real time, or near-real time, because every second counts. Technologies like compression and decompression are all there working in the background.

And Intel is also there in the background, really looking at how we can optimize medical-device technology—its workflows, its algorithms—so that the clinician can have that real-time or near-real-time experience. And if it’s done correctly, it’s seamless.

Another thing that’s unique about healthcare is that roughly one-third of the world’s data is coming from this space, and maybe 5 percent of it is actually turned into actionable insights. So there’s this tremendous opportunity to use AI to really unlock those insights from that data.

Then you layer in some of the macro trends that are happening in healthcare, including an aging population and the fact that people are getting sicker and being diagnosed with multiple chronic diseases. And then you have the fact that there’s a global shortage of clinicians.

“#AI becomes more important in order to allow clinicians to increase their efficiencies so that they can really focus on the patient and on patient outcomes.” – Alex Flores @intel via @insightdottech

Because of that, AI becomes even more important, and the rapid adoption of that AI becomes more important in order to allow clinicians to increase their efficiencies so that they can really focus on the patient and on patient outcomes.

Can you expand on the data aspect of healthcare?

Ian Fogg: Alex said that AI is making clinicians more efficient, and you can see that in the way that data is being analyzed; you can see data volumes going up enormously. One study I remember said that the size of a CT scan could be 250 megabytes for an abdomen, a staggering one gigabyte for the heart, and that digital pathology could be even greater. Those are enormous, enormous amounts of data for a single scan. Compare that with a smartphone camera that might produce a 5-megabyte image.

And one of the other things that’s striking is that you can’t use the same techniques to compress medical-imaging data that you can use for a photograph, because the tools used to compress a photograph are lossy. They’re perceptual-compression algorithms. You can’t use those for medical images. You have to look at the full image, because you need to have all that detail so you can spot irregularities in the scan. So that just makes the challenge even harder.
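
To make that distinction concrete, here is a small sketch using a generic lossless codec (zlib, chosen purely for illustration; clinical systems use formats such as lossless JPEG 2000 inside DICOM). Lossless compression restores every pixel exactly, which is what diagnostic imaging requires; a perceptual codec by design does not.

```python
import zlib
import numpy as np

# Stand-in for one slice of a CT volume (smooth synthetic data; real
# scans are far larger and stored in formats such as DICOM).
ramp = np.linspace(0, 1, 512)
scan = (np.outer(ramp, ramp) * 4095).astype(np.uint16)

compressed = zlib.compress(scan.tobytes(), level=9)          # lossless
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint16)
restored = restored.reshape(scan.shape)

assert np.array_equal(scan, restored)   # every pixel recovered exactly
print(f"{scan.nbytes} bytes -> {len(compressed)} bytes, zero detail lost")
# A perceptual codec (e.g., standard JPEG) would shrink the file further
# but discard exactly the fine detail a radiologist needs to inspect.
```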

So how does AI, and AI at the edge, come into play?

Ian Fogg: AI has two slightly competing implications. One is that it means you can analyze the data more quickly and have an efficiency benefit. But, on the other hand, one of the companies we talked about in the report framed it the other way around and said, “Actually, because you’ve got this AI tool that can analyze more data, what you can do is analyze a greater part of a biopsy.” That means that if there are just a few cells in a cancer scan that are irregular, you are more likely to spot them because you’ve scanned a bigger sample.

That means your scan is more accurate, which means you’ll identify problems and healthcare issues earlier, which means you’ll save costs and reduce the load on the healthcare system down the line. So there are some interesting dynamics there that are striking.

The other piece is that when the clinician needs a very responsive experience, if you can do it at the edge rather than the cloud, it can be faster. It’s also easier to make it private, because the data can stay closer to where it’s being captured, closer to the patient. And that’s a trend we’ve seen in many areas with AI: Things start in the cloud, and then as edge devices get more capable, things move onto edge devices to get that performance benefit.

What’s the best way to think about implementing AI in the healthcare industry?

Ian Fogg: Two things jump out. One, as I said before, is that it isn’t just about imaging and scans and computer vision; we’ve seen a lot of examples of AI being used to make the organizational aspects more efficient. Operating theaters are incredibly costly assets, and if you can schedule their cleaning and sanitization efficiently, you can reduce downtime between operations.

The other thing is what’s called federated learning. When you have a machine learning model, you want a broad and diverse data set to improve the quality of that model, but you also want to maintain privacy. A federated learning approach means that you can potentially have multiple hospitals or healthcare facilities contributing to the model, making the model more capable and more sophisticated, but the data that’s used to improve it remains within the facility.
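
A toy sketch of that federated-averaging idea: each hypothetical hospital runs a training step on data that never leaves its site, and only the resulting model weights are averaged into the shared model.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1):
    """One site's gradient step for a linear model; X and y stay on-site."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three hypothetical hospitals, each holding private data (X, y).
sites = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
weights = np.zeros(5)

for _ in range(50):
    # Each site trains locally; only the updated weights are shared.
    local_weights = [local_update(weights, X, y) for X, y in sites]
    weights = np.mean(local_weights, axis=0)   # federated averaging

print("global model after 50 rounds:", np.round(weights, 3))
```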

How do you approach the deployment question from an engineering perspective?

Alex Flores: It starts with giving our customers options. As Ian mentioned, a lot of organizations are deploying in the cloud. Other organizations are taking a hybrid approach: They want the benefits of the cloud, but they also want to be able to access data in real time or near-real time at the edge. And then there are other customers that are looking at an edge-only approach, maybe for reasons of cost or reasons of security and privacy. It’s about showing the customers the ecosystems, the choices, the benefits, and then seeing what’s best for their particular implementation.

Can you share any real-world examples?

Alex Flores: One that comes to mind is patient positioning during a scan. Oftentimes the patient isn’t positioned correctly on the table, so the technician has to redo the scan. It takes longer, and the patient may be exposed to additional radiation. So AI-based algorithms can help with positioning the patient correctly before the scan—that’s one example.

A second one is around one of the major bottlenecks for radiation therapy, and that is contouring: the delineation of radiation targets or of nearby organs that might be at risk. Based on the image quality, there can be a lot of error in that, so having AI-based contouring can help the clinicians improve their planning and their process.

A last example I have is on ultrasound, and this is a personal story. My wife had a medical procedure a couple of years ago, and she told me afterward that the anesthesiologist had used an ultrasound machine to identify the vein where the anesthesia would be administered. And I got really excited. I said, “I know exactly what algorithm that was!” because we were working with the ultrasound manufacturer to optimize that algorithm.

Seeing the practicality of the technology being implemented with a solution is a really great aspect of my job.

How will AI in healthcare evolve from here?

Ian Fogg: That ultrasound example is a fascinating one, because it’s about augmenting an existing tool. Ultrasound is a very cost-effective, very accessible type of scanning that’s been around for decades, and you are making it more effective.

Also, we’re clearly going to see cloud-based AI continue, but we’re going to see increasing use of AI on the edge, too, for that responsiveness piece. Another thing I think we’ll see is more small AI models come to market for a particular use or task. And as they become more portable, they’ll become even easier to put onto edge devices. We’ve seen that in other fields outside of healthcare. 

Alex Flores: I do want to mention that when you’re doing AI at the edge on a device, power becomes a really important feature. If you think about it, it’s kind of a snowball effect: With more power, you need bigger fans; so you’re going to need a bigger device, a new form factor, and so forth. But oftentimes you don’t need that; you can run the right amount of AI at the edge without needing to redesign or reconfigure your device. There’s new technology, new compute that allows you to do that. So it’s going to be easier and easier to run at the edge.

Ian Fogg: I also think we’ll see a multimodal-input element—audio based, video based, still-image based, and text based. And that means both a way of interacting with the model but also what the model is able to understand and perceive about the world. So it might be able to use a camera to identify if there are queues forming in certain parts of a hospital.

Lastly, AI is very good at correlating trends across different data sets. This could be used in a public health context more. AI models can’t do causation, so when you find those correlations, you’ll still need to push them in front of a researcher or a clinician for validation. But it will probably uncover underlying causes for conditions and new ways of approaching healthcare that we haven’t thought about before.

Related Content

To learn more about healthcare at the edge, listen to AI at the Edge: Take the Leap in Healthcare. For the latest innovations from CCS Insight, follow it on X/Twitter at @ccsinsight and LinkedIn. For the latest innovations from Intel, follow it on X/Twitter at @intel and LinkedIn.


This article was edited by Erin Noble, copy editor.

AI Innovation: Digital Automation with Cognitive Services

Artificial intelligence transforms industries, enabling businesses to boost efficiency, reduce costs, and improve reliability. In manufacturing, where profit margins are tight and unplanned downtime can cost hundreds of thousands of dollars per day, AI-driven automation is becoming a game changer.

Take the paper industry—press operators once had to manually monitor the wet line, a water-related issue that could halt production. But with the help of computer vision and cameras, they can integrate AI-powered visual inspection to automate the process.

Automation like this is feasible thanks to cognitive services such as those from AI solution provider byteLAKE, which empowers manufacturers to deliver faster and more accurate results while preventing costly disruptions. Now, other industries reap the same benefits, as byteLAKE expands its Cognitive Services to provide real-time insights and next-generation AI automation across multiple sectors.

“Previously, we only offered AI for cameras, focusing on visual inspection,” says Marcin Rojek, Co-Founder of byteLAKE. “Now, we’ve expanded into more sophisticated use cases—far beyond cameras counting products on conveyor belts, checking and reading labels, identifying scratches, and analyzing colors. Our Cognitive Services remain a collection of industrial AI models, but we’ve broadened their scope to process a wider range of data types. We’ve enhanced their capabilities to analyze not only videos, images, and audio files but also data from diverse sources, such as IoT sensors and manufacturing management or error reporting systems.”

Expanding AI Capabilities Beyond Manufacturing

One area where byteLAKE sees significant opportunity is with energy utilities and power providers. As these companies face growing pressure to improve efficiency and sustainability, they look for AI capabilities that can precisely predict demand across different locations and user needs—resulting in enhanced operational efficiency, reduced waste, and optimized resource management.

“When you run a heating utility company, you want to do two things,” explains Rojek. “First, you want to optimize operations to avoid wasting energy. At the same time, you need to ensure that the required energy is delivered in alignment with demand. This requires accurate demand forecasting.”

“#AI is becoming like a calculator—you no longer have to do all the calculations on paper. Instead, you can focus on more strategic and creative tasks.” – Marcin Rojek, @byteLAKEGlobal via @insightdottech

By leveraging AI automation, byteLAKE analyzes real-time data alongside historical trends, such as past consumption behaviors and weather forecasts, to automatically generate accurate demand predictions.
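
As a hedged sketch of what such a forecast can look like (byteLAKE’s actual models are not public), the example below fits heat demand to outside temperature and the previous day’s consumption with ordinary least squares, then predicts tomorrow’s demand from a weather forecast. All figures are invented.

```python
import numpy as np

# Hypothetical history: outside temp (C), yesterday's demand, today's demand (MWh).
temp     = np.array([-5.0, -2.0,  1.0,  4.0,  8.0, 12.0])
prev_day = np.array([310., 295., 270., 240., 210., 180.])
demand   = np.array([300., 280., 255., 230., 200., 170.])

# Least-squares fit: demand ~ a*temp + b*prev_day + c
X = np.column_stack([temp, prev_day, np.ones_like(temp)])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

forecast_temp, todays_demand = 6.0, 215.0   # tomorrow's weather, today's load
prediction = coef @ np.array([forecast_temp, todays_demand, 1.0])
print(f"predicted demand tomorrow: {prediction:.0f} MWh")
```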

Additionally, byteLAKE’s Cognitive Services provide predictive maintenance across infrastructure, helping customers identify potential failures before they occur and optimize costs.

“AI is becoming like a calculator—you no longer have to do all the calculations on paper. Instead, you can focus on more strategic and creative tasks,” says Rojek.

The power industry presents opportunities similar to those of heating utilities. Electricity providers must accurately predict and manage demand while maintaining grid reliability. Through predictive maintenance and intelligent load balancing, AI can help enhance energy conservation efforts and improve overall efficiency.

The food production industry is another area where byteLAKE’s Cognitive Services are making an impact. “We can identify key challenges in the food industry, and with AI analyzing data across the production line, it helps calibrate machinery, improve quality control, and reduce waste,” Rojek explains. “By leveraging predictive maintenance through data analytics, our solutions not only reduce costs but also optimize production, enhance operations, and drive revenue growth.”

Making AI Capabilities a Reality

byteLAKE can seamlessly integrate into these different industries because it works with the software that businesses already rely on—such as Manufacturing Execution Systems (MES) and Computerized Maintenance Management Systems (CMMS). Customers using these systems can easily incorporate byteLAKE’s Cognitive Services or adopt pre-integrated solutions. Additionally, byteLAKE is compatible with industrial standards like SCADA and with heating utility software, ensuring broad applicability across multiple sectors.

And because all deployments are on-premises, AI and cognitive services function independently without transmitting any data to external or public cloud services. This ensures that all data remains securely within a company’s infrastructure, processed locally on its hardware.

Another key factor in byteLAKE’s success is its partnership with Intel. By leveraging Intel’s latest technologies—including OpenVINO for cross-platform AI deployment, VNNI as part of Intel® Core processors, and Intel® AMX as part of Intel® Xeon®—byteLAKE optimizes its AI solutions for peak performance. These integrations enable the software to scale effortlessly from edge devices to enterprise-level HPC clusters. Customers benefit from GPU-level performance on CPUs, minimizing the need for costly hardware investments while maintaining superior AI processing capabilities. This scalability allows businesses to start with small AI implementations and expand seamlessly as their operational needs grow.
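
For readers curious about the OpenVINO piece of that stack, here is a minimal CPU-inference sketch using the public OpenVINO Python API. The model path and input shape are placeholders; it illustrates the general pattern rather than byteLAKE’s code.

```python
import numpy as np
import openvino as ov

core = ov.Core()
# Placeholder path: a model already converted to OpenVINO IR format.
compiled = core.compile_model("cognitive_model.xml", device_name="CPU")

# Dummy input matching an assumed NCHW image-classification input.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print("predicted class index:", int(result.argmax()))
```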

Pushing the Boundaries of AI Innovation

Going forward, byteLAKE plans to continuously adapt its Cognitive Services to meet the evolving needs of industries such as manufacturing, smart cities, energy, and food production. The company is also exploring Generative AI (GenAI) to make AI interactions more intuitive and human-like.

“In the future, Cognitive Services will feature a human-like interface powered by Generative AI,” says Rojek. “Our goal is to allow users to interact with their factories or energy infrastructures in a ChatGPT-like manner—asking questions like, ‘How is my plant doing?’ and receiving tailored insights instantly. This will revolutionize how industries engage with their systems, making AI not just a data processor but an intelligent assistant.”

Beyond advancing AI-human interaction, byteLAKE focuses on ensuring seamless integration with existing business software. Future deployments will make it even easier to embed Cognitive Services into industrial workflows, eliminating the need for standalone solutions.

For now, byteLAKE remains laser-focused on delivering AI-driven models tailored to industries such as papermaking, manufacturing, food production, and power/heating utilities. These solutions transform raw data into actionable insights, helping businesses optimize energy use, improve production efficiency, and maintain high-quality standards.


This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Power the Future: The Path to Smarter, Greener Grids

We often take for granted the energy grid quietly powering our homes, businesses, and increasingly electrified lives. It’s infrastructure we just expect to work and always be there. But the current grid is aging and becoming increasingly vulnerable to failure. That’s why there is an ecosystem of innovators working tirelessly behind the scenes to develop the smart grid—a dynamic, digitized, and highly efficient energy system capable of meeting today’s demands and tomorrow’s challenges.

“The grid is not smart enough today and requires much more digitization to make it more efficient—but also flexible and decarbonized,” says Valerie Layan, Head of Power & Grid Europe at Schneider Electric, a global specialist in energy management and automation.

The push for a smarter power grid is fueled by two key trends: rapid growth of renewable energy and massive electrification of society.

Wind and solar power transform how energy is generated, offering clean, sustainable alternatives to fossil fuels. But these renewable sources are inherently variable—solar panels generate electricity only when the sun shines, and wind turbines depend on favorable weather, for example. Managing this variability while maintaining a steady energy supply demands a grid both flexible and intelligent.

Electrification is another major driver. From electric vehicles (EVs) to heat pumps, society increasingly turns to electricity to power not only homes and industries but also transportation systems. In fact, global electrification levels are projected to rise from around 23% of energy consumption today to as much as 50% or more by 2050, depending on the region, according to Philippe Vié, Global Group Lead for Energy, Utilities & Chemicals at Capgemini, a global leader in technology services. This shift places enormous pressure on the existing grid infrastructure, never designed for such dynamic and complex energy flows.

Challenges of Building a Smarter Grid

Creating a smart energy grid requires balancing electricity generation and consumption while maintaining reliability. Unlike other resources, electricity is difficult to store, and demand must match supply in real time.

Listen to the “insight.tech Talk” episode: Partnerships Power the Smart Grid of the Future

Historically, the grid relied on centralized power plants to generate electricity in a predictable, one-directional flow to consumers. Today, renewable energy generation is distributed across homes, commercial buildings, and industrial facilities. Consumers are becoming “prosumers,” generating their own electricity and feeding surplus energy back into the grid.

This shift requires the grid to adapt to bidirectional energy flows—a complex task that demands standardization, collaboration, and cutting-edge technologies.

And all of this must happen in a cohesive, collaborative manner across the energy sector. A lack of standardized protocols and roadmaps poses a significant challenge. Common standards ensure interoperability between diverse systems and technologies. Governments, regulators, and industry stakeholders must work together to establish consistent guidelines that facilitate global scalability.

Organizations like the Edge for Smart Secondary Substation (E4S) Alliance are stepping up to create a standards-based, secure architecture for utilities. “The only way we can achieve what we need as an ecosystem is through modular and scalable architecture,” says Paul O’Shaughnessy, Sector Head of Energy & Utilities at Advantech, a hardware manufacturer. This approach allows for incremental upgrades, ensuring systems stay current while avoiding complete overhauls.

Technologies Enabling the Smart Grid

In addition to collaboration and standards, a suite of advanced technologies drives the transition to a smarter energy grid.

    • Advanced Distribution Management Systems (ADMS): These systems optimize grid operations by managing outages, balancing load and generation, and reducing technical losses. ADMS helps utilities maximize existing infrastructure while integrating renewable energy.
    • Digital twins: Virtual models of physical systems simulate grid performance, train operators, and identify potential issues. This proactive approach minimizes downtime and enhances resilience.
    • Virtualization: Virtualizing grid infrastructure enables utilities to process vast amounts of edge data, such as information from substations and smart devices. Virtualization also supports rapid responses to changing conditions.
    • Energy storage solutions: Emerging technologies like battery systems and hydrogen storage stabilize the grid and provide flexibility. Demand-response programs incentivize consumers to adjust energy use during peak times, further supporting grid stability.
    • Edge AI: Smart meters and sensors at the grid’s edge enhance real-time visibility and control, helping utilities predict and manage energy flows effectively (see the sketch after this list).

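As a small, hypothetical illustration of the edge AI item above, an edge gateway could flag anomalous smart-meter readings locally with a rolling statistic and send only alerts upstream. The thresholds and readings below are invented.

```python
from collections import deque
import statistics

WINDOW, Z_LIMIT = 96, 3.0   # ~24h of 15-minute readings; 3-sigma alert

readings = deque(maxlen=WINDOW)

def ingest(kwh: float) -> bool:
    """Return True if this smart-meter reading looks anomalous locally."""
    anomalous = False
    if len(readings) >= 10:
        mean = statistics.fmean(readings)
        std = statistics.stdev(readings)
        anomalous = std > 0 and abs(kwh - mean) / std > Z_LIMIT
    readings.append(kwh)
    return anomalous

for value in [1.1, 1.2, 1.0, 1.3, 1.1, 1.2, 1.0, 1.1, 1.2, 1.1, 9.8]:
    if ingest(value):
        print(f"alert sent upstream: unusual load {value} kWh")
```
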
Together, these technologies enhance grid resilience, optimize energy distribution, and support integration of renewables.

Creating a smart #EnergyGrid requires balancing electricity generation and consumption while maintaining reliability. via @insightdottech

The Road Ahead for Renewable Energy and Smart Grid Development

The path to a smart energy grid is paved with collaboration and innovation. Partnerships among hardware manufacturers, software providers, and utilities are essential for developing interoperable solutions. Standardization efforts, like those led by global alliances, play a crucial role in ensuring the entire ecosystem progresses cohesively.

Modernizing the grid must remain affordable and equitable. The cost of these upgrades will ultimately impact consumers, making efficiency and cost-effectiveness paramount. Governments and regulators can provide incentives for renewable energy and grid investments, while fostering policies that promote long-term sustainability.

Building a Smarter, Greener Future

The transition from a traditional energy grid to a smart power grid is vital for a sustainable future. While challenges exist, the opportunities to create a resilient, efficient, and secure energy system are immense. Through digitization, automation, and collaboration, the grid can successfully modernize to power homes and businesses.

“The grid of the future will be more than a collection of wires and substations,” says Layan. “It will be an intelligent, interconnected system that enables us to meet the challenges of climate change, energy security, and economic growth.”

By embracing these advancements, the smart grid will power a greener, more connected world—ensuring a sustainable energy future for generations to come.


This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Smart Retail’s End-to-End Transformation

Smart retail has been a buzzword for years, but we’re now at a critical inflection point. The industry grapples with rapid innovation, shifting customer expectations, and mounting operational pressures. Retailers aren’t looking for just incremental upgrades—they need a complete digital transformation that integrates automation, AI-driven insights, and real-time data to stay competitive.

In this podcast episode, we explore the future of smart retail and how emerging technologies drive the industry forward. From AI-powered customer behavior analytics and personalized digital signage to voice AI for seamless wayfinding and transactions, we examine how innovation reshapes both the store and the customer experience.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guest: ASUS

Our guest this episode is Silvia Kuo, Director of Business Development and Partnerships at ASUS, a computer hardware enterprise. At ASUS, Silvia not only explores new opportunities and technologies but also focuses on co-developing solutions with various partners. Before joining ASUS, she served as a Sales Manager for EMEA Technology Partners at Gorilla Technology Group.

Podcast Topics

Silvia answers our questions about:

  • 1:18 – Challenges smart retail still has to overcome
  • 4:19 – Enhancing employee experience with AI
  • 6:59 – Using technology to make smart retail a reality
  • 10:04 – Integrating technology into existing infrastructure
  • 12:33 – Real-world use cases and lessons learned
  • 15:58 – Leveraging industry expertise
  • 19:16 – Smart retail’s ongoing evolution
  • 21:40 – Final thoughts and key takeaways

Related Content

To learn more about the evolution of retail, read POC Shows What’s In Store for Retail Analytics. For the latest innovations from ASUS, follow them on Twitter/X at @asus and LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network-technology trends and innovations. As always, I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re exploring the world of smart retail with Silvia Kuo from ASUS. Hey Silvia, thanks for joining us.

Silvia Kuo: Hello. Nice to be here. Thank you, Christina.

Christina Cardoza: Before we jump into the conversation, what can you tell us about yourself and what you do at ASUS?

Silvia Kuo: Yeah. So, I’m the Business Development Director at ASUS for the EMEA region, and I basically—my job, what I do is I look for new projects to engage in. I also look into what technology we should include in those projects. And the other thing I do is also I run the partner program, which is essentially looking for different partners to create solutions together and offer them to the market.

Christina Cardoza: Great. Excited to dive into some of that. I know ASUS has a lot of solutions and a lot of ways you can help retailers or partners in this space. But I wanted to start off the conversation, because we’re talking about smart retail, and I feel like this is a key word that has been floating around for a couple of years now. So, when we say smart retail and when we talk about smart retail, what do we mean? Are retailers there yet? Or, what are the challenges that they’re still facing to get to smart retail?

Silvia Kuo: I think that smart retail has been around for a while and everyone’s talking about it, but I think that there is this—of course, as with any change, right?—there’s a bit of resistance, but there is also this imminent change that you can feel. And I think that the industry has been probably trying to look for a direction—that’s my feeling—but there is a lot of innovation that we see, and they are adopting it. And I think that a few years from now retail might look a bit different from what we know it, how we know it, today.

Christina Cardoza: One thing I’ve noticed with smart retail, and I’ve noticed ASUS stands out a little bit in this space, is that when retailers do their transformation, it often happens in silos or one at a time. They do self-checkout or they do inventory. What I love about ASUS is the company has a range of products from end-to-end to really go from inventory to behavioral analytics to space optimization.

So, what can you tell us about some of the smart retail solutions out there and how that can help really transform a business from front end to back end rather than just one area at a time?

Silvia Kuo: Right. And I think the reason why we have this more comprehensive, if you will, or holistic type of approach is really because of the nature of a computer company, right? So, we are the brain, let’s say, behind all of this innovation; it’s where it runs. So, because of that, we have had to look into the different solutions as a whole. So when we offer something, it’s more of an end-to-end. We are looking into a problem and then seeing what components to put into a solution to solve this problem. And that is why it’s this holistic approach.

And I think that in retail, some of the things that we are doing today fall—I think it always falls into two categories: Either it is gathering data in order to allow us to analyze this data and further make data-driven decisions, or it is automating a process. And I think that in retail it is very important, because we’re seeing that one of the main challenges is that there’s a lack of personnel all around the world, right? And people not wanting to do this kind of operational, routine type of job.

So what was happening is that the retailers are quite desperate in trying to find a solution to make this less costly and also to make people be able to do a job that is more meaningful for them. So, for example, instead of filling up the shelves, they’ll be managing systems that will fill up the shelves for them. Or instead of looking at and completing Excel forms to say, “Okay, what do I need to buy for the next quarter?” it’ll be analyzing this through a computer and making a decision, a final decision, or reviewing that. So this kind of management of systems is how we see where the industry is going.

Christina Cardoza: And I love how you put that, because I feel like a lot of times when technology like this comes into play, a lot of people are worried that it’s going to replace their jobs or what the technology is going to do, but it sounds like it’s really taking away the mundane tasks and making their role more valuable and making their position within a retail store more valuable.

Silvia Kuo: Of course, of course. And I—and we see that overall. Like we see that now we want people to be more engaged and to have more meaningful—because I think it comes along with the introspection that we as human beings have, we are becoming more and more aware of our psychology and things like meaningfulness in life. And I think the technology allows us to explore that side even more.

So, like you say, it’s—a lot of people feel that it’s a threat, especially AI; it’s going to be a threat, replace all of our jobs. I think that it’s a shift. It is like everything in technology, even the industrial revolution a long time ago—people were scared, but actually what happened is that it improved production, it improved people’s lives.

So, for example, when I see logistics, there’s a lot that we can do still in logistics. Sometimes still today the process is very manual. So, things like tracking the stock when it comes in: How much do I have? Instead of sending someone to count it, I have a system that can just naturally do it quickly for me. And then I can also have an alerting system that lets me restock the shop very quickly without me having to send someone to see if the shop is empty. So that’s just for logistics.

If we go into, for example, the space optimization, now with computer vision we can look at the whole store, and we can have a heat map of which are the areas that people visit the most, because of the layout or maybe because of the type of brand that I put there. And with all this knowledge then I can—for example, during Q4, when it’s the high season for certain stores, I can put the best brands. Or I can adjust the rent—let’s say as a retailer—the rent or the fees or what I offer to my vendor according to this.

So there’s lots of things. Or, for example, when we see queues, long queues—that is really something that we want to avoid, right? We can look at this, and instead of having three checkout points, we can say, “Okay, now there’s more people, so let’s open three more.” So these kinds of solutions are what we are trying to help out with.

Christina Cardoza: Absolutely. And I just think about some of the more advanced technology or solutions in my everyday life; it just becomes the norm. I don’t even think about things that I’m using. So I can’t wait to see when that becomes more mainstream and more widely adopted across retail stores. I don’t think workers will really even think of it; it is just going to become a new way of working.

But of course there’s a lot of complex technology and things that go into making that happen. Some of the other keywords out there: “AI,” “computer vision,” and “edge computing.” You touched on computer vision a little bit, but I’m curious, what are the roles of these advanced technologies? What part are they playing in these solutions and making retail really smart retail?

Silvia Kuo: Yeah, I think these are all technologies—so, AI, computer vision, edge computing, they’re all technologies behind something, so it’s more of a horizontal. So when I say this is, for example, AI can help the engagement of customers, because nowadays we see that stores are not really a place just to purchase but more of a customer-experience, a brand-experience space.

In that aspect we have seen things like digital signage that is targeting the audience. When I see that, for example, there’s a group of people that is around this age, they’re male or female, I can instantly show them something that is targeted to them or even do something interactive—kind of ask them questions. This is what AI is doing now: analyzing the situation in real time and giving feedback and interacting with customers.

And something more behind the scenes that AI is doing, it’s, for example, analyzing data. So when we—AI works based on data, so if we have more data—sometimes in the past we had data, but we didn’t know what to do with it. Now what AI is doing is analyzing this so that over time, over years and months, I can understand what the behavior of my audience is in this area as opposed to another district in the country, and adjust the, say, the stock according to this.

Computer vision is very interesting, because also it’s a horizontal technology where I can apply it, for example, for recycling. Now we see a lot of retail grocery stores that have the recycling machines, and they’re determining what kind of empty product we are putting into the machine and giving, for example, in some cases, in Europe, they give you money back, right? So you can spend it in the store.

Another thing is security. We’re doing security with this, or doing checkout: when someone has a product without a barcode, instead of walking back to the produce section or having to weigh it, the system can use computer vision and recognize what the product is.

And edge computing, I think, is something that allows all this technology to have less latency, because if we had to move all of this analysis back to the cloud or back to a data center—first, it’ll take a long time, and it consumes a lot of power and also data. So, without having to go back to the central network, we are doing this compute in a distributed way, even when there’s no internet—for example, if it’s down for a few hours, I can continue to use it, and then when I’m back on the network I can also update new features, etcetera.

Christina Cardoza: It’s amazing when you think about the data aspect that you just mentioned. I feel like, a couple of years ago, with all that data that was being collected, you’d have to wait until the end of every quarter or the end of the year to really analyze that data and be able to make changes. And by that point a lot of opportunity came and went. And so with this technology—AI helping with the data, computer vision, edge computing—you’re able to now make these changes in real time, when it actually matters. And that has just been improving the business even more. So that’s great to see.

But I’m curious, how can businesses successfully integrate some of these technologies we’ve been talking about into their infrastructure, when we’re talking about inventory and end-to-end self-checkouts, smart queues—what type of infrastructure or investments are necessary to start implementing some of this technology and these solutions?

Silvia Kuo: Right. Of course there is some investment involved, because it’s a technology that wasn’t there before. But as much as we can we always try to use what is there. So, for example, when you mentioned the cameras—computer vision uses cameras a lot. We always try to use the cameras that they already have doing security, but we adjust them so that, at the same time that they’re doing security, we’re using the same video stream to analyze on the edge computer, for example, and do some analysis afterwards. So we are trying to reuse whatever infrastructure is there already.

The compute—usually there is, if it’s a new technology such as AI or computer vision, it needs a lot of computers. So there might be a need of putting in a computer. But many times many stores have their own little data centers, if you will. And what we can do is collect the data on the edge, let’s say, in the stores, pull it back into these little data centers, so all of the big data analysis is happening there or is happening—whatever decision-making or statistics is happening afterwards—there.

So we reuse these kinds of things. We don’t really have to put a lot of hardware into it. But yes, a lot of the newer technology needs, for example, some sensors or some cameras that have certain angles that were not there, or some signage, for example, in order to communicate with the customer. These are investments that have to be made, but as much as we can, we try to reuse.

Christina Cardoza: That’s great. I always love the camera example, because so many businesses, they want to make sure they’re future-proofing any investments that they do make. And when they were purchasing these security cameras decades ago, I don’t know if they imagined that they would be being used in these capacities. So it’s amazing to see just how much technology has evolved and how much we can leverage some of that existing infrastructure to really make these changes across the store.

I wanted to shift over now. We’ve been talking a lot about retail stores in general and different solutions that can be applied, but do you have any customer examples or use cases you can share of businesses that have actually leveraged these solutions we’ve been talking about? What problems they were experiencing and how the company came in and helped them.

Silvia Kuo: Yes, yes. I think there’s one that is more of problem-solving, like you mentioned, and the other one is more of an enhancement: I have two examples. So the first one was a—this retailer, this grocery retailer—was looking for an automated way to alert them of empty shelves, especially in the fresh produce area, because it was very manual, and they also wanted to combine that with the pricing. So they wanted to adjust the pricing throughout the day depending on the performance of that product that day.

So what we did was we used computer vision to first identify what produce was on the shelf, and throughout the day it would take different images and do analysis at certain intervals in order to determine the level of stock. So this would create an alert in their system, the central system, but also for all of the operators, so they will see, “Oh, okay, I have to refill the apples and the oranges now”—and all without having to do this walking by.

And another thing that we did, based on the same AI technology of recognition of the product: we also automated their pricing. So if, for example, the apples were not selling very well that day, and at four o’clock we wanted to start clearing them, we would automatically change the e-tags below the apples and change the information and the price. And we did this already in several of their stores, and they are planning to roll it out. So that’s a good example.

The second example is more about enhancement and improving understanding of the customer. This was a technology that we developed together with a software vendor of ours in France. What they did is they put a sensor and a camera in order to understand how people interact with the products. A lot of brands were interested in this. It was also featured at NRF in New York, one of the biggest trade shows for retail.

What they were doing is they were looking at how long do customers stand in front of the brand, which product they pick up, whether they look at it, how long they interact with the product, whether they take it, did they buy it, or whether they put it back. So there was a lot of data that was accumulated there, and they got a lot of interest from brands themselves, because brands want to understand, when we launch a new product, how do people like it? Or is it not such a big deal, right? So the brands themselves were trying to contact these people to use the technology, but also the furniture makers for retail stores were very interested in offering this to the brands.

Christina Cardoza: I love those examples, especially the one about the fresh produce and apples, because it showcases how this is not only helping businesses and their efficiency and their operations, but then also the customer experience is improving as well: They’re making sure that their produce is always fresh and that items are always there when they’re going to look for it. So that’s great to see.

You mentioned in the beginning how ASUS works as a brain and works with partners to get some of these solutions in stores and to make some of these things happen. So I’m curious about those types of partnerships, especially I should mention insight.tech and the “insight.tech Talk” are sponsored by Intel. But I’m curious what the value of your Intel partnership and technology is in making some of this happen, and additionally any other partners that you’re working with to make this happen.

Silvia Kuo: Right. I think that Intel has been a long-term partner to ASUS, even right at the beginning, when we were just doing consumer computers, laptops. And because of this long-term relationship, it has really been crucial when we—this department, the IoT department—was created. Because I think one of the advantages that ASUS has is that even throughout difficult periods where stock or supply chain was an issue, for ASUS it wasn’t so much of an issue because of this relationship and partnership. In many cases we are one of the first people that try out the new technology from Intel, so we’re able to implement them in a lot of the new products that we launch.

But at the same time I think that Intel is a very good partner in terms of when they’re developing something like AI, they are doing OpenVINO to implement these new features, and they ask ASUS to be sort of the testers of this. And we also market it out there, not just because of the partnership. I think it’s because we see across the board there’s lots of choices of technology and we offer different kinds, but we are also able to—when we see something like Intel coming into this space and trying to optimize and democratize it, because it’s not just about selling more computers, it’s about how do you make this accessible to people?

We see Intel doing that a lot, and they’re also very supportive to partners. They will, for example, engage us with the end customers. Many of the examples I gave you just now were actually people from Intel that introduced us and said, “Look, they tried this out, why don’t you see if that works for you?” So there’s a lot of good rapport with Intel in this aspect, and I think that’s what makes a good relationship.

And regarding the other part of your question, about the partner program that we run: I always say that in IoT it’s very difficult to do something on your own, because you have so many components, right? Even the camera makers—we see them optimizing their software in order to make AI more feasible, or trying to put the chip on the camera so that it’s easier for the edge computer to analyze more data.

So we’re seeing very good collaboration, and everybody understands this—that without partnership it’s hard to do everything alone. We created this partner core program as a way to work together: We do a lot of marketing and a lot of validation, but apart from that it really is a space where you can exchange projects and introduce each other to different customers and projects.

Christina Cardoza: Yeah, that’s great. That’s an ongoing theme at insight.tech, this idea of being better together and working with other partners, the idea that no one company can do it all by itself. I think that would be very difficult, and the solutions that retailers or end users would get would be very expensive and take a lot longer to update, work through, or advance. So it’s great to hear about companies like Intel and ASUS working together with other partners to make this possible.

It sounds like even though we’ve been talking about smart retail for a couple of years now, we still have a long way to go; we’re only at the beginning of it, and not all of these things are being implemented yet. So I’m curious, how do you anticipate this space continuing to evolve? Technology gets more advanced, you make more partnerships—where do you think smart retail is going?

Silvia Kuo: Right. It’s a broad question and, yeah, I wish I knew more, but I will try to guess. I think one of the things is that we will see a lot more automation of operations, like we mentioned first when we started. Second is that, as I said, we will see people having more meaningful roles, more interesting jobs. Let’s say it will be managing systems, right? And also I think that a lot of the brands and a lot of the retail spaces will really become showrooms; it won’t just be a place to buy things. I would even dare to say that in some of these spaces you would just place orders and they’ll ship to your home; you won’t even have to wait for the clerk to go and get the right size for you. It’ll be more of a showroom, a tryout room.

Another thing we are seeing is a lot of interactive devices and kiosks, and AI will help with this. It will help with the problem of having enough staff to attend to all of the guests, so you will be able to interact with the screens, with devices, in an easier way. We are seeing a lot of voice AI, for example, that is very accurate and even handles accents and slang. So, a lot of that is coming up. Also, there’s what you sometimes see in Asia, this obsession with making things faster and more seamless, right? And I think that will expand across the world. It will mean making the experience more seamless, waiting less time, and having a nice experience instead of, you know, waiting.

Christina Cardoza: Yeah, one thing customers don’t take well to, when technology is implemented, is when the technology doesn’t work or when they have to wait for it. But we’ll have to come back in a couple of years and re-listen to this podcast to see if any of these predictions were right. I can definitely see more meaningful roles for workers coming out of this, though, along with more meaningful customer interactions and customer experiences. So that’s great to hear.

We are running out of time, so before we go I just wanted to throw it back to you one last time, if there’s any key takeaways or final thoughts you want to leave our listeners with today.

Silvia Kuo: Yeah, I think I want to make a sort of call to anyone that is a solution provider that thinks they would benefit from partnering with ASUS—feel free to reach out.

Christina Cardoza: Absolutely. And for those who want to learn more about what ASUS is doing in this space or how smart retail is going to continue to evolve, I encourage you to visit insight.tech, where we continue to cover ASUS and other partners in this space.

So thank you, Silvia, for joining. It’s been a great conversation. Thanks to our listeners for tuning in today. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Dell, Intel Collaboration Advances Life Science Technology

AI in life sciences technology is nothing new. It’s widely used in fields like genomics, drug discovery, and personalized medicine. But now AI and machine learning are accelerating innovation across almost every aspect of the industry—in ways we could not have imagined.

Take, for example, mRNA-based Covid-19 vaccines, with their short development cycles and ability to respond to new variants. With millions of lives saved, it’s clear that AI-powered data analytics in medical research and development has an exceptionally positive impact on society.

As AI takes over a huge chunk of workloads, we see an exponential growth of medical research technology data, which is being processed, analyzed, shared, and secured—from the network edge to the core cloud. But the computing infrastructure required to power these massive workloads is complex. And while domain experts like research scientists and computational biologists don’t want to worry about what lies beneath their data, IT teams must.

Mindy Cancila, Vice President, Strategic Business Development of Dell Technologies, explains in a recent blog post: “The complexity of managing diverse workloads and data across a variety of environments has become daunting as organizations scale their efforts. As IT decision makers consider new hardware, software, and devices today, they’re less concerned with product specs, speeds, and feeds, and more with how these investments will drive ROI and deliver true business outcomes.”

Recognizing this shift, Intel and Dell have partnered to simplify deployment of edge-to-cloud hardware, software, and tools—empowering life sciences organizations to stay ahead of the curve and achieve their goals with greater efficiency and confidence.

“We’re enabling innovation with scalable performance, unwavering reliability, comprehensive support, and future-ready infrastructure,” says Alex Long, Head of Life Sciences Sales Strategy at Dell. “For example, I work with pharma companies deploying new instrumentation for genetics research. We help them plan for how to transition from handling gigabytes of data in a month to petabytes in a week.”

“We’re enabling #innovation with scalable performance, unwavering reliability, comprehensive support, and future-ready infrastructure.” – Alex Long, @DellTech via @insightdottech

Rightsized Life Science Technology Infrastructure

But as infrastructure demands grow, IT organizations continually face pressure to do more on a budget without sacrificing the pace of innovation. They are managing not just the rapid growth of data creation and sharing but also the computing infrastructure to support it.

Intel and Dell deliver rightsized technology designed for life sciences use cases that scales for the future without huge investments in new hardware and software.

One example of how organizations can manage costs is re-evaluating their assumption that heavy AI workflows must be developed and run on expensive GPUs. In the vast majority of use cases, Intel processor-powered Dell systems have computational capacity that reduces the need for GPUs both at the edge and in the data center.

“About 80% of the customers I’ve talked to have oversized the volume of GPUs they need and the processing power they’re actually going to use,” says Long. “There are a lot of workflows that should go directly to the CPU, which translates to lower CapEx and OpEx.”

Open Source: Fundamental to Healthcare Solution Development

Another challenge that GPUs can introduce is future vendor lock-in. Leveraging open source software like PyTorch and TensorFlow prevents that problem while also controlling development costs.

And it’s not just cost. The life sciences R&D community has a long history of using open source software, which plays a role in code validation, collaboration, flexibility, and investment protection.

Intel’s commitment to open source includes software tools and hardware optimizations that enhance the use of PyTorch, TensorFlow, and other software across its product line.
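As a rough illustration of what those optimizations look like in practice, here is a minimal sketch of CPU-only inference with PyTorch and the Intel Extension for PyTorch. The model and input are placeholders rather than a workload from the article, and the extension is assumed to be installed separately.

```python
# Minimal sketch: CPU-only inference with PyTorch plus the Intel
# Extension for PyTorch (ipex). The ResNet-50 model and random input
# are placeholders standing in for a real life sciences workload.
import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch
import torchvision.models as models

model = models.resnet50(weights=None)  # random weights; enough to show the flow
model.eval()

# ipex.optimize() swaps in CPU-friendly kernels (e.g., oneDNN) without
# changing the model's behavior.
model = ipex.optimize(model)

with torch.no_grad():
    batch = torch.randn(1, 3, 224, 224)  # dummy image batch
    logits = model(batch)

print(logits.argmax(dim=1))
```

The point is not the specific model but the workflow: the same open source code runs unmodified, and the hardware-specific tuning lives in a drop-in optimization layer.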

As innovation accelerates, life sciences R&D, clinical trials, and more become increasingly collaborative—achieving new advancements faster. This drives not just rapid growth in AI-generated data, but also an increase in federated learning across different domains, causing a massive need to secure data in new ways.

For example, sharing research data must be done in a confidential way. Dell and Intel take a layered approach to protecting data at the edge and in the cloud. Hardware- and software-enhanced technology, like Intel® Threat Detection Technology, Intel® Software Guard Extensions, and Intel® QuickAssist Technology, delivers a foundation to secure data on and off premises.

The Future of Medical Research Technology

Scientists, developers, and IT will continue to face complex infrastructure requirements well into the future. “Focus on the domain,” says Long. “With combined decades of experience in life sciences, you can leave much of the system design to Intel and Dell.”

This gives computational biologists the freedom to focus on their approach to gaining the most from AI and to handle the onslaught of data to maximum advantage. Even with expertise in AI-driven analytics, they must consider if and how to retrain models, leverage existing ones, or build something brand new.

“When you start working with these organizations, it becomes very clear that we’re just scratching the surface of what they need,” says Long. “When you talk to them about how they’re approaching this, that’s the ultimate end story. The Dell and Intel partnership offers computing platforms and infrastructure the life sciences segment requires to handle a massive increase in data workloads—from the edge to the data center.”

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Accelerate Efficiency with Industrial Digital Transformation

In today’s hyperconnected and competitive world, manufacturers face unprecedented challenges. To thrive, they must rethink their operations, embrace advanced technologies, and chart a course toward smarter, greener factories. A range of emerging innovations, including AI, computer vision, and edge computing, helps manufacturers optimize production, streamline operations, and achieve sustainability goals.

“Clients are asking, ‘How is AI going to affect what we’re doing in the plant?’ ‘How will it play in our overall industrial operations?’” says Mike Trojecki, Senior Director of AI Practice at global technology solution provider World Wide Technology (WWT). “Businesses need to start somewhere and those that don’t embrace emerging technologies will quickly fall behind.”

Industrial Digital Transformation Era

The adoption of these advanced technologies defines the industrial digital transformation era. It involves adopting AI and computer vision to enhance operational agility, reduce costs, and improve security. By integrating these tools, manufacturers not only improve revenue but also position themselves to better meet the demands of modern supply chains.

But that transformation can’t be achieved in isolation. WWT works with partners to overcome industrial digital transformation hurdles and fill in critical gaps. By analyzing a partner’s or client’s overall process, WWT can map out the necessary technologies and solutions to address key challenges.

By integrating these tools, #manufacturers not only improve revenue but also position themselves to better meet the demands of modern #SupplyChains. @Intel via @insightdottech

“We can work with clients to find and identify the quantitative and qualitative digital transformation challenges that are impacting their businesses. We look at the biggest opportunities to mitigate risk, decrease costs, increase margins, and grow revenue,” Trojecki explains.

Predictive Maintenance with Multisensory AI

WWT also helps clients modernize manufacturing and cut costs by implementing predictive maintenance solutions across the factory floor. Predictive maintenance is critical for minimizing downtime and preventing costly equipment failures. While traditionally it has been a manual domain of factory line workers with years of experience, today’s technologies enable the process to be more automated.

WWT recently assisted a machine manufacturer in minimizing equipment downtime using sensor data and computer vision. In doing so, the company was able to analyze machine health and take proactive measures before any issues occurred.

Similarly, AI-based video analytics solution provider iOmniscient uses a multisensory AI approach to predict potential issues, even with limited training data. By integrating data from multiple inputs, manufacturers can gain deeper insights into equipment performance, enabling timely interventions and reducing overall maintenance costs.

Another approach to predictive maintenance is digital twin technology, which creates a virtual replica of machine operations to alert users of any potential problems. For example, technology company Bosch GmbH and Prescient Devices, a leader in data engineering and IoT solutions, implemented this approach through the Bosch Digital Twin integrated asset performance management system. This system monitors machines and provides timely data for better decision-making.

“If machines go down unexpectedly, they can take multiple days to fix. With predictive AI analytics, managers can fix them during preplanned maintenance windows, so the production line would never go down,” says Prescient Devices CEO Andy Wang.

As AI reliance grows, so does the need for edge hardware capable of processing larger data loads. Manufacturers require compact, high-performance devices that can operate in harsh environments and support Time-Sensitive Networking (TSN). Companies like AAEON provide such solutions with their COM-RAPC6 and NanoCOM-RAP computer-on-modules, which combine compact form factors with exceptional power efficiency.

Collaboration at the Edge: Bridging OT and IT

Beyond technology, addressing cultural challenges and operational silos between OT and IT teams is a key step for achieving true industrial digital transformations. Companies like Red Hat, provider of enterprise open source software solutions, and Intel have demonstrated how their industrial edge platform can foster collaboration, break down barriers, and enable greater productivity.

“Businesses are a formation of people, and how those people operate the business often emulates system design. If you have poor collaboration with your IT counterparts or still experience siloed friction in the relationship, it will manifest in your systems—whether it’s a lack of resiliency or the inability to stay on schedule,” says Kelly Switt, Senior Director and Global Head of Intelligent Edge Business Development at Red Hat.

Driving Sustainability with AI and Advanced Technologies

As manufacturers adopt these technologies, they can pave the way for sustainability—a business imperative. By going green, manufacturers can reduce environmental impact, meet growing regulatory requirements, and align with customer demands for eco-friendly products. Real-time monitoring solutions enable manufacturers to identify inefficiencies, reduce energy usage, and extend equipment life.

NEXCOM is at the forefront of this transformation, helping manufacturers achieve sustainability goals with intelligent edge solutions. These solutions empower manufacturers to optimize energy usage, monitor emissions, and implement predictive maintenance strategies that can significantly lower carbon emissions. Such innovations not only address immediate environmental goals but also pave the way for a smarter, more connected future.

The Future of Smart Manufacturing

By embracing technology advancements, manufacturers not only address today’s challenges but also position themselves for long-term success.

As the industry continues to evolve, the importance of strategic partnerships and innovative solutions cannot be overstated. From enabling predictive maintenance with multisensory AI to breaking down barriers between OT and IT, these initiatives transform manufacturing as we know it—delivering smarter, greener, and more resilient operations.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI: The Secret Ingredient to Foodservice Success

There are some days when you want to take a proper break at lunchtime. Then there are the days when you want—or need—to just grab something and go. If that something should happen to be a salad or a container of soup—an item without an easy barcode, that is—your quick, on-the-run meal is going to be hampered by a slower checkout process (or if not yours, that of the salad-eater in front of you). But AI-powered solutions do exist for the foodservice space, beyond those you might find at a drive-through.

This is AI technology that can really improve your day-to-day quality of life in a small but significant way, not merely technology for its own sake. And it’s not just the customer experience that can benefit; there are advantages for a foodservice staff and for foodservice businesses as well.

Sergii Khomenko, Founder of touchless self-checkout solution provider Autocanteen, serves up an AI-powered self-checkout solution for us. He discusses challenges the foodservice industry wrestles with, efficiencies the Autocanteen technology can tap, and how the human touch is still important to hospitality (Video 1).

Video 1: Autocanteen discusses how AI is transforming the foodservice industry. (Source: insight.tech)

Please explain the benefits of self-service in foodservice today.

Self-service solutions like Autocanteen, they obviously help to maximize and enhance the capacity of teams during peak times. Before an AI self-service solution, it wasn’t possible to do self-service for transactions within hospitality because some of those products, they’re not labeled and there are no barcodes on them. But with the technologies of computer vision and machine learning, it’s possible to identify those products very, very quickly—in a very similar fashion to the way humans do it—and then present the total and automate that process.

Operationally, the main challenge is that people tend to get hungry at very similar times, and so they all turn up for lunchtime or for breakfast, and it’s hundreds or thousands of people. You need to serve them with food quickly, and then you also need to complete the transaction very quickly so that the food doesn’t get cold and so that people don’t waste their time.

Also, imagine if you’re short on staff that day. Labor is always a factor for any operation, for any team, so it’s important to think how you can utilize the capacity of your staff in the best possible way to enhance speed and to enhance customer service.

Those are, I think, the main challenges that the foodservice operators tackle during service times, and with our solution we help to remove some of that load from their shoulders and help them to get through that service more quickly and more efficiently.

How does the Autocanteen AI-automation solution work?

Traditional checkout points, manned checkout points, they take up to 30 seconds per transaction, and that is not very fast—only two customers per minute checking out on each checkout point. With an automated solution we can do it three to four times faster; one checkout point can process four to six customers per minute.

That’s where the automation really, really shines, and that’s where the Autocanteen solution can really bring benefits—otherwise it’s tricky to get the same speed and efficiency. Our terminals can identify every item in the transaction within a second. As you place the tray or the retail items in front of the terminal, the algorithm will take the inputs, and within a second it will prompt the total on the screen. Then of course it takes a couple of seconds for the customer to acknowledge the total and pay it, but we can still end up with a sub-10-second transaction from start to finish, and with the receipt.
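To make that flow concrete, here is a hypothetical sketch of the checkout logic described above: identify every item on the tray, look up prices, and prompt a total within seconds. The item detector, price list, and function names are invented for illustration; Autocanteen’s actual pipeline is not public.

```python
# Hypothetical checkout flow: classify items in a camera frame, total
# the prices, and return what the customer sees on screen.
PRICE_LIST = {"soup": 3.50, "salad": 4.20, "espresso": 1.80}

def detect_items(camera_frame):
    """Placeholder for the real computer vision classifier, which would
    run on the terminal and return one label per recognized item."""
    return ["soup", "espresso"]  # canned result for demonstration

def checkout(camera_frame):
    items = detect_items(camera_frame)
    total = round(sum(PRICE_LIST[item] for item in items), 2)
    return items, total

items, total = checkout(camera_frame=None)
print(f"Detected {items}; please pay {total:.2f}")
```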

“#Hospitality, it cannot run without people, and it shouldn’t; I’m a big believer about that. However, those functions that are very repetitive and monotonous, where they can be automated, they will.” – Sergii Khomenko, @autocanteen via @insightdottech

Some of the locations that we support, they cater for 2,000 people. Imagine what kind of operation it is to feed all of those people within one or two hours. Using the Autocanteen solution, they can have 20 to 30 transactions land within a minute at peak time with just a few terminals; to achieve the same speed of service it would require a lot more manned till points. So the numbers just speak for themselves.

If the checkout process is much quicker, it only increases customer satisfaction because it’s done in a very quick and efficient way and the food doesn’t get cold. Some people see it as a sort of magic machine that processes their transactions.

And imagine how powerful staff members can feel. Previously they could only process two people per minute, but now they’re just keeping an eye on a much more efficient process. We pretty much give the team more eyes, if you like, to do the job and give them more processing power. They can do it all by themselves, just keeping an eye on the flow. So there are multiple benefits of such an implementation.

What type of technology makes this automation possible?

Our solution relies fundamentally on computer vision and machine learning to identify meals or retail items very, very quickly and efficiently. Computer vision feeds the input into the algorithms, and then comes the analysis, the decomposition, the classification, and the learning. The terminals, they are all interconnected within the same location, within the same account; once you’ve taught one machine, the other machines can utilize the same knowledge.

And all the terminals are managed via a web-based admin panel, so you can make changes to add products, make pricing amendments, view reporting, or see how your sales are going. That’s all available at your fingertips, and you get the information synchronized pretty much instantly. That’s what the technological offering looks like, and those are its fundamental components.

What about the value of leveraging technology from partnerships?

We are super thankful to Intel and to that partnership: We rely on their components in our software and our hardware. And one of the Intel components that we use is OpenVINO. It’s blazing fast, and we can pretty much make decisions and computations on the fly. It really shows the difference, comparing it to other frameworks that we’ve tried before, and we can only recommend it highly. OpenVINO has been an integral part of our solution, for sure.
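For readers curious what an OpenVINO inference call looks like, the sketch below shows the general pattern with the current Python API. The model file name and input shape are assumptions for illustration, not Autocanteen’s actual model.

```python
# General OpenVINO inference pattern: load a model, compile it for CPU,
# and run a single frame through it. The model file and input shape are
# illustrative placeholders.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("food_classifier.xml")  # hypothetical IR model file
compiled = core.compile_model(model, device_name="CPU")

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input
result = compiled([frame])[compiled.output(0)]

print("predicted class:", int(result.argmax()))
```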

And apart from the partnership with Intel, we have others that help us with things like payment processing. On the hardware side, we work with vendors such as Elo Touch, which also has great products and a great platform that we run on. It also has Intel processors, and we prefer to run our applications on those processors. So this Intel partnership has been super helpful during our whole journey.

Can you share any of your use cases or customer examples?

There is one location in central London that processes about a million pounds a year and serves hundreds of thousands of customers. They saved a lot of labor effort that used to be needed to process those transactions, and now those people can focus on better customer service or on helping elsewhere, instead of being at the tills. In addition, that company measured that the Autocanteen solution saved thousands of hours in customer queuing time.

Restaurant Associates—part of Compass Group—is another great partner in the UK. And Restaurant Associates just won an award at the Cateys, a prestigious awards program in the UK for the catering and foodservice industry. The award was “Best Use of Technology” by a foodservice operator, and it was for the implementation of our Autocanteen solution. So that was fantastic recognition within the industry, obviously, that it was the best solution on the market.

Our journey started in 2020, during that first pandemic year. At the time, Aramark was looking for a fast and touchless AI self-checkout solution, and we had just that. Fast-forward four years, and our terminals, they’re helping operators such as Compass Group, Dussmann, Delirest, and others to enhance their foodservice. It’s also within banks, insurance companies, the Ministry of Defense, factories, warehouses, entertainment. So it’s been a great journey.

Are there other industries that these self-service capabilities can be applied to?

We are already helping some retail sites to enable either fast transactions or just 24/7 capability for unattended transactions. For example, there are micromarkets that just have some products and a couple of terminals that customers, say, hotel guests, can use at any time.

How do you envision the foodservice industry changing?

If we look at the future through the prism of automation and what’s going to happen in the coming years, of course we cannot say that every function is going to be automated. Hospitality, it cannot run without people, and it shouldn’t; I’m a big believer about that. However, those functions that are very repetitive and monotonous, where they can be automated, they will.

We are bringing this into play for transactional functions right now, of course, but we can also see these efforts happening in the kitchen, in the back of the house. There are actually some companies that are already offering robotics solutions for making meals, preparing pizzas, and so on. So, repetitive tasks, they will be automated, but technology is not going to entirely replace people in hospitality, that’s for sure.

Related Content

To learn more about self-checkout, listen to Serving Up the Future of the Foodservice Industry and read AI Self-Checkout Redefines Food Service Efficiency. For the latest innovations from Autocanteen, follow them on X/Twitter at @autocanteen and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

AI at the Edge: Take the Leap in Healthcare

What if the future of healthcare weren’t about replacing clinicians but empowering them to do more, faster, and better? From addressing clinician shortages to processing massive volumes of medical data, AI is rapidly moving beyond diagnostics and imaging into areas like workflow optimization, multimodal input, and real-time decision-making.

In this podcast, we dive into how AI scales at the device level, enhances patient outcomes, and uncovers actionable insights from one of the largest data-generating sectors in the world. Discover how edge computing enables faster, more secure data analysis right where care is delivered, and learn how these advancements shape a more efficient and innovative future for healthcare.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guests: CCS Insight and Intel

Our guests this episode are Ian Fogg, Research Director of Network Innovation at CCS Insight, and Alex Flores, Head of Global Health Solutions Vertical at Intel. Ian joined CCS Insight in 2024, where he leads the research firm’s analysis of network infrastructure. Alex has been with Intel since 2002. In his most recent role, Alex and his team focus on helping partners and customers solve complex problems using Intel technology.

Podcast Topics

Ian and Alex answer our questions about:

  • 2:01 – The latest CCS Insight healthcare AI report findings
  • 3:53 – Healthcare industry trends and observations
  • 5:34 – Adopting technologies safely and securely
  • 7:57 – How AI at the edge addresses healthcare data
  • 12:28 – Strategies to bring AI into the healthcare industry
  • 14:25 – The value of working with Intel
  • 16:28 – Examples and case studies of healthcare transformations
  • 20:08 – How AI usage will evolve in healthcare

Related Content

To learn more about healthcare at the edge, read about the latest innovations in patient care, monitoring, and wellness. For the latest innovations from CCS Insight, follow them on X/Twitter at @ccsinsight and LinkedIn. For the latest innovations from Intel, follow them on X/Twitter at @intel and LinkedIn.

Transcript

Christina Cardoza: Hello, and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network-technology trends and innovations. As always, I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be discussing AI in healthcare with our good friend Ian Fogg from CCS Insight and Alex Flores from Intel. Hey, guys, thanks for joining us.

Alex Flores: Thank you for having me.

Christina Cardoza: Before we get started, we’d love to learn a little bit more about you and the companies that you work at. So, Alex, I’ll start with you. What can you tell us about yourself and what you do at Intel?

Alex Flores: Sure. So, my name is Alex Flores. I’m the Director of the Health Solutions Vertical at Intel. At Intel, we’re not clinicians or former clinicians; we’re engineers and we’re technologists, and we’re driven by the intersection of technology and healthcare. And what we do is we work with the ecosystem to look at how some of the biggest challenges can be solved with technology.

Christina Cardoza: Great. Looking forward to diving into that in just a bit. But before we get there, Ian, you’ve been a friend and guest of the show numerous times now, but for those of our listeners who have not listened to those other great conversations we’ve had on the “insight.tech Talk,” what can you tell us about yourself and CCS Insight?

Ian Fogg: So, CCS Insight, we’re essentially a technology-research firm, an industry-analyst firm. I’m a Research Director here. Our role, really working with insight.tech, is to interview key players in the market, research some of the changes—like, for example, AI arriving in healthcare—and then communicate those and write those up in a form that’s easy to pick up and use.

Christina Cardoza: Of course. And on that note, CCS Insight has a research paper coming out on this very topic, which is why we have you joining us today. So, what can you tell us about that report, and if there were any particular findings in this space that stood out to you?

Ian Fogg: I think just how extensive AI usage in healthcare already is. It’s something that has really arrived in the popular mindset, in the mainstream media, only in the last couple of years. But it’s clear that in healthcare it is well embedded, it’s widely used, and it’s growing across all categories. I mean, one of the striking statistics we found in the research for the report is that as of August 2024 there were 950 AI-enabled medical devices approved by the FDA across categories. That’s an enormous number, and of course it’s growing all the time.

I think the other thing that’s really striking about what’s happening here is how much it’s moving from diagnostics and imaging and research into other parts of the healthcare ecosystem. So, organizational tasks, room management, tying together disparate systems, and also things like multimodal input—so, transcribing conversations that would otherwise never be recorded. There’s just this enormous, burgeoning range of activities really right the way across the sector.

Christina Cardoza: It certainly is interesting, and you bring up a good point: It’s not only the devices, and not only things directly related to healthcare; there are places in hospitals, organizations, offices, and buildings where we can add AI within the healthcare space to really improve operations and reduce inefficiencies. And, like you said, a lot of this has been ongoing; we’re only hearing a bit about it in the media. I think that’s some of the best implementation of technology: when it’s happening, but as a consumer or a user you don’t see it happening right up front.

Alex, I’m curious, based on some of the findings Ian just mentioned, is that what you’re seeing in the space from an Intel and engineering perspective?

Alex Flores: Absolutely. We are seeing AI being rapidly adopted in healthcare, and it’s really going across the board. When a patient is registering or checking in, for example, there are AI analytics going on in the background, whether it’s pre-filling your forms and so forth or gathering data from past visits. Or in the actual clinical workflow—whether a patient is receiving some type of care or the doctor, for example, is transcribing notes. So it goes across multiple workflows.

And I think what’s interesting is a lot of it is really behind the scenes, which is where we want it to be, because ultimately what it’s doing behind the scenes is impacting the clinician’s workflow, allowing them to do their job faster, better, easier, so they can spend more time with the patient.

Christina Cardoza: That’s great. And those benefits are a lot of things that I’ve seen writing for insight.tech and for different industries—manufacturing, retail—trying to get those benefits from AI. I think healthcare is an interesting space; it presents a lot of interesting challenges and complexities just because you’re dealing with a different environment, regulations, patient-sensitive data.

So, can you talk a little bit about how the healthcare space is able to adopt these technologies in a safe, secure, efficient way?

Alex Flores: Yeah. Before I jump into that, I think there are a couple of data points I wanted to mention. I think what’s unique about healthcare is—a lot of people don’t realize this—that from a data perspective, roughly about one third of the world’s data is being driven out of healthcare. And then there’s evidence out there that maybe roughly about 5% of that data is actually turned into actionable insights. So there’s this tremendous opportunity to use AI to really kind of unlock those insights from that data.

The second thing is you layer in some of the macro trends that are happening globally across healthcare. Those include an aging population; they also include people getting sicker: more and more people being diagnosed with multiple chronic diseases. And then you layer in the fact that there’s a global shortage of clinicians—both doctors and nurses, for example.

So with that, the need for AI becomes even more important, and its rapid adoption becomes more important, because it’s allowing clinicians to increase their efficiencies—whether it’s the workflows, whether it’s being able to triage patients, and so forth. So for me, that’s where AI is going to be crucial in order to continue to help alleviate some of those issues and really give our clinicians the ability to do that. And then, if it’s implemented correctly, you don’t have to worry about some of the regulatory concerns. Again, it’s really there to benefit the clinicians so that they can focus on the patient and patient outcomes.

Christina Cardoza: That’s an interesting perspective on it. I want to go back a little bit to the data points that you were talking about—all the data that’s coming from the healthcare sector. And then I imagine with devices coming online, or more devices being AI-enabled, that’s just giving us even more data, which I’m sure AI is helpful for, being able to sort through some of that data.

But Ian, I’m curious if you can touch on this growing amount of healthcare data that we have and how AI and AI at the edge is really going to come into play—if there’s anything from that report you found in this space.

Ian Fogg: I think there’s a few things here. I think, just touching on something that Alex mentioned there, I think what AI is doing in many areas is not replacing clinicians: It’s making the clinicians more efficient; it’s taking load off them. And you can see that in the way that data is being analyzed. You can see data volumes going up enormously.

One study, I think, was saying that the size of a CT scan could be 250 megabytes for an abdomen, a staggering one gigabyte for the heart; digital pathology could be even greater, if you’re looking at cells of 2.5—those are enormous, enormous amounts of data for a single scan. If you compare that with a smartphone camera, that might be a five-megabyte image.

And one of the other things that’s striking, though, is that you can’t use the same techniques to compress medical-imaging data that you can use for a photograph, because the tools used to compress a photograph are lossy tools; they lose data, and they lose it based on what the human eye doesn’t perceive. They’re perceptual-compression algorithms. You can’t do that for medical imaging; you have to look at the full image, because you need all that detail so you can spot irregularities in the scan. And that just makes the challenge even harder.
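A tiny sketch of the distinction being drawn here: a lossless codec round-trips every pixel exactly, while a perceptual codec such as JPEG discards detail. The random array below merely stands in for a scan slice.

```python
# Lossless compression round-trips medical pixel data bit-for-bit;
# a perceptual (lossy) codec such as JPEG would not. The "slice" here
# is random dummy data, not a real scan.
import zlib
import numpy as np

slice_ = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)  # 12-bit-style values

compressed = zlib.compress(slice_.tobytes(), level=9)
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.uint16).reshape(slice_.shape)

assert np.array_equal(slice_, restored)  # every pixel survives intact
print(f"{slice_.nbytes} bytes -> {len(compressed)} bytes, losslessly")
```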

Then you’ve got—well, you’ve got this enormous amount of data. AI has two slightly competing implications there. One is that it means you can analyze that data quicker, so you get an efficiency-of-speed benefit. But one of the companies we talked to framed it the other way around and said, “Look, actually, because you’ve got this AI tool that can analyze more data, what you can actually do is analyze a greater part of a biopsy.” Which means that if there are just a few cells that are irregular in a cancer scan, you are more likely to spot them because you’ve scanned a bigger sample. And that means your scan is more accurate, which means you’ll identify problems and healthcare issues earlier, and you’ll save costs and load on the healthcare system down the line. So there are some interesting dynamics there that are striking.
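As a back-of-the-envelope illustration of the bigger-sample argument: if a small fraction p of cells is abnormal, the chance of missing all of them falls quickly as more cells are examined. The numbers are invented for illustration, not clinical figures.

```python
# If a fraction p of cells is abnormal, P(miss) = (1 - p)^n after
# examining n cells. Examining more of the biopsy shrinks the miss
# probability fast. Values are illustrative only.
p = 1e-4  # assumed fraction of abnormal cells

for n in (1_000, 10_000, 100_000):
    miss = (1 - p) ** n
    print(f"examine {n:>7,} cells -> chance of missing all abnormal cells: {miss:.4f}")
```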

The other piece is that when you want that to be a very responsive experience for the clinician, if you can do it at the edge rather than the cloud, you can make that faster and you can—it’s easier to make it private because the data can stay closer to the patient, closer to where it’s being captured.

And that’s a trend we’ve seen in many areas with AI, where we’ve seen things start in the cloud, and then as edge devices get more capable you move those things onto the edge devices for that performance benefit. So there’s all kinds of interesting dynamics here when you start looking at the data. Do you make it a fast experience? Or do you use AI to analyze a greater amount of the data to improve the quality of what you’re doing?

Alex Flores: Ian, you bring up some really interesting and valid points too. I think one of the things that I did want to emphasize is what you were talking about—speed, quality, and so forth—and I think that’s where a lot of new compute technology comes into play, as well, that’s working in the background.

So, for example, latency. When a radiologist brings up an image, if they’re triaging something, they want to be able to see that image in real time or near real time, because every second counts, obviously, for a lot of these healthcare providers. Technologies like compression and decompression, again, these are all working in the background. But as we work with a lot of the different ecosystem players, a lot of the leading medical-device manufacturers—that’s what Intel is doing kind of in the background, is really looking at how we can optimize their technology, their workflows, their algorithms, and so forth, so it gives the clinician that real-time or near-real-time experience that they need. And if it’s done correctly, it’s seamless, so they can go about their job as quickly as possible.

Christina Cardoza: And I’m sure when you’re thinking of cloud versus edge, it depends on the device or depends on the outcome that you’re trying to get. Do you need the real-time metrics and insights to have it on the edge? Or is it quality and being able to go through all the data and that being on the cloud?

So I know, Ian, you were talking about different approaches to dealing with things especially in healthcare—and so we have the edge, we have the cloud—but are there any other strategies that healthcare providers or people in the healthcare space or even patients can implement to bring AI into the healthcare industry and any best practices there?

Ian Fogg: So, two things jump out. One is just this sense of usage, that it isn’t just about imaging and scans and that computer-vision piece. We’ve seen a lot of examples now of AI being used to make the organizational aspects of healthcare, of the hospital, more efficient. Things like operating theaters are incredibly costly assets, and if you can schedule the cleaning and sanitization teams efficiently, you can reduce downtime between operations. And that came up in one of the interviews we did for the report.

The other thing I think that came up very strongly was what’s called federated learning—this idea that when you have a machine learning model, you want to maintain the privacy, but you want to use a diverse and broad data set to improve the quality of the AI model. And a federated learning approach means you can have potentially multiple hospitals or multiple healthcare facilities contributing to the model, but where the data that’s used to improve the model remains within the facility.

And that’s something which enables the AI model to become much more capable, much more sophisticated, but still works within the environment you need around privacy and management. We’ve seen that in some other areas, but it’s particularly relevant in the healthcare space.
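Here is a minimal toy version of the federated averaging idea described above, assuming a simple linear model and synthetic data: each site computes an update on its private data, and only the updates, never the raw records, leave the site.

```python
# Toy federated averaging (FedAvg): each "hospital" takes a gradient
# step on its own private data; the server averages the resulting
# models, weighted by sample count. Data here is synthetic.
import numpy as np

def local_update(w, X, y, lr=0.1):
    """One gradient step of linear least squares on one site's data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(n, 3)), rng.normal(size=n)) for n in (50, 200, 80)]

w = np.zeros(3)  # shared global model
for _ in range(20):  # federation rounds
    updates = [local_update(w, X, y) for X, y in sites]
    counts = [len(y) for _, y in sites]
    w = np.average(updates, axis=0, weights=counts)  # only updates leave the sites

print("global weights:", w)
```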

Christina Cardoza: It’s interesting, as you’re talking about these approaches and strategies and the benefits that the healthcare space gets with this, I’m brought back to what Alex was saying in the beginning: how you aren’t healthcare providers; you’re engineers. And then we have these healthcare providers implementing this technology.

So, Alex, from an engineering perspective, how can hospitals, healthcare providers deploy AI? What challenges or opportunities do they face in this space and working with an engineer like Intel to make it happen?

Alex Flores: Yeah, I think, oftentimes, when we’re working with customers in the ecosystem, really it starts with giving them choice, giving them options when they’re deploying AI. As Ian mentioned, a lot of organizations are deploying in the cloud, and that’s great; it’s a tried method. Innovation is happening all the time. The cloud has obviously been in use for decades.

There’s other organizations that are kind of taking a hybrid approach, right? They want the benefits of the cloud, but they also want the benefits of being able to access data real time or near real time at the edge. And then there’s other customers that are looking at an edge-only approach, where they’re concerned, as Ian was saying, maybe it’s cost, maybe it’s security reasons and privacy, and so forth.

So for Intel, what we really want to do is walk them through the different options, specifically when we get to the edge. What makes the edge so attractive is that access to the data, that access to patient data in real time or near real time so clinicians can take advantage of it—especially if they’re trying to triage a situation for a patient. So having access to that data, and being able to run the correct analytics on that data at that moment, becomes very crucial.

And then at that point they can determine, “Okay, do I need to save this data? Does it need to be stored in the cloud? Can I send it to maybe a local data center?” and so forth. So for us it’s really showing the customers the ecosystems, the choices, some of the benefits, and then seeing what’s best for their particular implementation.

Christina Cardoza: To paint a picture for the audience here, do you have any examples or case studies you can share with us? And you don’t have to name names, but anyone that came to you, you gave them these options, what the options were, what they chose, and what the result was of that?

Alex Flores: Yeah, we have many different examples, which makes my job really exciting, to be able to see some of these technologies being implemented. And I’ve actually had the benefit of seeing some of them in play. One that comes to mind is patient positioning. For example, when a patient is getting a scan and they lie on the table, what often happened in the past is that the patient wasn’t positioned correctly, which would force the technician to redo the scans. Then obviously everything takes longer, because the patient has to get rescanned, and the patient may be exposed to additional radiation that they shouldn’t have been exposed to, and so forth.

So, having AI-based algorithms that help the technology position the patient correctly before the scan—that’s one example. Second one is around accurate contouring of organs at risk. One of the major bottlenecks for radiation therapy is doing this contouring of these organs. And often based on the image quality and so forth there can be a lot of error in that. So having AI-based contouring is another area that really can help the clinicians speed up their process; it really can help automate some of these different tasks and so forth.

A last example that I have is on ultrasound, for example, and this is a real story. So, my wife, she had a procedure a couple years ago, and I remember we were driving back and I asked her how the procedure went. And she started describing a situation where she said, “The anesthesiologist came in and they used an ultrasound machine to identify the vein where the anesthesia would be administered.” And I got really excited because I said, “Oh, I know exactly what algorithm that was, because we were working with the ultrasound manufacturer to optimize that.” Essentially, that algorithm was used to help identify the vein, to avoid sitting there having the clinician do multiple insertions of a needle before finding the vein.

So those are just three examples; the list goes on and on. That’s why, again, I get so excited about my job, is seeing that practicality of the technology being implemented with a solution.

Christina Cardoza: That’s great, seeing it out in the wild, too, and having a personal experience with some of this technology that you’re working on. That’s definitely something I could have used, and I can’t wait to see it out in the real world. I’ve had three children, and with my second one they poked and prodded my arm—it was black and blue—because they couldn’t get an IV line. So with my third one, I was like, “I don’t even want one; don’t even put it near me.” So I really can’t wait to see some of this stuff be more widely adopted.

And we’re talking about diagnostics and imaging and other areas, but like you said, there’s so many options and so many different places AI and healthcare could go. You guys mentioned a couple times dictating notes for doctors and things like that. So, Ian, I’m curious, from a research perspective, where is this space going? Do you have any future-looking ideas on how AI usage is going to continue to evolve in healthcare?

Ian Fogg: I think this could evolve in many areas. I mean, that ultrasound example is a fascinating one, because ultrasound is a very cost-effective, very accessible type of scanning. And what you are doing with that is making a tool that’s been around for decades more effective, and that is augmenting an existing tool. It’s a fascinating example.

I think some things we’re clearly going to see: We’re going to see cloud-based AI continue, but we’re going to see increasing use of AI on the edge, too, for that responsiveness piece. The other thing I think we’ll see is that, where we’ve seen these very large AI models, we’ll see more smaller, focused models come to market for a particular task or use. They’ll become more portable; they’ll become even easier to put onto edge devices. And we’ve seen that in other fields outside of healthcare too.

I think we’ll see this multimodal element. So, multimodal means audio-based, video-based, still-image-based, and text-based. And that means both a way of interacting with the model but also what the model is able to understand and perceive about the world. So it might be able to use a camera to identify if there are queues forming or people gathering in certain parts of the hospital.

The transcription piece is interesting. That means you are capturing information that may otherwise never have been captured, maybe a patient-doctor conversation. And then you can summarize that conversation so you can add things into the medical record that maybe aren’t being captured at the moment, but also make it accessible and surfaceable and findable later.

I think there are other things beyond that. AI is very good at correlating trends across different data sets. This could be used in a public health context more. AI models can’t do causation, so when you find those correlations, you’ll still need to go and push them in front of a researcher, a clinician, to validate that it’s a real thing, not just one of those random correlations, but it will probably uncover underlying causes for conditions, new ways of approaching healthcare that we haven’t thought about before. And then there’s just so many uses. You can look at this really right the way across, everywhere that technology’s being used in a hospital facility, I think.

Christina Cardoza: Yeah, so many opportunities. And in this conversation we stuck a little bit to the devices and the implementation and the data aspect of it, but I’m sure we could go off in many different directions. We’re talking about the AI models, the size of the AI models, what they can do, but at risk of opening up a can of worms—because I’m sure we’ve only scratched the surface, and we could keep going on and on—I’m going to end the conversation here. But what I’d love to do is throw it back to each of you for any final thoughts or key takeaways you want to leave our listeners with today as they prepare for the AI evolution in healthcare, or what they can expect to see. So, Ian, I’ll start with you on this one.

Ian Fogg: I think one of the big things here is we’ve seen a lot of hype around internet-based LLMs. I would say, don’t be discouraged by the quality of those things like ChatGPT or Gemini or Claude. I mean, when you start looking at these medical AI models, they’re typically trained on pre-validated data sets, not the internet, so the accuracy level is much greater.

I think additionally we’ve seen some things come through where you can use one AI model to validate the output of another AI model, and that can raise the quality of the output too. So that quality piece that you might see when you are playing with stuff online isn’t applicable here; this is a different kind of space. And in some cases people are using in-house, open source–based models, so they have greater control and ownership of them too. So, don’t be discouraged by what you might see in other areas—on your phone or on your computer or on the internet. This is a different space. The quality here is much, much higher.

Christina Cardoza: Awesome. And, Ian, I’ll make sure to provide a link out to that report, so that those listening, they can learn and dig into some of the things that we were talking about even further.

Alex, before we go, any final thoughts or key takeaways you want to leave our listeners with?

Alex Flores: Yeah. I think Ian mentioned a really good thing, and that’s the miniaturization of AI. And essentially we’re going to continue to see that pattern, and we’re going to learn what is the right AI at the right time at the right device.

And the other thing that I wanted to also mention is when you’re doing AI at the edge on the device, power becomes a really important feature. Because if you think about it, it’s kind of a snowball effect. More power, you need the bigger fans, you’re going to need the bigger device, the new form factor, and so forth. Oftentimes you don’t need that; you can run the right amount of AI at the edge without needing to redesign or reconfigure your device. There’s new technology, new compute that allows you to do that.

So, as we continue to evolve, as more and more artificial intelligence goes to the edge, it’s going to be easier and easier to run at the edge.

Christina Cardoza: And I think it’s easier to deploy AI at the edge also. Like you said, these devices and this technology are getting smaller. It’s amazing what you can do with the infrastructure you already have, without a lot of extra hardware or equipment.

I can’t wait to see where else the space goes—other innovations and technologies from Intel. So, I just want to thank you both again for joining us today and for talking about this topic. Thank you to our listeners also for coming in and listening. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Telecom Compute Infrastructure at the Network Edge

From private 5G to online gaming, telecommunications companies see a growing demand for innovative services in a wide range of markets, driving new business and revenue opportunities. But these services require cutting-edge functionality such as real-time data processing, low latency, and energy efficiency, which can be difficult to achieve.

And deploying these systems in hundreds or even thousands of geographically distributed locations requires the ability to manage and operate deployments at mass scale.

With a localized, performant compute infrastructure—essentially a mini data center—telcos can overcome these challenges and gain the power and flexibility to support advanced workloads where they’re needed: at the far network edge.

To successfully build an edge computing solution, businesses need infrastructure. Typically deployed outdoors, the housing, power, computing, storage, and connectivity must be built for harsh environments.

And of course it needs software to process, store, manage, and protect data. NodeWeaver, a software-defined edge operating platform provider, aims to solve these challenges and minimize total lifecycle cost by addressing the main drivers of cost and complexity—acquisition, deployment, and management.

Finally, telecoms, enterprise organizations, and multi-tenant operators are not in the business of building out this type of infrastructure on their own. What’s needed is a customizable all-in-one solution that serves a diverse set of needs and doesn’t require a legion of engineers to deploy.

“This use case represents an important and tangible example of an #edge deployment, which many within the #telco industry have talked about but few have actually deployed.” – Mitch Kitay, @nodeweaver via @insightdottech

Partners Pilot a 5G Edge Computing Platform

Three leading technology companies—alongside Intel—collaborated on Street Edge, a street-side telecommunications system, demonstrating how a fully integrated, multiuse platform can be built and deployed. Together, Colt Technology Services, a global digital infrastructure provider; CIN, a communications infrastructure company; and NodeWeaver launched a proof of concept in the heart of London.

“This use case represents an important and tangible example of an edge deployment, which many within the telco industry have talked about but few have actually deployed,” says Mitch Kitay, Business Development Executive at NodeWeaver.

Each company brings its own technologies, products, and expertise to the telecom system.

The Colt Network global digital infrastructure platform enables a number of services such as internet access, point-to-point Ethernet, dedicated cloud access (DCA), and time synchronization, powered by Colt’s on-demand network-as-a-service (NaaS) platform. The connection to the Colt Network allows Street Edge to support far-edge compute alongside public and private 5G networks, Wi-Fi, and IoT networks.

For networking services, Street Edge can be outfitted with a fiber bundle that uses a fiber distribution patch panel for connectivity flexibility to Colt services or to dark fiber for building private fiber networks. In the pilot system, Colt configured a 100Gbps connection to the Colt Network, a 400Gbps global backbone network that connects more than 230 cities; 1,100 data centers; and 32,000 buildings worldwide.

The CIN street-side telecommunications enclosure, Street Arc, is purpose-built to support mobile telecom networks, Wi-Fi networks, edge and IoT networks, and edge computing (Figure 1). The configuration includes support for up to nine 4G/5G radios and multiple edge servers, as well as cooling, fiber, and power.

Installed in the cabinet is an Advantech edge network appliance—powered by 4th Gen Intel® Xeon® Scalable Processors—with eight 10GbE ports for connectivity to a co-located Cisco Systems network switch that serves as an interconnect for the networked systems within Street Edge.

NodeWeaver’s Edge Operating System, running on the Advantech appliance, orchestrates all Street Edge applications, delivering an edge-native experience with resilient, agile, and scalable compute clusters capable of running multiple virtual machines and container-based workloads.
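
NodeWeaver’s platform itself is proprietary, but the placement problem it solves can be illustrated with a toy scheduler. The Python sketch below is purely hypothetical: it greedily fits VM and container workloads onto cluster nodes with spare CPU and memory, a radically simplified stand-in for what a production edge orchestrator does. All class names, node names, and capacities are invented for illustration.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpus: int          # total vCPUs on the node
    mem_gb: int        # total memory in GB
    used_cpus: int = 0
    used_mem_gb: int = 0

    def fits(self, cpus: int, mem_gb: int) -> bool:
        # True if the node has enough spare CPU and memory.
        return (self.used_cpus + cpus <= self.cpus
                and self.used_mem_gb + mem_gb <= self.mem_gb)

@dataclass
class Workload:
    name: str
    kind: str          # "vm" or "container"
    cpus: int
    mem_gb: int

def place(workloads: list[Workload], cluster: list[Node]) -> dict[str, str]:
    # Greedy first-fit placement; real schedulers weigh many more factors.
    placement: dict[str, str] = {}
    for w in workloads:
        for node in cluster:
            if node.fits(w.cpus, w.mem_gb):
                node.used_cpus += w.cpus
                node.used_mem_gb += w.mem_gb
                placement[w.name] = node.name
                break
        else:
            raise RuntimeError(f"No capacity for workload {w.name}")
    return placement

# Hypothetical single-node edge cluster hosting two tenants side by side.
cluster = [Node("edge-node-1", cpus=32, mem_gb=128)]
workloads = [Workload("game-server", "container", cpus=8, mem_gb=16),
             Workload("telco-vnf", "vm", cpus=16, mem_gb=64)]
print(place(workloads, cluster))

A real orchestrator would also weigh tenant isolation, hardware accelerators, and failure domains; the point here is only the shape of the problem: many workloads, few nodes, hard resource limits.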

With edge compute for tenant applications, edge wireless network services, and fiber-optic network access to the internet or to Colt’s worldwide network, Street Edge demonstrates an innovative way to deploy services in urban areas.

Figure 1. CIN Street Side enclosure deployed outside Colt’s London headquarters (Source: CIN)

Network Edge: Location Is Everything

One example of the Street Edge pilot in action comes from Edgegap, a global multiplayer gaming infrastructure and hosting provider, and an ideal use case for proving out the overall Street Edge concept. The company deploys its clients’ game servers on a global edge network to deliver the best possible player experience.

Edgegap’s algorithms find the optimal computing platform based on player locations around the world, and the Street Edge concept shows how a distributed, cloudlike infrastructure makes this possible.
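
Edgegap has not published its selection logic, so the following Python sketch is only a guess at the general shape of latency-driven site selection: given measured round-trip times from candidate sites to the players in a match, pick the site with the best worst-case latency. All site names and latency figures are hypothetical.

def pick_edge_site(sites: dict[str, dict[str, float]],
                   players: list[str]) -> str:
    # Choose the site minimizing the worst-case player latency (ms).
    def worst_latency(site: str) -> float:
        return max(sites[site][p] for p in players)
    return min(sites, key=worst_latency)

# Measured (hypothetical) round-trip times from each site to each player.
sites = {
    "street-edge-london": {"alice": 8.0,  "bob": 12.0},
    "cloud-region-west":  {"alice": 45.0, "bob": 60.0},
}
print(pick_edge_site(sites, ["alice", "bob"]))  # -> street-edge-london

Minimizing the worst-case rather than the average latency is one plausible design choice: it keeps the slowest player’s experience acceptable, which matters most in real-time multiplayer games.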

A key value delivered by NodeWeaver is the orchestration of edge applications for multiple use cases and customers. Edgegap-hosted games and a telecom service provider application were tested on the same server simultaneously to demonstrate secure multi-tenancy.

“This is where our Street Edge solution comes into the picture, with its on-demand computing and network capabilities,” says Javier Benitez, Senior Network Architect at Colt Technology Services. “Edgegap is able to instantiate new games and bring them live in real time. This is possible with Street Edge because the algorithm selects the best location, offers the lowest latency, and the best quality of experience to the end users. This is actually the first game server in the industry to run on true Edge infrastructure.”

Edge Compute Technology Opens New Doors

Intel plays a fundamental role in the Street Edge platform, well beyond powering the Advantech appliance. NodeWeaver takes advantage of technologies such as Intel® QuickAssist Technology (Intel® QAT), Intel’s Data Plane Development Kit (DPDK), and the OpenVINO™ toolkit for accelerated AI inference and maximum efficiency.

“We use Intel software to make sure these hardware capabilities are exposed to the workloads that run on top without requiring special devices, accelerators, or software libraries,” says Carlo Daffara, CEO and Cofounder at NodeWeaver. “We provide an interface that uses these technologies to allow a virtual machine, for example, to immediately recognize that there is an OpenVINO accelerator connected to the hardware and take advantage of it immediately.”
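
That device-discovery pattern is visible in the OpenVINO toolkit’s own Python API. Here is a minimal sketch, assuming the openvino package is installed and with "model.xml" standing in for a real model file; it illustrates the general mechanism Daffara describes, not NodeWeaver’s actual integration code.

import openvino as ov

core = ov.Core()
# List whatever inference devices the runtime can see on this host,
# e.g., ['CPU'] or ['CPU', 'GPU'] depending on the hardware exposed.
print("Available devices:", core.available_devices)

model = core.read_model("model.xml")            # placeholder model path
# "AUTO" lets the runtime pick the best available device for inference.
compiled_model = core.compile_model(model, "AUTO")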

The Street Edge pilot is just the start. “We’re having more and more discussions about who could benefit from this kind of infrastructure,” says Ben Bloomfield, Cofounder and Head of Strategy at CIN. “We’re showcasing the potential for hundreds of customers to dynamically scale up and down as needed. With NodeWeaver, we now have a platform capable of supporting that level of flexibility and scale.”


This article was edited by Christina Cardoza, Editorial Director for insight.tech.