Build AI Applications Faster with a Low-Code Platform

Whether the goal is to speed office tasks or impress customers with chatbots, today’s businesses are increasingly eager to deploy AI applications.

Once launched, AI applications can be a boon to productivity. But creating them can be a time sink, especially for generative AI solutions, which are powered by large language models and image recognition systems that require extensive fine-tuning and testing.

Now there’s a better way to bring AI solutions to fruition. Using a low-code platform, businesses can develop custom AI applications much faster. Low-code applications are more straightforward to maintain and customize to accommodate future use cases.

Simplifying AI Solutions Development

In the competitive world of AI applications, timing is a critical factor, says Brian Sathianathan, Co-Founder, Chief Digital Officer, and Chief Technology Officer at low-code AI platform developer Iterate.ai. “A lot of companies want to be the first to market with innovative solutions. But it’s hard to do because their IT and technology teams already have their hands full,” he says.

Sathianathan and his colleagues founded Iterate to simplify the AI application-building process, shortening development time from months to weeks. “On average, it’s eight or nine times faster to take an AI idea from concept to reality,” Sathianathan says. “Less complex AI solutions can be created up to 17 times faster.”

Iterate saves time by creating pre-written blocks of code for various AI capabilities, such as chatbots, payment systems, or image recognition. Using the company’s Interplay platform, developers can drag and drop the code blocks into their solutions.

“It’s like building a luxury home from parts delivered on a truck,” Sathianathan says. “We send you entire kitchens, bedrooms, and bathrooms, and you can put them together very quickly.” The code blocks are grouped into customized solutions for industries such as finance and insurance, retail, and automotive.

Saving Time with a Low-Code Platform

Interplay’s enterprise office solution, GenPilot, allows organizations to build their own generative AI large language models (LLMs) from internal data and documents. Many LLMs specialize in tasks such as financial planning or logistics management, and GenPilot allows companies to select the models they prefer. Though public LLM solutions such as ChatGPT and Microsoft Copilot can also be used for generative AI solutions, some companies hesitate to upload their information to them.

“Public models are shared in a multi-tenant cloud environment. We provide a secure private environment, and companies can run their models on-premises,” Sathianathan says. Banks, insurance companies, and other organizations can also build in compliance rules governing data in various regions.

For employees, GenPilot saves hours by gathering and interpreting documents across databases. For example, if an insurance customer emails a company representative with a question but neglects to supply their policy number, GenPilot will not only find it but also determine how the policy applies to the question, how much the customer pays for services, and whether a change would alter the fees. It then composes a reply to the customer’s email.

“It responds intelligently in plain English,” Sathianathan says. Companies can set rules governing tone of voice and level of technicality.
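In engineering terms, this is a retrieve-then-compose workflow: search internal documents for the relevant records, then let a private model draft the reply under the company’s tone rules. GenPilot’s internals are not public, so the sketch below is only a minimal illustration of that general pattern; the doc_index and llm objects and their methods are hypothetical stand-ins.

```python
# A minimal sketch of a retrieve-then-compose workflow. The doc_index
# and llm interfaces are hypothetical placeholders, not GenPilot APIs.
def answer_customer_email(email_text: str, doc_index, llm) -> str:
    # 1. Locate relevant policy records, even if the customer omitted
    #    the policy number, via search over internal documents.
    records = doc_index.search(email_text, top_k=5)  # hypothetical API

    # 2. Assemble the retrieved passages into a context block.
    context = "\n\n".join(r.text for r in records)

    # 3. Ask the private, on-premises model to compose a reply,
    #    constrained by rules on tone and level of technicality.
    prompt = (
        "Using only the context below, answer the customer's question "
        "in a friendly, plain-English tone.\n\n"
        f"Context:\n{context}\n\nCustomer email:\n{email_text}"
    )
    return llm.generate(prompt)  # hypothetical on-prem model client
```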

For unstructured documents, such as PDFs, employees can use a different solution, the Interplay OCR Reader. This application translates images into machine-readable text and initiates workflows. For example, when bank employees upload customers’ scanned documents to the OCR Reader, it extracts relevant information and enters it onto a loan application form.

Streamlining Retail AI Management

One of Iterate’s latest solutions is Interplay Drive-Thru, which builds voice-enabled chatbots to take customer orders and make upselling recommendations at busy quick-serve restaurants (QSRs).

Chronic labor shortages often require QSR workers to perform multiple duties: packaging food, taking payments, and serving in-store customers as well as those at the drive-thru. “Chatbots give them a little more breathing room,” Sathianathan says. Orders are processed faster, shortening lines for customers and increasing throughput for restaurants.

Drive-thrus and other retailers can speed payments with Interplay’s LPR (license plate recognition) solution. Customers who opt in supply a photo of their license plate and credit card, and are recognized by computer vision cameras as soon as they arrive at a participating business. Interplay LPR, which complies with GDPR and other privacy regulations, is currently deployed at more than 1,000 gas stations and convenience stores in Europe.

“It will automatically open the pump for customers and charge them for gas. These actions happen within 30 milliseconds,” Sathianathan says.

Interplay’s LLM solutions are deployed on Intel® processors. Applications that run on high-performance CPUs are more cost-effective for businesses than those that also require GPUs, as many LLM solutions do.

“A system using only CPUs costs $2,500 to $4,000 per machine. An equivalent GPU/CPU combination would be $8,000 to $12,000,” Sathianathan says. Retail IT teams are also more familiar with standard operating systems, reducing training time.

Once a low-code solution is deployed, developers can easily move the same Interplay code blocks into new solutions, instead of having to sort through millions of lines of code to make changes. In addition, Interplay’s code blocks use the Intel® OpenVINO™ toolkit, enabling developers to optimize their AI applications more efficiently. “You can use up to 3.5 times less compute power with OpenVINO. That’s a huge benefit,” Sathianathan says.
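For readers unfamiliar with OpenVINO, the sketch below shows what CPU-targeted deployment typically looks like with the current OpenVINO Python API: load a converted model, compile it for a device, and run inference. The model path and input shape are placeholders, not Interplay specifics.

```python
# A minimal OpenVINO inference sketch (OpenVINO 2023+ Python API).
# "model.xml" and the input shape are illustrative placeholders.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # converted IR model
compiled = core.compile_model(model, "CPU")  # optimize for CPU execution

# Run one inference on a dummy image-shaped input.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy])[compiled.output(0)]
print("output shape:", result.shape)
```

Swapping the device string (for example, to "GPU" for Intel integrated graphics) retargets the same compiled-model call pattern without changing the rest of the code.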

Bright Future for Low-Code AI Solutions

Today’s AI applications enable companies to automate processes in ways that would have been unthinkable just a few years ago, Sathianathan says. “AI solutions can do sales calls. They can generate legal documents, which are traditionally expensive to produce.”

Using low-code building blocks, small businesses as well as large enterprises can develop solutions like these quickly and affordably. That will help to expand the reach of AI applications and level the playing field, Sathianathan says: “Very soon you will see many new automation capabilities being developed. Startups will be able to punch above their weight, and costs will continue to come down for everyone.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Built-in Functional Safety Speeds Robot Development

In today’s factories and warehouses, robots are no longer fenced off from humans. The two often work side by side, with robots taking over arduous tasks like transporting heavy objects—or tedious ones, like spray-painting or palletizing goods.

These collaborative robots, or “cobots,” increase efficiency and reduce the risk that workers will develop muscle strains or injuries. But ensuring that they interact with people safely is no easy feat. Robot developers can spend years building, testing, and retesting safety features, which must meet rigorous certification requirements. That delays the release of models with the latest and greatest capabilities and lengthens time to revenue.

But now there’s a way for machine builders to bring their robots to market sooner. Building robots with pre-certified processors, control boards, and electronics can spare developers months or years of extra work.

Faster Robot Development

Critical systems like robots that work in cooperation with humans must include robust functional safety (FuSa) controls. FuSa is an international standard methodology for automatically detecting and mitigating electronic system malfunctions in critical systems—in this case, malfunctions that could harm people. For example, if a robot’s FuSa system indicates that it is veering off-course or traveling too fast, it will send a signal to stop all moving parts.
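Conceptually, a FuSa supervision loop reduces to comparing measured state against certified limits and forcing a safe stop on any violation. The sketch below is purely illustrative: the robot interface is hypothetical, the limits are example values, and a real certified implementation would run on redundant, diagnostics-monitored hardware.

```python
# Illustrative FuSa-style supervision loop. The robot interface is
# hypothetical and the limits are example values, not certified ones.
MAX_VELOCITY_M_S = 1.5        # assumed safe travel speed
MAX_HEADING_ERROR_DEG = 5.0   # assumed allowable course deviation

def safety_monitor(robot) -> None:
    """Poll motion state and force a safe stop on any violation."""
    while True:
        state = robot.read_motion_state()  # hypothetical sensor read
        too_fast = state.velocity > MAX_VELOCITY_M_S
        off_course = abs(state.heading_error) > MAX_HEADING_ERROR_DEG
        if too_fast or off_course:
            robot.emergency_stop()     # signal: stop all moving parts
            robot.report_fault(state)  # surface the error upstream
            break
```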

To gain approval for their cobots, developers must build in FuSa controls for every action they perform that could affect people. Their traveling velocity, the amount of force they use to grasp an object from a human hand, the torque they exert in rotation—these and many other variables must meet exacting ISO standards, and sometimes country-specific ones as well. Both hardware and software related to functional safety must obtain certification.

For the hardware, each of the thousands of electronic components in a robot’s embedded computer must obtain certification from a qualified institute. If a developer builds robots from scratch, the process can take several years. “That’s why we created a safety controller that uses pre-certified CPUs,” says Weihan Wang, Manager of Robot Products at NexCOBOT, a NEXCOM International Co., Ltd. company and developer of intelligent robot control and motion control solutions.

The NexCOBOT SCB 100 safety control board contains pre-certified Intel Atom® x6000 series processors, saving time for both NexCOBOT and its developer customers. “We don’t need to prove the CPU is safe because Intel has already done that,” Wang says. In addition, the entire SCB 100 board itself is FuSa certified.

Along with its silicon and software, Intel provides technical documentation such as safety manuals, safety analysis, and user guides, which also makes the certification process faster and simpler.

With all the hardware pre-certified, robot builders using the SCB 100 board can develop their applications immediately, instead of waiting for hardware approval first. They can further speed software development using built-in Intel software libraries, which enable them to easily import existing applications and develop customized safety protocols to fit specific customer needs.

Ensuring FuSa for Critical Systems

The SCB 100 control board safeguards robot actions with Intel® Safety Island (Intel® SI), which is integrated into the Intel processor. Safety Island supports functional safety, orchestrates on-chip diagnostics, reports errors, and monitors customer safety applications. When a robot is in operation, Safety Island constantly checks its calculations for direction, speed, force, and other factors in real time to make sure it operates correctly. “There are over a hundred different issues that could cause a problem, including a power deviation or a memory failure,” Wang says.

If a safety error occurs, the system brings the robot to an immediate halt and sends feedback about the problem to the operator’s systems integrator.

The processors have the performance power to run multiple AI and computer vision workloads—combining non-safety motion control with safety applications. This allows developers to build in more functionality while saving space and money. The result is a lighter, more compact robot that is easier for customers to install and deploy in tight spaces.

The Future: Robots as Partners

As robots learn to do more, their interactions with humans start to look less like command-and-control and more like teamwork. For example, instead of using a handheld safety pendant for robot training, a developer may directly hand the robot an unfinished part, then guide it to a CNC machine, where the robot deposits the part for contouring.

“In the future, there will be more and more collaboration between humans and robots,” Wang says. In the next five to 10 years, he expects to see “humanoid” robots—with artificial arms and legs—working alongside people in factories, shops, and warehouses.

The more duties robots assume as they work with humans, the more built-in safety they will need. Regulators who once required developers to provide two or three FuSa controls are already asking for more than 30, Wang says. As more-capable robots march onto factory and warehouse floors, the pressure to release models with advanced features will increase. Using a pre-certified safety control board will help developers bring highly complex models to market faster.

Using high-performance chips will help, too, Wang says, adding, “High-end computing performance allows robots to execute lots of safety functions, and they can do it without using multiple CPUs.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI in Healthcare Advances Diagnostic Accuracy and Speed

AI in healthcare is changing the face of diagnostic medicine, helping doctors work more accurately to improve patient outcomes.

Use of edge AI in endoscopy procedures is a prime example. Endoscopy involves inserting a tube with a camera (an endoscope) into the body to obtain images or video of the patient’s organs and tissues. Endoscopy procedures have multiple uses, but among other things they are a vital diagnostic tool that helps gastrointestinal (GI) medicine specialists screen for cancer. Endoscopies allow these physicians to detect polyps (benign but potentially problematic growths) and, in particular, adenomas, which are polyps that doctors consider precancerous.

But even the most experienced doctors may be challenged to reliably interpret images from an endoscopy.

“The medical literature tells us that physicians fail to spot polyps during colonoscopies at a rate of 22% to 28%,” says Sabrina Liu, Product Engineer at ASUSTeK Computer Inc., a global developer of diversified computing products. “It’s inherently difficult work: Some adenomas are extremely small and hard to see, while polyps have different morphologies that can make them easy to miss on a video feed.”

In addition to the technical challenges of endoscopies, there are also basic human limitations. For example, a doctor at the end of a long shift might be more fatigued and prone to mistakes than at the start of the day. And a junior clinician is unlikely to be as proficient as a more experienced colleague at interpreting medical imagery.

Today’s innovative solutions use edge AI and computer vision to enhance traditional endoscopy equipment. And these systems have already been deployed in real-world clinical settings—with promising results.

Clinical Deployments Demonstrate Improved Accuracy

The ASUS Endoscopy AI Solution EndoAim, currently used at multiple hospitals in Taiwan, is a case in point.

The system highlights AI-detected polyps on the screen in real time by analyzing up to 60 images per second, calling attention to anything the physician may have missed. If they want to inspect a region of interest more closely, they can switch to narrow-band imaging (NBI) and the system will automatically classify selected polyps as adenomas or non-adenomas. Doctors can also use the system to perform one-click measurements of polyps, whereas before they typically determined polyp size by visual judgment, which had a relatively low accuracy of approximately 62.5%.

The results of the solution in clinical settings are impressive. “Physicians have seen their adenoma detection rates improve by 15% to 20% on average,” says Liu. “There is also a significant improvement in detecting small polyps—as well as time savings, because doctors can now measure polyps more quickly and accurately during endoscopies.”

AI Toolkits, Edge Hardware, and Collaboration Speed Time to Market

Using edge AI to improve the accuracy and diagnostic consistency of endoscopies will likely appeal to many physicians—and the physical features of these systems add further incentives for adoption.

EndoAim is based on a miniature edge PC with a compact form factor of 12cm x 13cm x 5.5cm—a critical consideration in hospital examination rooms where space is at a premium. In addition, the system can be connected to existing endoscopy equipment without specialized medical hardware, making it easier and more cost-effective for clinicians to begin using AI immediately.

The ASUS partnership with Intel was crucial in developing a market-ready product. “Intel CPUs with integrated graphics processing helped us reduce our solution’s overall size—and achieve an image analysis rate of 60 FPS, which is the highest rate currently available to physicians,” says Liu. “Using the Intel® OpenVINO™ toolkit, we also optimized our computer vision models, enabling them to run more smoothly and efficiently.”
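To make the 60-frames-per-second figure concrete, a per-frame detection loop of this general shape is sketched below. The model, video source, input size, and output format are hypothetical stand-ins; ASUS’s actual pipeline is not public.

```python
# Hypothetical per-frame polyp detection loop using OpenVINO and OpenCV.
# Model path, input size, and output layout are illustrative only.
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("polyp_detector.xml", "CPU")  # placeholder model

cap = cv2.VideoCapture("endoscopy_feed.mp4")  # placeholder video source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 640))
    # Reorder HWC -> NCHW for the assumed model input layout.
    blob = frame.transpose(2, 0, 1)[None].astype(np.float32)
    detections = compiled([blob])[compiled.output(0)]
    # Assumed output: rows of [x1, y1, x2, y2, score] in pixels.
    for x1, y1, x2, y2, score in detections.reshape(-1, 5):
        if score > 0.5:
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                          (0, 255, 0), 2)
    cv2.imshow("AI-highlighted polyps (illustrative)", frame)
    if cv2.waitKey(1) == 27:  # Esc exits
        break
cap.release()
cv2.destroyAllWindows()
```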

The two companies’ collaboration shows how technology partnerships make it possible to offer powerful solutions to medical device buyers—and do it faster than ever before.

“We started work on EndoAim in 2019 and had an early model toward the end of 2020, which is when we turned to Intel for engineering support,” says Liu. “By 2021, we had the version of the product that we wanted to take to market.”

The Future of AI in Healthcare: GI Medicine and Beyond

The fact that solution providers can build edge AI systems more quickly and effectively is good news for doctors, patients, and healthcare SIs, as it will no doubt enable other use cases in coming years.

ASUS is already at work on some of those new use cases with its current endoscopy system. Liu says the company plans to expand its computer vision solution to other aspects of gastrointestinal medicine, such as the analysis of imagery from the upper GI tract and the stomach. In addition, ASUS engineers are looking at ways to use AI to build solutions that go beyond detection and diagnostic support and enable the prediction of illness, helping doctors to catch potential problems earlier so patients can begin treatment sooner.

Beyond GI medicine, the underlying computer vision algorithms behind EndoAim could eventually be applied to other types of medical imaging. “We see the potential to expand this technology to analyzing imagery from ultrasounds, X-rays, MRIs, and more,” says Liu. “There’s a tremendous opportunity to help people here, and we’re excited to hear from clinicians in different medical fields and see how we can develop solutions to meet their needs.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Digitizing the Manufacturing Supply Chain from End to End

Addressing supply chain inefficiencies continues to be a problem for manufacturers. Legacy systems and information silos cause inventory shortages and production delays.

This podcast explores how digitizing the manufacturing supply chain, from raw materials to delivery, can revolutionize your operations. We discuss how AI, real-time data analysis, and other technologies can optimize performance, unlock valuable insights, and shape the future of supply chain management.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guests: iProd and Relimetrics

Our guests this episode are Stefano Linari, CEO of iProd, a manufacturing optimization platform provider; and Kemal Levi, Founder and CEO of Relimetrics, a machine vision solution provider.

iProd is an Italian startup founded in 2019. There, Stefano focuses on creating software-as-a-service solutions for manufacturing companies of all sizes.

Relimetrics was first established in 2013. At the company, Kemal leads a global team committed to the Industry 4.0 movement and transforming how manufacturers design and build products.

Podcast Topics

Stefano and Kemal answer our questions about:

  • 2:41 – Manufacturing supply chain pain points
  • 4:47 – Supply chain areas ripe for digitization
  • 7:49 – Technologies optimizing supply chain efficiency
  • 11:14 – AI’s role in modernizing the supply chain
  • 12:59 – Real-world manufacturing supply chain efforts
  • 23:00 – The value of leveraging technology partnerships
  • 27:57 – The future of the supply chain from end to end
  • 30:18 – How AI is going to continue to evolve this space

Related Content

To learn more about the manufacturing supply chain, read AI Boosts Supply Chain Efficiency and Profits and Unified Data Infrastructure = Smart Factory Solutions. For the latest innovations from iProd, follow them on LinkedIn. For the latest from Relimetrics, follow them on Twitter/X at @relimetrics and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” formerly known as IoT Chat but with the same high-quality conversations around IoT technology trends and the latest innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to explore digitizing the manufacturing supply chain with experts from Relimetrics and iProd, but as always before we get started, let’s get to know our guests. We’ll start with Kemal from Relimetrics first. Please tell us about yourself and your company.

Kemal Levi: Hi, I am Kemal Levi, Founder and CEO for Relimetrics. We enable customers with a proven, industrial-grade product suite they can easily use to control and automate quality assurance processes across use cases with no code. And using our product our customers are able to build, deploy, and maintain mission-critical AI applications on their own, in conjunction with any hardware. This can be done both on-prem or in the cloud, and a key industry challenge that our product repeatedly succeeds in tackling is our ability to adapt to high production variability, which is commonly experienced in today’s manufacturing.

Christina Cardoza: Great. Looking forward to getting into that and how that is going to impact the supply chain or bring benefits to the supply chain. But before we get there, Stefano Linari from iProd, please tell us about yourself and the company.

Stefano Linari: Hello, I am Stefano, Stefano Linari. I am the Founder and CEO of iProd. iProd is an Italian startup founded in 2019 to create the first holistic tool designed for manufacturing companies of all sizes, accessible for free and as a software as a service. Our users can leave behind tons of poorly integrated software, like ERP, Amira, CRREM, and IoT platforms, and use just one modern cloud platform: our platform.

Christina Cardoza: Awesome. So, I wanted to start off the conversation just getting the state of things right now. Obviously a couple of years ago the supply chain was headlining in the news almost every day for weeks on end—just the challenges and the obstacles. But I feel like there’s been a lot of integration and advancements in the technology space, that those pain points we were feeling a couple of years ago we have been able to get over a little bit.

But I’m curious what challenges still remain or where are the pain points today. Stefano, if you want to talk a little bit about what’s going on at the manufacturing and supply chain level.

Stefano Linari: Yeah. The supply chain unfortunately is still poorly integrated, especially for small and medium enterprises, where digital tools are not up to date and not easy to integrate because they are legacy technology. We are far away from the concept of so-called manufacturing as a service, where manufacturing capabilities are accessible in a fluid way. That is part of the ask for a highly integrated, multi-tier supply chain, able to digitally orchestrate and provide a custom-made piece while optimizing cost, impact, and use of resources.

Unfortunately, on the other side of the supply chain, if you look at the OEMs, we face other issues. Companies are not able to serve the new paradigm for their industry, the machine customer, where a digital product is able to autonomously purchase spare parts and accessories from the OEM itself and even from third parties. For example, a turning machine that, after digitalization, can order a belt or a gear after a certain number of working hours. This is still far from reality.

Christina Cardoza: Yeah, you make some great points there Stefano, and one thing I want to discuss a little further is you mentioned a lot of the problem is that there’s still legacy systems in place, and I’m sure that’s creating a lot of silos that these machines can’t talk to each other. Data is not end to end.

So, Kemal, I’m curious from your perspective where are some areas that manufacturers can start digitizing aspects of the supply chain and how that’s going to help address some of the pain points Stefano just mentioned?

Kemal Levi: First of all, digitizing aspects of manufacturing helps to trace quality across the supply chain. As parts move along the supply chain, quality automation helps to identify anomalies before they get to the customer and risk downtime. So for the entire supply chain, and particularly for the OEMs, it is really important to trace the quality status of parts or products from a multitude of suppliers and also run data analytics to see which one is actually performing better and weed out those vendors who are not performing well.

Now, digitizing aspects of manufacturing also helps to improve the bottom line. As manufacturers ship products to their customers, they must identify issues with outbound transportation and logistics. A magnifying lens looking at different points of the supply chain gives better visibility to improve margins, and in the sectors that we typically serve, margins are often razor thin. So maximizing the number of items getting to the end of the manufacturing line that meet the required quality standards has a direct impact on the bottom line.

Another example is that digitizing aspects of manufacturing helps companies make better supply chain decisions. Correlating acquired data across the product life cycle—and this can be all the way from manufacturing to sales to service—enables continuous business intelligence. And a company that can trace quality in real time and do a better assessment of where quality issues originate can ultimately boost profitability.

Christina Cardoza: Yeah, absolutely. I’m glad you mentioned the quality-automation aspect of the supply chain. I feel like sometimes when we talk about supply chain challenges, we are often thinking about deliveries and shipments and getting manufacturing production out the door. But it also starts—it’s an end to end issue—it starts on the factory floor; it starts as you are developing these products, making sure that everything is high quality, that it can go out the door and can be delivered on time. So that’s a great point that you made, and then looking at the different points of the supply chain so that it’s really an end-to-end experience.

Stefano, I’m curious, as we look at quality automation and all of the different parts manufacturers need to be on top of in order to have these end-to-end digitized supply chains, what are the technologies that are being used? Or how can we start enhancing and optimizing supply chain efficiency?

Stefano Linari: From our side, all these things can start from the demand side. If we start to build intelligent machines that can be transformed into machine customers, we can create more predictable demand. We can avoid rushing to produce spare parts and installing them in an unplanned way, creating a simpler condition to optimize the supply chain. So from our side, over the last year, we have been pushing this new paradigm inside OEMs.

What we have created to support OEMs in handling a new generation of machines that we call “machine customers” is a free, self-service interface in the cloud where each OEM can create their rules and their identity, the digital twin of every machine, built in a few minutes. Gartner, in its latest book, “When Machines Become Customers,” recognizes our platform as the first machine-customer-enabling platform in the world.

We are then creating the conditions to digitalize the supply chain. Because when you speak about potential savings, entrepreneurs are interested, but they are engaged when you tell them about increasing revenue. And with our technology embedding new intelligence on board the machine, we are transforming a production tool into a point of sale. And this is a remarkable shift in the mindset of the OEM that is easily understandable.

Christina Cardoza: So I’m curious, because we were talking about the legacy systems earlier in the conversation, is this a software approach that we can take to digitizing the supply chain? Or does there have to be investments in new hardware? Or can we leverage existing infrastructure?

Stefano Linari: We have to combine both, because for sure software platforms can make the interface and user experience simple, but we can’t forget that manufacturing tools and equipment, automatic warehouses, and production machines are not yet intelligent enough to analyze their needs and to simplify the life of the end user and the OEM. So we need a combined approach at the moment.

Christina Cardoza: Great. And of course when we are talking about adding intelligence and doing things like quality automation, AI comes to mind. AI seems to be everywhere these days. Kemal, you mentioned you were—you have an AI approach to being able to provide that quality automation and look at different parts of the supply chain. So I’m curious, from your perspective, what is the role that AI should be playing in these supply chain processes?

Kemal Levi: Well, AI in supply chains can deliver the powerful optimization capabilities required for more accurate supply chain inventory management. It can also help to improve demand forecasting and reduce supply chain costs, and this can happen all while fostering safer working conditions across the entire supply chain. Traditionally the supply chain has relied on manual inspections and sorting.

So I would like to give an example that centers around smart inventory management. This process—the inventory management process—can be labor intensive and prone to error, adding to costs. Today, AI-driven quality-automation tools like ReliVision can be deployed without requiring any programming skills or prior machine learning knowledge, and they can offer access to real-time information that can improve efficiency and visibility. Similarly, AI can also be used in conjunction with computer vision and surveillance cameras to monitor work efficiency and safety objectively, and provide data-driven insights for businesses to optimize workflows and improve their productivity.

Christina Cardoza: So do you have any customer examples? I know you just provided the inventory use case, but I’m curious if you have any customer examples that you can share with us: how, what problems they were facing, and how Relimetrics came in and was able to help them and what the results were.

Kemal Levi: A good example is a renewable energy leader that engaged with us to help them inspect their wind turbine blades before they’re released to customers. Using our AI-based quality-automation and non-destructive inspection-digitization platform, our customer is today able to automate the inspection of phased-array ultrasonic data and assess the condition of blades before they are placed in the field.

And the main challenge that our typical customer has is to digitize inspections, which are time-consuming and prone to errors, and improve traceability across their supply chains. With our product our customers can rapidly implement AI-based machine vision algorithms on their shop floor, and they don’t need to write a single line of code while doing this. They can share trained models across inspection points and leverage existing camera hardware, irrespective of image modality—whether it’s infrared, X-ray, or PAUT.

Christina Cardoza: I love the no-code approach that you guys are taking, because a lot of manufacturers see these benefits and want to achieve them, but there are labor shortages happening in their space, and they don’t always have the skills or the capacity to deploy these solutions as fast as they’d like. So, I love seeing how we can make it more accessible.

When you have these no-code solutions, who are the type of users that are able to implement some of these in practice? Do you need those engineers? Or is it really an operator or a manufacturing manager that’s able to take part in this as well?

Kemal Levi: Well, we would like to enable process engineers to be able to build AI solutions, and not only build but also deploy these AI solutions and then maintain them. So what we see is that maintenance of AI solutions can also be quite costly. So we are making it possible for non-AI engineers to be able to maintain AI solutions.

Now we can of course serve AI engineers as well; we can help them prototype their AI solutions faster and deploy them to the field. The maintenance piece, again, is an important aspect that AI engineers typically would like to transition to operators after they are successful in the field. And this is exactly what we do: We make it possible for the maintenance of AI models, and the training of new AI algorithms for new products and new configurations, to be done by non-AI folks.

Christina Cardoza: Yeah. It’s amazing to see how far technology has come, and how non-AI folks can be involved—especially since these people are the ones on the factory floor with the domain intelligence, so they can spot the quality issues or be able to train some of these models better than an AI engineer probably would if they don’t have that deep manufacturing experience.

Stefano, I’m curious, from iProd’s side, what are the solutions and products that you guys have on the market that are helping your customers in these different areas? And if you had any customer examples that you could share with us as well.

Stefano Linari: Yeah, we have several use cases of machine customers, spreading from the concrete industry to industrial filtration and manufacturing. But I want to present the most significant case, which was done with Bilia. Bilia SpA is the third-largest turning center builder, and their machines are sold to automotive companies, manufacturers of consumer goods, and many other industries where metal parts are needed.

Most of those machines, you can imagine, are installed on a shop floor, even in small and medium enterprises—you know that in Italy, but in Europe in general, most companies have under nine employees. So you can imagine that no IT expertise can be found, especially on the customer side.

So we have enriched, equipped, this turning machine with an external brain—it’s a panel PC, technically speaking, but we like to describe it as an IoT tablet to make it more friendly for the end user—and with this tablet we have two connections at the same time: one with the CNC of the machine, so we can acquire real-time data about usage and consumption of resources; and the other connection, usually Wi-Fi or 4G, to the iProd cloud.

This solution—it’s a bundled solution, because we have to provide security and trust to the end user that no sensitive data about their process and their secret sauce for creating the perfect piece are exfiltrated. Then, in the cloud, Bilia—the manufacturer—with their process engineers and maintenance engineers, uses a visual approach, as Kemal described before. So even in this case no programmer, no coder is needed; you have a wizard in the cloud where you can simply drag and drop spare parts and services from the Bilia catalog onto conditions that can be simple rules (every 1,000 hours, please change the filters or fill the oil) or forward-looking AI and ML that can predict more accurately what must be changed.

The main point when we started this project was, “Okay, but why does the end customer have to accept that the turning machine will ask him to buy something? I have spent €200,000 for this turning machine, and every day it asks for more money? Why do I have to pay?” And so it was a bit scary, but the customers not only accepted the recommendations, they asked the machine for more. They allocated a dedicated budget to the machine itself—usually in the order of €200 per month, no big budget, but in the most efficient way. Because under this level the machine can automatically place the order, and you receive a notification on your mobile: “Hey Stefano, in a couple of days you will receive the new filter.” Or a new belt, and so on. For €50, €60, because most of the spare parts are cheap. But we tried to estimate the cost of placing the order and processing the order, and this is never lower than €50 for each side.

So the end user knows that if the machine never stops and autonomously buys the needed spare parts, consumables, and periodic service, he is saving money. And probably the same items purchased in an autonomous way are even cheaper, because on the other side someone has to spend time answering an email, answering the phone, sending a contract, and so on. So something that at the beginning sounded very difficult to do—because these are not very digital guys—has become a real market success.

Christina Cardoza: Yeah. And I’m sure that is a common scenario in the industry: not knowing where to start, being worrisome of getting started, how much it’s going to cost, how complicated it’s going to be, if it’s going to be wasted effort. So it’s great to see how manufacturers can partner with companies like iProd and Relimetrics to be able to integrate some of this and really make improvements in the supply chain.

One thing that comes to mind—and I should mention, insight.tech, we are sponsored by Intel—but we’re talking about artificial intelligence and the cloud and real-time capabilities and insights into some of these things that I’m sure that you guys are working with other partners to make this all happen end to end, much like your customers. Sometimes we need to rely on expertise from other areas.

So, curious about how you’re working with partners like Intel, and what the value of that and their technology is. Kemal, I can start with you on that one.

Kemal Levi: In our implementations we are taking advantage of Intel processors and Intel hardware such as Intel® Movidius vision processing units, and we are also often relying on Intel software such as OpenVINO to optimize deep learning models for real-time inferencing at the edge.

Now in the case of quality automation or digitizing visual inspections, customers are very sensitive about computing hardware costs, and they really do care quite a bit about smart utilization of CPU, so we use the Intel OpenVINO toolkit to minimize the burden. And also as an Intel market-ready solution provider we have access to a large community of potential buyers of our product.

Christina Cardoza: Great. We always love hearing about OpenVINO. That is a big toolkit in the AI space, like you mentioned, taking some of the burden off of engineers and letting them build a model once and deploy it on many different hardware platforms. So it’s great to hear.

Stefano, I’m curious from iProd’s end, how are you guys working with partners like Intel, and what are the areas that their technology really helps the iProd solution be able to benefit customers?

Stefano Linari: At the moment we widely use Intel embedded mobile processors, because even if we don’t run a heavy AI and ML workload, what our customers want for sure is to reduce energy consumption at the edge. You have to consider that each IoT tablet is installed on top of a production machine in a harsh environment, so we need a fanless processor with high computing power and low standby consumption.

We also use Intel connectivity for Wi-Fi, because we need connectivity that can be reliable in EMC terms, in difficult spaces where you have welding machines and high-power robots, and this is what we are using now. OpenVINO and the new Core Ultra processors are also on our radar. We are starting to experiment with these features to accelerate the ML and AI models that predict usage, because we combine in the tablet—I didn’t mention this before—not only IoT data that comes in from the CNCs, but also, from the cloud, the schedule of the next batch to produce.

And what we are trying to do is forecast the production, because you have to calculate how many working hours this machine will do if we win this deal. Most of the calculations have to be done on the edge, because the customer doesn’t want to move sensitive information outside their company. For a manufacturer that produces pieces for the aerospace industry or for high-end machines—a supercar like Ferrari, just to name a brand—the technology inside the software of the CNC machines is half of the value of the company, and you don’t want to transmit this information even to iProd; you want to process all the information on the edge.

Christina Cardoza: Yeah, absolutely. One thing I love about these processors and toolkits is that the technology seems to be advancing super fast every day. Some things that we were just interested in a month ago are now becoming reality, and manufacturers sometimes have trouble keeping up with all of the advances and getting all of the benefits. But partners like Intel are really making new changes every day to ensure that we can continue to keep up with the pace of innovation.

I’m curious, Stefano, how else do you think this space is going to change? What do we have to look forward to for the future of the supply chain?

Stefano Linari: I agree with what Kemal said before: What we see is a digital continuum—from the machines to the OEM to the suppliers of the OEM—to create a continuum of information. Because we don’t want to spend time on the order process. This is the piece that is considered a loss of time, and Amazon and other online stores are driving the user experience. Because now B2B requests are inspired and driven by the B2C experience of day-to-day life.

The second main point that is pushing digitalization, and will become mandatory in the next few years, at least in Europe, is ESG regulation and the so-called Supply Chain Act. A company in 2026 has to present an ESG report, so they have to account for the emissions that each process in the company generates, and the main focus is on the manufacturing side, obviously. And with the Supply Chain Act you have to provide this information not only to the public through the ESG report, but you also have to share data with your customers in real time or near real time. This means that the supply chain must be heavily integrated in the next few years.

Christina Cardoza: Great point. And you mentioned sustainability earlier, where we were talking about how some of these things can help worker safety. There are so many different areas that we can talk about, and we’ve only scratched the surface in this conversation.

Unfortunately we are running out of time. So, before we go, Kemal, I just want to throw it back to you one last time. If there’s any final thoughts or key takeaways you want to add—what we can expect from the future of supply chain management? Or how else AI is going to continue to evolve in this space?

Kemal Levi: Well, I think, as I said before, there will be a lot of focus on real-time data analytics and correlating acquired data across the product life cycle. And this goes all the way from manufacturing to sales, to service, to overall enable continuous business intelligence and help to derive better supply chain decisions.

And I think looking to the future companies will strengthen demand planning and inventory management in tandem with their suppliers. There will be data visibility at all levels, whether it’s from in-house manufacturing, suppliers and logistic partners, or customers and distribution centers. The supply chain will no longer be driven by uncertainty in demand and execution capabilities, and overall it will be characterized by continuous collaboration and flow of information.

Christina Cardoza: Well, I can’t wait to see how that all starts to shape up over the next couple of years, and to see the advancements and innovations Relimetrics and iProd continue to make in this space. So I invite all of our listeners to visit the iProd and Relimetrics websites to see how they can help you digitize the supply chain from end to end and really get that continuum of information in all aspects of your business.

And also visit insight.tech, where we will continue to keep up with iProd and Relimetrics and highlight the innovations that are happening in this space. Until next time, this has been “insight.tech Talk.” Thanks for joining us.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Edge AI Paves the Way to Smarter Seaport Management

With the majority of international trade goods shipped by sea, ports are vital engines of business and economic growth. But as populations surge, emerging economies develop, and the volume of global trade increases, seaports face serious challenges.

“Port authorities today are struggling to manage vehicle traffic in and around ports, leading to inefficiency and delays,” says Sim Tiong Yan, Business Development Manager at Gamma Solution SDN BHD, a provider of smart city solutions. “Worker safety and port security are also major concerns.”

Ironically, the most significant port traffic challenges don’t involve ships but rather land vehicles that transport cargo. That may come as a surprise, but there are several reasons why ground traffic is so problematic in port areas.

Every truck arriving at a port must first check in with the port authority. The registration process is usually manual and can be quite slow—resulting in long lines of vehicles waiting to check in and creating traffic bottlenecks. In addition, drivers sometimes disobey port traffic regulations: stopping in no-parking zones, speeding, driving the wrong direction on a one-way route, or staying longer than their allotted time. This can interfere with operations and further slow the flow of traffic into and out of the port.

Further, the ongoing issue of port backups has caused environmental concerns, making it imperative to come up with innovative port management solutions.

Port Traffic Management Challenges and Solutions

The good news is that smart city solutions based on edge AI and computer vision help manage port traffic more effectively while also improving port safety and security. Built on flexible, modular edge hardware, these solutions can be deployed to ports all around the world and customized to suit local needs.

The Gamma TITANUS EYEoT solution, for example, employs optical character recognition (OCR) to streamline vehicle check-in by automatically registering each vehicle’s license plate at entry and capturing the cargo container codes that truck drivers will need. Computer vision helps detect illegal parking and traffic violations, and measures the total time each vehicle has spent in the port. If a problem is detected, an official receives an alert so they can take corrective action.
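As a rough sketch of the OCR step, the code below binarizes a plate image and extracts its text, with the open-source Tesseract engine standing in for the TITANUS OCR; the registry object and its register_arrival hook are hypothetical.

```python
# Illustrative OCR check-in step. Tesseract stands in for the actual
# TITANUS OCR, and registry.register_arrival is a hypothetical hook.
import cv2
import pytesseract

def check_in_vehicle(plate_image_path: str, registry) -> str:
    img = cv2.imread(plate_image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu binarization makes plate characters stand out for OCR.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    plate = pytesseract.image_to_string(
        binary, config="--psm 7"  # assume a single line of text
    ).strip()
    if plate:
        registry.register_arrival(plate)  # hypothetical check-in API
    return plate
```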

Edge AI Offers Safety, Security, and Equipment Monitoring

Gamma’s solution helps solve key safety challenges facing port managers, such as detecting hard hats and reflective vests to help ensure that workers comply with proper procedures. The system’s AI object recognition algorithm can also differentiate between humans and vehicles, sending warnings to port operators if a person wanders into a vehicle-only zone—or if a truck goes through a pedestrian area.

In addition, the system contains equipment-monitoring capabilities for sensitive and potentially hazardous machinery. For example, ports often house chemical facilities, where tanks are carefully monitored to ensure that they do not exceed the safe temperature range, as an overheated tank could result in a fire or explosion. The TITANUS system uses thermal cameras and AI analytics to measure tank temperature, alerting a safety officer if danger is detected.

Combining cameras and AI also delivers more effective port security. The Gamma computer vision-based intrusion detection module can identify an unauthorized person trying to sneak into the port—but won’t create a false alarm if an object lands on the perimeter fencing. Biometric technology enables tiered access to sub-areas within the port. For example, an IT technician might be granted access to office areas, but not industrial zones.

ASEAN Case Study Highlights Potential for Customization

A good example of smart city solutions comes from Gamma’s custom deployment at a port in Southeast Asia. A port operator had several safety and efficiency problems they wanted to solve. Gamma’s engineers proposed three possible implementation approaches:

  • Run the system on edge AI boxes and AI cameras, with all processing and automation performed right at the edge.
  • Connect standard IP cameras to a back-end server, with AI analysis and decision-making handled on the server.
  • Adopt a hybrid approach, using IP cameras with an edge AI box to perform some of the AI analytics workload at the edge while determining automated response actions via the back-end server.

In the end, the hybrid option was selected as the best balance of cost and performance. Port operators saw a significant improvement in traffic flow at the vehicle check-in counter. They also resolved the longstanding safety issue of dock workers repeatedly entering a potentially hazardous area. In the year prior to implementing the solution, the port had recorded more than 50 cases of workers violating the restricted area. Since the implementation, the number of incidents has fallen to zero.

Gamma’s technology partnership with Intel helped bring the solution to market—and made it easier to offer flexible deployment options. “Intel engineers helped us to optimize our AI models and offered benchmarking tools that allowed us to select the exact hardware specifications we needed for our deployment,” says Yan. “The benchmarking support on hardware performance has been a real help in winning over customers, because we can enable them to control costs and build tailor-made solutions based on their needs.”

Smarter, More Sustainable Cities

The world’s environmental and shipping challenges will become more critical in the years ahead. Scalable, customizable solutions that improve efficiency at ports will likely be of great interest to port authorities, city managers, and systems integrators (SIs).

The flexibility of these solutions holds another benefit for governments and SIs, because they are based on technologies that can easily be repurposed for other smart city use cases.

“There’s plenty of overlap between a smart port management solution and use cases in smart cities, manufacturing, and logistics,” says Yan. “These systems can also be used to ensure security at warehouses, improve worker safety in factories, or manage traffic flow in communities—making our cities smarter, safer, and more sustainable.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI Advances Convergence of Cyber and Physical Security

The advancement of AI technology is driving a transformational shift across every industry, including the security industry. As we navigate the changes and opportunities, our approaches and practices will need to change with them. To help us make sense of this new world, we talk to Kasia Hanson, Global Sr. Director, Physical and Cybersecurity Ecosystems at Intel.

Hanson is an influencer at the forefront of the global security industry. In 2024 she was included for the third time by the Security Industry Association on the Women in Security Forum “Power 100” list for advancing diversity, inclusion, innovation, and leadership in the community. Her work at Intel is all about helping the ecosystem grow, advance, and leverage the latest security technologies by creating an advanced portfolio of solutions with Intel’s ecosystem of partners. Hanson also advises integrators and security practitioners on converged practices and AI in security. We talk about the changing dynamics in the world of physical security, including the convergence of physical and cybersecurity, and Intel’s role in helping customers and partners overcome today’s challenges and capitalize on tomorrow’s opportunities.

Let’s start out by talking about the convergence of physical security and cybersecurity.

As the threat landscape continues to grow, AI is a tool for security teams to detect, respond to, and mitigate threats, but bad actors are also using AI to perpetrate attacks. As AI permeates all aspects of our world, threats continue to get more and more sophisticated, and we must protect both physical and digital assets.

The broad adoption of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) has created an interconnected ecosystem of physical and cyber systems, blurring the lines between the two. Security threats are evolving and moving lower in the stack, aimed at physical vulnerabilities. As physical security and cybersecurity become increasingly interrelated, it’s no longer viable to separate cybersecurity and physical security policies and practices.

With the landscape and threats evolving quickly, we aim to arm security practitioners with tools to create layers of defense—whether it’s integrated silicon security and product assurance or advising on holistic security practices and solutions. Our goal is to help the defenders defend.

What big security challenges do organizations face?

Security organizations face many challenges: ransomware, insider threats, malware and viruses, supply chain attacks, data breaches, unauthorized access and intrusions, physical sabotage, tailgating and social engineering, facility breaches, device tampering, and environmental hazards (fire, weather). Both the CSO and CISO are charged with protecting all facets of their organizations, so formal collaboration between the physical and cybersecurity teams is critical to improve efficiency and resiliency and achieve a greater return on security investments.

As new AI and computer vision technologies are developed and deployed to combat security threats, how are organizations complying with privacy regulations and laws?

There are a couple of areas to this. The first is the ethical development of AI. This should be the number-one priority in the development and use of AI in any scenario. We all play a role to ensure that AI is being developed in an ethical and equitable way with trustworthy systems. I invite you to read more about Intel’s responsible AI policies and approach.

To help security practitioners protect data and privacy, Intel builds security features into our hardware and software, so data can be protected and compliant with privacy laws such as GDPR in Europe and industry-specific regulatory requirements in sectors like healthcare and financial services. Confidential computing can help practitioners protect data and stay compliant with regulatory requirements. For example, Intel® Software Guard Extensions (Intel® SGX) unlocks new opportunities for business collaboration and insights—even with sensitive or regulated data. Intel SGX is the most researched and updated confidential computing technology in data centers on the market today, with the smallest trust boundary.

And Intel® Trust Domain Extensions (Intel® TDX) helps to increase confidentiality at the VM level, enhance privacy, and give organizations control over their data. It enables isolation of the guest OS and VM applications, which removes access from the cloud host, hypervisor, and other VMs on the platform.

What are some examples of the types of partners you work with?

Intel has an extensive ecosystem, including ODM, OEM, ISV, and systems integrator partners delivering innovative solutions that help security practitioners add layers of defense and deliver new business value. We work with the ecosystem to bring innovative software capabilities that can leverage both hardware and software and provide new outcomes in a more secure way.

Software has created an opportunity for the market to offer more advanced business outcomes, lower total cost of ownership, and accelerated time to market. We work with ISVs to help them develop AI and computer vision capabilities using the Intel® OpenVINO toolkit and the Intel® Geti platform for model training at the edge. Then there’s Intel® SceneScape, a new software tool that enables vision-based AI to gain spatial awareness from sensor data and provide live updates to a 4D digital twin of your physical space.
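
As a rough sketch of what ISVs work with, the snippet below runs a single inference through the OpenVINO Python API. The model file name is hypothetical; in practice the IR file would come from exporting a model trained in Intel Geti or another framework.

```python
# Minimal OpenVINO inference sketch (model file name is hypothetical).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection.xml")          # hypothetical IR file
compiled = core.compile_model(model, device_name="CPU")  # or "GPU"

# Build a dummy frame matching the model's (static) input shape, e.g. NCHW.
input_shape = list(compiled.input(0).shape)
frame = np.random.rand(*input_shape).astype(np.float32)

# Run inference and grab the first output tensor.
results = compiled([frame])[compiled.output(0)]
print("Raw output shape:", results.shape)
```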

The security ecosystem serves many different verticals, and we work with the ecosystem to deliver optimized solutions that serve markets such as retail, manufacturing, education, and healthcare. Genetec, for example, serves education, cities, government, entertainment venues, and commercial businesses. Its Genetec Security Center is an open-architecture platform that unifies security systems, sensors, and data into a single interface. This includes IP-based video, access control, automatic license plate recognition (ALPR), intrusion detection, intercoms, and more. We work closely with them to optimize their software and hardware with Intel technology, accelerating new business outcomes for security practitioners.

Another partner we work with is Axis Communications, one of the leading camera vendors in the world. We can leverage their cameras with Intel SceneScape for scene intelligence and move beyond traditional vision-based AI—bringing spatial awareness from sensor data into a 4D digital twin and creating new opportunities for security practitioners. We also work with AI ISVs like EPIC iO, which delivers advanced analytics use cases. We’ve helped them optimize their software capabilities with OpenVINO and validated the company’s solutions on Intel-based Dell hardware. Working hand in hand with them enables us to deliver new business outcomes at the edge using advanced capabilities.

We also work with the cybersecurity ecosystem to develop solutions on Intel platforms with software optimization. Check out the latest Cybersecurity Ecosystem Catalog to see how we are working with partners like CrowdStrike to protect endpoints leveraging Intel Threat Detection Technology.

In closing, is there anything else you would like to add?

The cyber and physical security landscape is changing faster than ever. When I advise our partner ecosystem on AI and security technologies, I always emphasize that we are on a journey together. Intel is uniquely positioned to lead the technology industry in a security evolution because of our vast product portfolio and end-to-end ownership of product development. We believe that system trust is rooted in security—if hardware isn’t secure, a system cannot be secure. That’s why our goal is to build the most secure hardware on the planet, enabled by software—and we’ve made unparalleled investments in people, processes, and products to meet this goal.

According to ABI Research, Intel leads the silicon industry in product security assurance. I invite anyone making security product decisions to review the latest ABI Research report: Embracing Security as a Core Component of the Tech You Buy and the Intel 2023 Product Security Report.

Additional resources:

Intel’s Cybersecurity Ecosystem Partners

Physical and Cyber Convergence in the latest eBook from Intel and Credo Cyber Consulting

The key role AI and other technologies play in both physical and cybersecurity in Kasia’s article published in the Influencers Edition of the Security Journal Americas.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Fair and Transparent Assessments with AI Proctoring

A traditional classroom exam requires supervision from an educator or test proctor to ensure integrity. For online education, remote-proctoring software fills that need. It works by recording test-takers via webcam and using remote human proctors or an AI algorithm to monitor their activity. Secure, remote proctoring provides the watchful eyes needed to maintain academic integrity for both educators and students—even when they’re not in the same room.

Online assessments are particularly vulnerable to security issues and academic dishonesty. To combat these challenges, educators need digital tools that enable virtual proctoring to help evaluate students’ learning outcomes while maintaining integrity. Online proctoring can use AI, software, a live human proctor, or any combination.

“Proctoring ensures that assessments are conducted in a fair and transparent manner,” says Deepak MK, Vice President Data Science at ExamRoom.AI, an AI EdTech company. “By monitoring test-takers, proctoring helps uphold the integrity of educational and professional credentials.”

The AI Proctoring Process

ExamRoom.AI provides schools and organizations with a comprehensive, streamlined, and highly secure platform to proctor exams around the world. Educators can deliver assessments and track student outcomes through a learning management system (LMS) and a web-based proctoring platform.

Test-takers log in and participate via webcam while a human proctor takes them through an identity verification process. The platform restricts test-takers from tampering with webcams, copying and pasting text, and screen sharing. “Beyond those basics, we’ve developed secure algorithms to control hardware and software kernels, along with biometric monitoring such as fingerprinting, facial scanning, and voice recognition to further protect against cheating,” MK says.
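
For a sense of what one basic webcam check might look like, here is a minimal OpenCV sketch that flags frames where the test-taker is absent or a second person appears. ExamRoom.AI’s production algorithms are proprietary and far more sophisticated; this only illustrates the general pattern.

```python
# Simplified proctoring check: count faces in a webcam frame and flag
# anomalies. A real system would also track gaze, audio, screen activity, etc.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def check_frame(frame) -> str:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "ALERT: test-taker not visible"
    if len(faces) > 1:
        return "ALERT: multiple people in frame"
    return "OK"

cap = cv2.VideoCapture(0)   # default webcam
ok, frame = cap.read()
if ok:
    print(check_frame(frame))
cap.release()
```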

Because many schools and enterprises use other edtech tools, the platform integrates with popular LMSs, including Blackboard and Canvas. ExamRoom.AI also works with customers to customize APIs and white-label the platform’s user interface for customer branding—for example, adjusting details such as font sizes and logos.

Individual Privacy: A Platform Fundamental

Personal privacy is, of course, a significant concern for both students and educational institutions. ExamRoom.AI adheres to relevant data protection regulations and standards, such as GDPR, COPPA, FERPA, ISO 27001, ISO 9001, and SOC 2, depending on the jurisdiction and the nature of the data being processed. Alongside regulatory compliance, the platform protects individuals’ information in several ways:

  • All data transmitted through ExamRoom.AI is encrypted to ensure sensitive information remains secure during transmission.
  • Users are informed about data collection practices and provide explicit consent before any data is collected or processed.
  • Wherever possible, personal data is anonymized to prevent direct identification of individuals (see the sketch after this list).
  • Strict controls limit access to personal data only to authorized personnel who require it for valid purposes.
  • The platform collects and processes only the minimum amount of personal data necessary to provide its services.
  • As an ISO-certified company, it undergoes regular audits and assessments to identify and address any potential vulnerabilities or compliance issues related to data privacy.
  • Users are provided with clear information about how their data is being used, including purposes, recipients, and retention periods, promoting transparency and trust.
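
As a concrete illustration of the anonymization and data-minimization points above, the sketch below replaces a direct identifier with a keyed pseudonym and keeps only the fields a downstream service needs. The field names and key handling are invented for illustration and are not ExamRoom.AI’s actual schema.

```python
# Illustrative pseudonymization + data minimization for an exam record.
import hashlib
import hmac

PSEUDONYM_KEY = b"rotate-me-and-keep-in-a-vault"   # illustrative secret

def pseudonymize(value: str) -> str:
    """Keyed hash: the same student always maps to the same token, but the
    token cannot be reversed without the key."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Keep only what scoring needs; drop everything else."""
    return {
        "student_token": pseudonymize(record["email"]),
        "exam_id": record["exam_id"],
        "score": record["score"],
    }

raw = {"name": "Ada Lovelace", "email": "ada@example.edu",
       "exam_id": "MATH-101-FINAL", "score": 92, "ip_address": "203.0.113.7"}
print(minimize(raw))   # name and IP address never leave this function
```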

Enabling Accessibility and Personalized Learning

One of the biggest challenges in today’s education climate is adapting to changing technologies and methodologies while ensuring that materials are accessible to all learners. ExamRoom.AI addresses this hurdle by providing a user-friendly experience for students and enabling educators to deliver multimodal content and adaptive learning paths. For example, an online assessment might give different questions to different students, depending on how they answered the previous question. “Our tools help educators accommodate different learning styles and abilities,” MK says. “They can also use the system to gather data that helps overcome learning gaps.”
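
A toy sketch of that adaptive-path idea: the next question’s difficulty rises after a correct answer and falls after an incorrect one. Real adaptive engines use richer statistical models (such as item response theory); the question bank and step rule below are invented purely to show the branching logic.

```python
# Toy adaptive assessment: difficulty follows the test-taker's performance.
QUESTION_BANK = {
    1: "What is 2 + 3?",
    2: "What is 12 x 11?",
    3: "Solve for x: 3x - 7 = 20",
}

def next_level(current: int, answered_correctly: bool) -> int:
    step = 1 if answered_correctly else -1
    return min(max(current + step, 1), max(QUESTION_BANK))  # clamp to bank

level = 2
for correct in [True, True, False]:          # simulated answer history
    level = next_level(level, correct)
    print(f"Next question (level {level}): {QUESTION_BANK[level]}")
```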

“We empower #educators to tailor their #teaching approaches to foster critical thinking, problem-solving, creativity, and collaboration—the skills essential for success in the 21st century” – Deepak MK, @examroomai via @insightdottech

Students, educators, and businesses are under pressure to prepare students and employees for a rapidly changing job market as technology plays a dominant role in society. “ExamRoom.AI has tools for skills assessment, career guidance, and professional development,” MK says. “We empower educators to tailor their teaching approaches to foster critical thinking, problem-solving, creativity, and collaboration—the skills essential for success in the 21st century.”

Building for the Future

Intel technology plays a crucial role in the ExamRoom.AI solution, including state-of-the-art GPU hardware that enhances the speed and efficiency of the AI model training and inference processes. The company collaborates with Intel to optimize various machine learning models, including those for object detection, semantic search, tag generation, and translation. The partnership with Intel has been instrumental in fine-tuning these models to improve performance, efficiency, and accuracy.
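
As a hint of how the semantic-search piece works, the sketch below matches a query to the closest reference text by cosine similarity between vectors, so a match can be found even without exact keyword overlap. Production systems use learned embedding models; the tiny hand-built vectors here are purely illustrative.

```python
# Toy semantic search: nearest neighbor by cosine similarity.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings for three reference answers (in practice these would
# come from an embedding model).
corpus = {
    "photosynthesis definition": np.array([0.9, 0.1, 0.0]),
    "cell division stages":      np.array([0.1, 0.8, 0.2]),
    "newton's second law":       np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.85, 0.15, 0.05])   # e.g., "how do plants make food?"

best = max(corpus, key=lambda name: cosine(query, corpus[name]))
print("Closest match:", best)           # -> photosynthesis definition
```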

As education continues to embrace AI-powered tools such as tailored feedback and adaptive learning paths, MK looks forward to continuing the collaboration with Intel: “Virtual proctoring and remote assessment solutions will keep evolving to ensure integrity in online testing environments. Supported by Intel, we’ll continue to retrain and refine our AI models with extensive data sets to make sure they stay effective and relevant.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI Everywhere—From the Network Edge to the Cloud

At a recent launch event, Intel CEO Pat Gelsinger introduced not just new products but the concept of “AI Everywhere”. In presenting the 5th Gen Intel® Xeon® processors and Intel® Core Ultra processors, Gelsinger talked about how Intel is working to bring AI workloads to the data center, the cloud, and the edge.

Now, in a conversation with Gary Gumanow, Sales Enablement Manager – North American Channel for Intel® Ethernet Products, we learn more about the idea of AI Everywhere and the role of the network edge. Gary has spent his career in networking, which may be why he’s also known as “Gary Gigabit.” With a background in systems integration at some of the top law firms in New York City, Gary works closely with Intel distributors and solution providers. Gary says understanding the technology, customer needs, and how products move through the channel is near and dear to his heart.

When Intel talks about AI Everywhere, from the data center to the edge device, what does that mean in terms of the network edge?

AI Everywhere means from the edge to the network core to the data center. By the edge, we’re talking about the endpoints: sensors, cameras, servers, PCs, adapters—the devices that connect to the network. And the core refers to the components that provide services to the edge. AI in the data center is nothing new; the data center has the power and storage to handle big AI workloads. But inferencing at the edge is new, and there are a number of challenges, from processing power in compact, rugged PCs to the time-sensitive networks and connectivity needed to transport data back and forth.

And there are several areas that impact the network, and the network matters to each of them. What is AI going to mean to an edge device? The AI model is only as good as the data that can get to it. But how does that data get to the edge device, and how does it get back to the data center?

It’s important that you’re putting the optimal amount of smarts there—right-sizing the architecture so as not to burden the network between the data center and the edge. This means running AI Everywhere with the right CPUs while lowering cost and increasing performance.

We’re continually working on improving bandwidth, data security, and confidential computing in our network devices, so that when they go down to the edge, they’re secure, they have low latency, and they have the performance that’s required to connect the data center with the edge. And we’re doing it in a way that’s low power and sustainable in terms of price performance per watt.

Let’s expand this idea to the factory, where we’ve got AI and computer vision—taking all of this data and inferencing it at the edge. What does the network edge look like here?

Believe it or not, some factory floors are so large they can have their own weather patterns. And one of the things that’s really hot right now for manufacturing and automation is going the distance between robotic devices. So how can these devices communicate when they are football fields apart from each other? And how do you get real-time data out to those edge devices, which are important to the assembly line?

This is a reason why manufacturers are deploying private 5G networks in factories—so that they can communicate from a local server or a data center all the way out to these endpoints. But this type of communication takes timing accuracy, low latency, and performance.

So, one cornerstone of 5G virtualized radio access networks (vRANs) is precision timing technology, and Global Positioning System (GPS) devices are key components of a precision timing network. Essentially, networks have an atomic clock, typically a network appliance, and all of your devices synchronize with that appliance. But that’s expensive and proprietary.
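
The arithmetic underneath precision timing protocols such as IEEE 1588 Precision Time Protocol (PTP) shows what that synchronization involves: from the four timestamps of one two-way message exchange, a device can compute both its clock offset from the grandmaster and the network path delay. The timestamps below are made up for illustration, and a symmetric path is assumed.

```python
# Clock offset and path delay from a PTP-style two-way exchange.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: device receives it;
    t3: device sends Delay_Req; t4: master receives it.
    Assumes the forward and reverse path delays are equal."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # device clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Device clock runs 15 us ahead; true one-way delay is 5 us.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=20e-6, t3=40e-6, t4=30e-6)
print(f"offset = {offset * 1e6:.1f} us, path delay = {delay * 1e6:.1f} us")
```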

The other thing that’s important for 5G is forward error correction (FEC), which looks ahead in the flow and corrects for errors—heading them off at the pass. So you’ve got precision timing and you’ve got forward error correction. All of this can get complicated.
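
To see the FEC principle in isolation, here is a toy example: redundant parity travels with the data so the receiver can repair a loss without a retransmission. 5G systems use far stronger codes (such as LDPC) implemented in hardware; this XOR scheme only illustrates the idea.

```python
# Toy forward error correction: one XOR parity block can rebuild any single
# lost block of equal length.
def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

data = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(data)                 # transmitted alongside the data

# Receiver loses pkt2 but rebuilds it from the surviving blocks plus parity.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == b"pkt2"
print("recovered:", recovered)
```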

How is Intel making it less complicated to deploy private 5G in factories as one example?

We’ve built these functions directly into our Ethernet products. For example, take the atomic clock technology that was appliance-based and is now integrated into some of our network adapters. You can eliminate those appliances and have the timing accuracy required for 5G networks built in. It saves power, it saves money, and it simplifies the network design because you don’t have to have all of these devices coming back to an atomic clock; it can be out on the nodes where it needs to be. GPS timing synchronization and FEC are also built into our network adapters and devices.

We have this progression of shrinking the requirements of discrete components down to a smaller set of things. So now we have Intel® vRAN Boost doing a lot of the work via an accelerator on the 4th Gen Intel® Xeon® processors. This fully integrated, high-capacity acceleration increases the performance of the calculations required to run Ethernet over vRAN. And again, this reduces component requirements, power consumption, and overall system complexity.

It’s like the progression of everything at Intel: consolidating into the processor, or into a smaller number of components, simplifying it, and making it easier to deploy. Another example is how Ethernet is finding itself embedded in Intel® Xeon® D processors. The system-on-chip (SoC) processors have the logic of an Ethernet controller to support 100 gigabits on the actual chip.

It’s sized for a network appliance or edge device versus the cloud data center, so it has fewer cores and requires less power. And it’s specialized to handle network flows and network security. The Intel Xeon D processor is right-sized for where it should be sold and embedded. You can deploy it in medical sensors, gateways, industrial PCs, on the factory floor—anywhere you need near real-time actionable insights.

Is there anything you would like to add in closing?

We feel very strongly about interoperability with multiple vendors. In fact, in the AI space, we’re doing something called HPN, or high-performance networking: stacks based on open APIs and open software. We’re working with multiple vendors like Broadcom, Arista, Cisco, and a whole bunch of others. And there’s the Ultra Ethernet Consortium, open to organizations that want to participate in an open ecosystem and support AI in the data center.

My customers tell me they like the open approach Intel is taking with the industry. This consortium, coming about to bring data center Ethernet into an open environment, is critical for the industry and for AI to really extend out as far as it can go.

Clearly, Ethernet has stood the test of time because of its five principles: backwards compatibility, an insatiable need for bandwidth, interoperability, open software, and evolving use cases. The network—whether it’s 802.11, Gigabit Ethernet, or 100 Gigabit Ethernet—is the fabric that, alongside 5G, puts this whole story together to bring AI Everywhere, from edge to cloud.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

QSRs—Voice AI Will Now Take Your Order: With Sodaclick

Join us for our very first episode of “insight.tech Talk” where we discuss how voice AI transforms the QSR experience—boosting efficiency and creating a smoother experience both for customers and employees.

Just as our new name reflects the ever-changing tech landscape, this episode explores how voice assistants enable QSRs to take orders faster and more accurately, reducing staff workload and handling complex requests. The result: shorter lines, happier customers, and more consistent service.

Listen in as we explore benefits, address potential challenges, and peek into how voice AI impacts other areas of the industry.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guests: Sodaclick and Intel

Our guests this episode are Salwa Al-Tahan, Research and Marketing Executive for Sodaclick, a digital content and AI experience provider; and Stevan Dragas, EMEA Digital Signage Segment Manager for Intel. At Sodaclick, Salwa focuses on raising awareness of the benefits of conversational AI across industries. Stevan has been with Intel for more than 24 years, where he works to drive development of EMEA digital signage and showcase the benefits of Intel tools and technologies.

Podcast Topics

Salwa and Stevan answer our questions about:

  • 6:02 – How voice AI enhances QSR experiences
  • 12:53 – Voice AI infrastructure and investments
  • 15:08 – Technological advancements making voice AI possible
  • 20:15 – Real-world examples of voice AI in QSRs
  • 24:57 – Voice AI opportunities beyond QSRs

Related Content

To learn more about conversational voice AI, read Talking Up Self-Serve Patient Check-In Kiosks in Healthcare. For the latest innovations from Sodaclick, follow them on Twitter at @sodaclick and on LinkedIn. For the latest innovations from Intel, follow them on Twitter at @Intel and on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to the “insight.tech Talk.” I’m your host, Christina Cardoza, Editorial Director of insight.tech. And some of our long-term listeners probably have already picked up that we’ve updated our name from the IoT Chat to “insight.tech Talk,” and that’s because, as you know, this technology space is moving incredibly fast, and we wanted to reflect the conversations that we will be having beyond IoT. But don’t worry, you’ll still be getting the same high-quality conversations around IoT technology, trends, and latest innovations. This just allows us to keep up with the pace of the industry.

So, without further ado, I want to get into today’s conversation, in which we’re going to be talking about voice AI in quick-service restaurants with Sodaclick and Intel. So, as always, before we jump into the conversation, let’s get to know our guests. Salwa from Sodaclick, I’ll start with you. What can you tell us about yourself and Sodaclick?

Salwa Al-Tahan: Hi, Christina. Thank you. So I’m Salwa Al-Tahan, Head of Marketing and Research at Sodaclick. Thank you for inviting me to join this podcast. So, Sodaclick is a London-based AI company. We actually started, for those that don’t know, in 2017 as a digital-content platform. But AI was always part of their vision. And in 2019 they actually opened up the AI division, and that was primarily focusing on voice AI, although that was quite linear, it was command-driven. And they always knew that it needed to be more natural, more human-like, more conversational.

So, the co-founders are really hot on being at the forefront of technology, always innovating, always looking to improve. And, with the advent of generative AI, they started fine-tuning their LLM, and that’s where we are now. Now we’re a London-based company with a global presence.

Christina Cardoza: Great! Looking forward to digging into some of that. Especially making voice AI more natural, because I’m sure a lot of people have had the displeasure of those customer service voice AI chatbots that you’re always screaming at on the phone, trying to get it to understand you, or trying to get where you need to go and trying to talk to a human. So, looking forward to how that’s not only being brought into the restaurant space, but I know Sodaclick does things in many other industries. So we’ll dig into that a little bit in our conversation.

But before we get there, Stevan, welcome to the show. What can you tell us about yourself?

Stevan Dragas: That’s interesting. So, Stevan Dragas. Why it’s interesting is because over the last 24 years in Intel, I’ve done so many exciting roles and positions. And on the recent visit, where I had the pleasure of taking Sodaclick to join Intel Vision in U.S., Ibrahim, one of the founders of Sodaclick, actually reminded me that at the time, in 2019, when they moved into the voice, that that was the first time he met me. I kind of, unfortunately, forgot that. But he reminded me that that time we met for the first time, and I kind of gave them some hints, advice, what would work, what did not work. And, unfortunately have to say, I’m almost glad that they listened to me at that time.

Because with what we are doing, what Sodaclick at the moment is doing, we cover both edge, from the edge to the cloud, and driving ultimately the new usage models, driving user experience, driving benefits starting from the end user to retailer to the operator of QSR. But ultimately we are driving new experience in usage models and benefits, and changing the industries.

Now, my role is basically to promote and support Intel platform products, technologies, software, across multiple vertical industries, from which QSR is just one of the vertical industries. So I go horizontal, and I have a number of companies which are just as exciting as Sodaclick, but they’re one of my, let’s say, crown jewels that I am actually pleased and happy that over the last couple of years we really accelerated and will continue.

Specifically because we are now looking into adding some of the new products that Intel actually brought to the market. Not necessarily just the new products that are for the cloud, but also introducing for the first time in the computing industry, to call it, the new products which actually have not just anymore CPU and GPU but also NPU. And in May Sodaclick will be demonstrating and using their product on this new platform. It used to be called Meteor Lake, but it’s actually Core Ultra platform.

And really exciting to work across all of these industries, specifically with Sodaclick. They have been so good, and I’m happy to also say that we are looking to a lot more than just the QSR-type of restaurants, because many industries and vertical industries’ solutions would benefit from some kind of conversational discussion from asking an open question, rather than pre-scripted, menu-driven, type of conversation with the machine.

Salwa Al-Tahan: Yeah, command-driven is so linear and boring. And, like you say, frustrating to customers as well. These natural interactions with the conversational voice AI is definitely the future and the way it is being deployed at the moment.

Christina Cardoza: Yeah, absolutely. So, let’s get into that a little bit. Specifically, looking at the quick-service restaurants: how it is being deployed and used in those areas. Salwa, if you can give us a little bit more what Sodaclick is doing in this space? And how you’re seeing voice AI improve or enhance the QSR experiences?

Salwa Al-Tahan: There’s two aspects to the QSR industry which are benefiting from the integration of voice AI. One is in-store. So, we’re seeing a handful, I would say, of QSR brands actually integrating voice AI into their in-store kiosk to make them truly omnichannel. The other aspect is at the drive-through. So, it becomes the first interaction for a customer as they drive up to the order-taking point—you’ve got your conversational voice AI assistant there. These are the two main focuses at the moment.

And, to be honest, each one comes with its different benefits, I would say. And its different benefits both to the business and to the customer. So, at the in-store kiosk it’s faster. If you think about, if you know exactly what you want going up to a kiosk, having to scroll through the UI, adding extra lettuce, or removing cheese, or these little things—no ice—you actually have to scroll through and it takes time. Whereas it’s faster for you just to say it. And that faster interaction means that you can get a faster throughput as well. You can serve more customers, reduce wait times.

Also, in in-store kiosks, it becomes more inclusive. Having voice AI as an option to customers means that any customers with visual impairments, physical impairments, sensory processing disorders, even the elderly who struggle with accurately touching those touch points to place an order—it becomes much more inclusive to them. They’re able to use their voice for that interaction. So these are some key benefits obviously, as well as upselling opportunities.

At the drive-through it’s a completely different interaction. It’s again polite, it’s friendly, it’s allowing businesses to unify their brand image with excellent customer service. It’s improving that order accuracy. I know from the QSR annual report for their drive-throughs in 2023, order accuracy improved by 1%. It was 85% in 2022, it moved up to 86% in 2023. With voice AI we’re actually able to bring that up to 96%-plus.

And that is because at the order point it’s quite a repetitive task for members of staff. They’re just constantly doing the same thing. That means that sometimes, unfortunately, you’re not getting the friendly customer service, you’re not getting that bubbly person at the end of their shift. Humans are humans, though. They might be having a bad day. They might not have all the product information that you’re after.

Whereas with the conversation voice AI model, we’re able to consistently give polite, friendly customer service—a warm, human-like interaction. We’re actually able to bring in voices that are neural voices, which are so human-like, most people wouldn’t even know that they’re talking to an AI. We’re able to offer it in 96 languages and variants, which means that you are able to serve a wider community within the area as well, without any order inaccuracies of mishearing something, or asking them to repeat themselves. Language is another really big factor, both in-store and at the drive-through.

Stevan Dragas: Salwa, if I may add, it increases—

Salwa Al-Tahan: Of course, please do.

Stevan Dragas: From working very closely with Sodaclick, it also increases greatly the accuracy. It removes the necessity from the operator to necessarily closely listen and try to understand, but the same time reduces the time to delivery, because the moment when you are already having three, four articles listed on the screen, the operator can start already making the order, working on the product, rather than listening for the complete order to be finished. Because the technology is now stepping between, helping both sides.

Salwa Al-Tahan: Absolutely, it’s streamlining operations, both for the business and for the customer. So you’re absolutely right, Stevan, that it’s a benefit to both. And also alleviating that pressure on members of staff as well. So it can all, like you say, it can be stressful as well, inputting all of that information. And although it is repetitive, it can be stressful, especially if you’ve got a large queue. People honking their horns and—they just want their food fast, and that’s what it is all about in the QSR industry, getting your food fast.

So by being able to improve order accuracy, it has that knock-on effect of the other benefits: streamlining it for both the business and the customer, but also increasing speed of service and quality of service as well. And it allows members of staff that, from being taken away from that position at the order point, we’re not actually removing a member of staff, we’re just repurposing them into the kitchen so that they can focus, exactly like you say, on preparing the orders, on other pressing tasks that might be needed in-store. And also improving the quality of customer service.

Christina Cardoza: Yeah. As a customer myself—I guess in preparation to this conversation a little bit, I went through a drive-through, quick service restaurant last night. And I used the app before I left the house and ordered my food, and then went through the drive-through to do it, but I wanted a sandwich with pickles on it and, like you said, I didn’t want to go through the app and figure out how to add pickles to it, but then I also didn’t want to drive through and talk to an employee, because then—just my own thing—I feel embarrassed, or that I’m being a difficult customer, asking for these modifications and customization. So if it was an AI I was talking to, I would’ve been much more comfortable to order the sandwich that I wanted.

And, to your point, it’s that customer experience, but I’m curious—you talked a little bit on the business level, the benefits that they get and how they can redistribute their employees elsewhere. How can we actually implement this voice AI? What is Sodaclick doing to add this on to, maybe, the technology that’s already in there? Or is there investments that have to be made to the infrastructure to start bringing voice AI to the business and to the customers?

Salwa Al-Tahan: So, actually, if a QSR doesn’t already have this—the technology—already, we can work with them and integrate into their existing technology.

Stevan Dragas: So, if I may add to that existing point: customer-interaction points, where they either interact or make purchase orders in existing stores, or even if it’s drive-through, what Sodaclick from the technology side brings is the microphone, which is a cone microphone, which focuses in very noisy environments to the person. And it’s actually doing that with the new algorithms developed with Sodaclick, driving very, very high percent of the accuracy. But not only accuracy of the person, but also recognition on the accents and different words. In the same environment, there could be multiple languages.

From the technology side, they also integrate with the APIs, with the stock of the products, directly integrate with the products available—but not only available, they integrate with the analyzing of existing products. For instance, are they protein-rich? Are they rich in some other minerals or whatever? Again, specifically now talking about QSRs. And from the technical side also, they look into what the existing infrastructure is. Maybe the existing infrastructure is enough. Or maybe they need, so-called, a little bit more horsepower, in which case just the computing part needs to be up-leveled to be able to process all the information in order to drive this near real-time conversational kind of usage model.

Christina Cardoza: Great. And, Stevan, you mentioned some of the Intel technologies that are coming out to help do this more. Because I’m assuming, like you mentioned, there’s a microphone, then we have cameras, there’s algorithms all happening at the backend to make sure that the software can accurately understand what the customer is saying and be able to put that all down and get their order right. So, what are some of the technological advancements coming that make this so that it’s fast, it’s accurate, it’s real time, that it’s natural? How is Intel technology making this all possible and helping companies like Sodaclick bring this to market?

Stevan Dragas: So, there are a couple of things that actually directly play on the technology side. One is effectively physics. In order to drive real-time or near real-life-experience conversational experience of the users, of the customers, decision-making needs to be done at the edge. Processing, running of those LLM models need to be done at the source—at the source, which is the edge-integration point, the communication point.

And Intel has recently introduced—and this is an industry first—new processors which have now three cores. And they are all in the same chip, which is basically CPU as it traditionally was, GPU on top of it, and then NPU. NPU is effectively a neural processing unit, which effectively enables AI decision-making being done at the core, at the edge.

So, the Core Ultra platform products are something that are coming out. There are already a number of them available in the market, but they will be even more widespread with driving this AI user experience, conversational AI. On the other hand, there are a number of products which are for the cloud, for the edge, for the server, but ultimately when I said physics, you literally have latency between transmitting data from the point where you make the order, where you conversate, and you don’t really have time.

I don’t know if others are like myself; I am not very patient. Sal, you are laughing because you know me. But ultimately, if you need to say something and then wait for a couple of seconds for that message to be transferred to the data center, or to the cloud, or somewhere away, and then the response needs to come back—normally I go without lunch if there is a queue in the line. But that simply may be me.

But ultimately if you want to have a conversation, conversational AI, that needs to be real, as long as that processing is happening at the edge, and this is what Intel is bringing—bringing not only products, but also the Intel® Tiber edge platform and the OpenVINO framework, which Sodaclick is using. So ultimately not necessarily just doing the technology for the sake of technology, but using the technology to enable usage models, to enable experience, to drive the smile, to drive the repeat return to the either same environment or the similar environments, to literally break out of the box of traditional “read the menu and repeat what is said.” Or if you don’t read, I don’t understand. So this is where basically Sodaclick is coming in with their software solution.

Salwa Al-Tahan: Just like Stevan was mentioning, I think what a lot of brands were doing at the drive-through order point was reducing their menus. But with conversational voice AI you can actually still have that full menu and have your customers interact with that and choose even maybe new favorites, with the opportunities for upsells. And it’s a lot more intuitive as well. And, like Stevan was saying, using OpenVINO, it means that we’re able to create the solution and then scale it across the brand.

Stevan Dragas: Even to add to that, when I mentioned a couple of times all the user experience, imagine if you are basically a return customer. And maybe there is a loyalty program, maybe, I don’t know, some special. And imagine you come back there, and rather than having to go through your three, four, five items, how about the sign says, “Hey, welcome back, Christina!” All clearly because you either tapped your card, so it knows who you are, and it says, “Hey Christina, shall we have the same, like your favorites?” Or something like that.

So it automatically, even for you—oh yeah, I don’t need to go through the pain of repeating everything. It already knows, and it suggests, and as Sal mentioned, maybe it can actually focus on maybe upsell. Say, “Hey, how about would you like to try some new product? Do you want to experiment?” Or ultimately there is even the option of detecting the facial expression and pretty much trying to drive: a happy customer is a good customer, is ultimately buying more.

Christina Cardoza: Yeah, absolutely. To your point too, if the machine can recognize who you are and what your order has been, and there was maybe a limited-time offer or a new menu item that came out that is similar to what you ordered, they can also give those personalized recommendations: “Would you like to try this?” So this all sounds really great and interesting.

Salwa, I’m curious, do you have any customer examples or use cases of this actually in action that you can share with us?

Salwa Al-Tahan: Yeah, absolutely. So, we’ve been working with Oliver’s, which is an Australian brand. They’ve got over 200 stores, both in-stores and drive-throughs. And we’ve deployed the conversation voice AI in their in-store kiosks and also at the drive-through. It’s actually been really, really exciting working with Oliver’s, because they were on a completely new digital transformation journey. So we’ve been with them along that way, including their digital signage.

And what was really cool about Oliver’s is, although it’s English, we’ve been able to create the persona of the AI assistant to be very Australian. And he’s got his own personality: he’s called Ollie. And he understands Australian, the slang words; he’ll sort of greet you with “G’day, mate!” and “Cheers!” And just in a very natural way to the local customers. And that’s been really, really cool.

The other great thing about working with Oliver’s was their requirements, their KPI, was quite different to working with, let’s say, KFC, who we also work with. They actually, because they are a healthy fast food chain, they know their customers are interested to know ingredients lists; they want to know calorie count and product—like Stevan mentioned—protein information and things like that. So we were able to integrate with Prep It, which is their nutritional database, to provide that information for customers in a very quick, accurate, and fast manner. And that’s something else that’s—it’s really cool.

Again, I mentioned, so working with KFC. We’ve been working with KFC in the MENA region, specifically in more locations to come. But we’ve got deployments in Pakistan and Saudi and across the UAE in the different languages. And their requirements were different. They were more focused on speed of service and improving order accuracy. And, again, with conversational voice AI at the drive-through, we were able to achieve that for them. And it’s going well.

Stevan Dragas: So Sal, I don’t know if that’s a public—well, technically, it’s not—but it’s also looking into where else—how to expand. And ultimately it’s not just the QSR industry, but it’s every place where there is need for either information, either core communication or any discussion, any Q&A. Like we at the moment are working together with one of the world’s largest football clubs, where effectively we started conversational, which very quickly got very positive reception on all the different touch points where a conversational AI or Sodaclick solution can be integrated—from entering the venue to integrating in either restaurants or museums, trying to be very sensitive to the name of the place. There are multiple adjacents in vertical industry opportunities where conversational should be and could be; they can do a lot more natural level.

Salwa Al-Tahan: Absolutely. It’s all about engaging users and creating really positive interactions—memorable interactions actually as well. And I think we’re in an age where everyone has such high expectations. They want hyper-personalization, they want interactive experiences. And it’s almost a case of businesses trying to think, “How can I keep up? What innovation can I bring in?” And conversational voice AI is something that is not just a trend; it actually has a use and benefits as well. But it is part of the trend. It is quite hot at the moment. So, yeah.

Christina Cardoza: Yeah, absolutely. And that was going to be my next question. Because I know in the past we worked together—insight.tech and Sodaclick—and we’ve done an article about conversational AI. But it was in a healthcare space: being able to collect information and do things that maybe a receptionist would have done at a patient level so that the doctor could get the information faster—the patient doesn’t have to wait online, anything like that. So I was curious, from your perspective, Salwa, what other opportunities or use cases outside of the QSR, or what other industries do you see voice AI coming to?

Salwa Al-Tahan: Absolutely. So, other than healthcare, I think definitely wayfinding kiosks—airport concierge, for instance. The benefits are that you can have the conversation AI assistant on a kiosk 24 hours a day; you don’t need to have a member of staff manning that. A customer can come in, or a user can come in, and interact with it.

Even if you think about government buildings—anywhere where there’s a check-in, just like Stevan was saying, anywhere where you might need to ask a question, or get information from—at stadiums, where simple things like reducing queue times by having these interactive touch points where a customer can come in, scan their ticket, ask it where they can get some food from, or where directions to where their seat is, or—all of that information. In an airport, asking where the bathrooms are. Or, again, where they can get a coffee from, or they forgot their headphones—where can they buy some headphones from in a busy airport. This is really useful.

And I think there could be some even more exciting opportunities outside of these ones which we haven’t explored yet. It may be in FinTech as well. And I think it’s just a case of reaching out to and sort of seeing wherever there is a need for these personalized interactions for customers to use these too. And also, part of it is providing more of an inclusive world. Again, I go back and say this, but I think it is partly providing a more inclusive world with voice AI as an additional option to touch. So there’s plenty of opportunities to integrate very seamlessly and very—it all needs to be done very frictionlessly.

Stevan Dragas: So, Sal, I’m sure you will agree, because of the previous discussions. It’s interesting to see how some technologies, even looking outside, how long does it take for certain products, technologies, experiences to actually penetrate? We have a number of examples of, let’s say, certain technologies taking X number of years to reach, let’s say, 10 million subscribers. But then as we are going more and more with something which is more, as Salwa mentioned, inclusive, is more natural, how that timeline actually shortens.

And I think with the conversational AI, almost like at some interception point, where effectively we need to drive people to see and experience it. The moment you learn it—if I just look at I still am having difficulties with teaching my mom how to open WhatsApp on the tablet. But at the same time, when my youngest daughter was born, she was not even a year, she already took the phone and she already knew how to move and how to touch and how—to the level that, effectively, once we get exposed to certain usage model experience or technology, then it almost becomes natural.

For my daughter, the interaction, the touch screen, is the starting point for her. While for my mom, it’s still like some alien technology. So the moment you experience something, you kind of demand that from other usage models. Think about, where else do you stand? You stand in front of every hotel when you go to check in, stand in the line and wait for—all of that could be actually done through the simple kiosk where, effectively, “Hey, this is me.” Passport check-ins at the—you can see at the airports. There is a lot more now of those self-check-in lanes, where effectively you don’t need to queue; you can just go through.

So if we start from QSR, moving to retail, expand to hospitality, healthcare—ultimately any vertical industry where there is any need for either conversation or information-sharing. Sal, you mentioned way-finding. Way-finding was great as an innovative kind of usage model. However, if you suddenly need to figure out and touch, and the accuracy of the touch, it can sometimes—if you need to stand in the queue and try to—and you need to know what are you looking for, it takes so long to type in. Rather than just say, “Hey, where can I find a coffee place? Where can I find. . . .”

So suddenly we are not transforming the technology; we’re just bringing a new usage model to the existing technology. And that is actually—which can actually make those products, those usage models, and those vertical industries adopt certain technologies much faster. And I think we are really at the kind of crossroads of these technologies, that once people get exposed to, but ultimately once people get exposed with certain usage models across certain points, they will expect the same, similar, or even better experience across other adjacent industries. And I think it’s just the beginning of the AI, and we are going to certainly see a big boom in these usage models and experiences.

Christina Cardoza: Yeah, that pain point with the parents being able to use technology—that is something that resonates deeply with me. But to your point, the touchscreen and all these devices and these applications, that is something that maybe my generation grew up with, but not my parents’ generation. Conversation, voice, talking—that is something that we all have been doing since we were born, since we can walk; it’s very natural to us. So being able to implement these across these different industries—they’re high technological advancements and innovations, but it’s a much better user experience, and much more accessible to people than a touchscreen or a kiosk. So I think it’s great, and I can’t wait to see what else comes out of all of this.

I know we are running a little bit out of time, so before we go I just want to throw it back to you. Any final thoughts or key takeaways you wanted to leave us with today? Salwa, I’ll start with you.

Salwa Al-Tahan: I think actually, just picking up on what both you and Stevan were saying, I think we’re definitely in the golden age of AI and technology. And it’s not something that we’re talking about anymore that’s in the near-distant future; it’s here, it’s now. It’s deployable, and it’s very natural, because again, like you say, we’ve all been conversing since we were babies. And with the advent of phones and everyone using Alexa and Siri in our homes and on our phones, it’s just the natural progression.

And because of the benefits that it has across industries, not just in the QSR, it’s just something that we will be seeing more of. And it almost, again, like Stevan was saying, it’s almost a case of when one brand leads with it, the others will follow, because they will all see how much it is improving their business, improving their customer experiences. It’s bringing a higher ROI to them. So it’s very much here and now. And it’s very exciting, actually, to be a part of this. So, yeah, there’s definitely a lot coming.

And, again, I just wanted to—for anyone who does have this misconception that voice-AI systems are going to take away jobs and things like that—I just really want to, again, reassure them that it’s not about taking away jobs, but rather augmenting and helping both the businesses and the customers by streamlining the operations to meet those customer expectations of faster, intuitive experiences. And we can do that with conversation and just by repurposing members of staff. So it is never about taking away a human person’s role, but rather giving them purpose somewhere else.

Stevan Dragas: Yeah. And to that point, what I would like people to remember is not to do technology for the sake of technology but because of what it can bring, what it can enable, what it can drive. In Intel, there is an already long saying: “It’s not what we make, it’s what we enable.” And this is one thing that is becoming prevalent and very important going forward. Demand more. Technology is there. Innovation is unstoppable.

And I think from the conversational AI where we started, where we are going now—it’s just beginning, it’s just tip of the iceberg. There is so much more that if you connect the conversational AI—but ultimately if you connect it on the basic principles of what Intel is doing, which is security on every product, connectivity, manageability, so as long as all of that infrastructure, those applications are safe, are manageable, are connected, are something that is also driving sustainability—ultimately all of these connections and all of these technology points that people integrate, collaborate, talk to, integrate—ultimately all of this can be actually driven in a lot more sustainable way across many vertical industries. And this is just the beginning for Sodaclick, in my personal view.

Salwa Al-Tahan: Absolutely. I mean, all of these core values resonate with Sodaclick’s values as well. And we can pass those benefits on to the customer as well. So, like you say, it’s just the beginning, but it’s definitely very exciting.

Christina Cardoza: Absolutely. I can’t wait to see what else Sodaclick does with Intel. So I just want to thank you both again for joining the conversation and for the insightful thoughts. And I invite our listeners to visit Sodaclick, visit Intel; see what they can do for you and how they can help you guys enhance your businesses. So, thank you guys again. Until next time, this has been the insight.tech Talk.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Healthcare AI Solutions Ease Nursing Duties

I recently visited a healthcare facility for a routine medical procedure and might have inadvertently overtasked the attending nurse: Because I was freezing cold, I ended up asking for a warm blanket three separate times. I am not the only one who finds that bedside call button all too tempting.

While nurses’ jobs are to provide quality patient care, more often than not they are the first line of defense for all patient needs—whether that’s extra blankets, a pillow, or a glass of water. Such requests are not the best use of nurses’ time. “If you use a nursing call bell, the nurse needs to walk to your bed and then deal with your request, return to their station, and then coordinate with the nutrition or maintenance department,” says Paulo Pinheiro, CEO and co-founder of HOOBOX Robotics, a developer of medical optimization solutions.

Given the high rates of burnout and staff shortages among nurses, medical facilities are doing their best to optimize nursing resources. HOOBOX Neonpass, a smartphone app-based solution, addresses these inefficiencies. Looking for ways to use AI in healthcare, HOOBOX developed Neonpass to meet patients’ needs without overburdening nurses. The application recognizes and routes requests to the right departments in the medical facility, bypassing the nursing station when necessary. The app-driven demand and delivery method has earned Neonpass the moniker “DoorDash for Hospitals.”

AI in Healthcare Optimizes Workloads

Neonpass not only enables patient communication with professional staff but also routes messages between nurses and other connected departments in a digital format. Where rigid protocols once required nurses to call other departments in the hospital, medical facilities can now rely on the digital platform to send messages. For example, instead of calling a diet change into nutrition—with the potential for miscommunication—nurses can input the change directly into the Neonpass solution. “With Neonpass you digitalize all the information and nutrition receives an alert; it’s much more efficient and less error-prone than a phone call,” Pinheiro says.
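
A minimal sketch of that routing idea: classify a patient’s free-text request and dispatch it to the right department, falling back to nursing when the request is unclear. The keywords and department names are invented for illustration; Neonpass’s actual request understanding is AI-driven and far more capable.

```python
# Toy request router: keyword matching stands in for Neonpass's AI models.
ROUTES = {
    "nutrition":   ["water", "juice", "meal", "diet", "snack"],
    "maintenance": ["blanket", "pillow", "temperature", "remote"],
    "nursing":     ["pain", "medication", "dizzy", "bleeding"],
}

def route_request(message: str) -> str:
    text = message.lower()
    for department, keywords in ROUTES.items():
        if any(word in text for word in keywords):
            return department
    return "nursing"   # default to a human when the request is unclear

print(route_request("Could I get a warm blanket, please?"))   # -> maintenance
print(route_request("My pain is getting worse"))              # -> nursing
```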

Neonpass includes three AI modules. The first detects anomalous patient behavior, on the assumption that messages from patients can serve as a window into underlying medical needs. Frequent requests for water, for example, might indicate a physical problem, so Neonpass can alert nurses to check in on the patient earlier than planned for effective intervention.

“AI will analyze the last medication taken, procedures, and exams, and will give a risk score so nurses can gauge severity and prioritize visits,” Pinheiro says. The AI is sophisticated enough to understand that different medications or procedures can trigger events that might otherwise be characterized as abnormal, and it factors these parameters into its risk score.
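
As a rough illustration of how such a score could weigh behavior against context, the sketch below raises the risk when requests exceed what the patient’s medication makes expected. All weights and factors are invented for illustration; the production model analyzes far more signals.

```python
# Toy risk score: the same behavior scores differently in different contexts.
def risk_score(water_requests_last_hour, on_diuretic, hours_since_nurse_visit):
    expected = 2 if on_diuretic else 0        # some thirst is expected here
    score = max(water_requests_last_hour - expected, 0) * 0.3
    score += min(hours_since_nurse_visit, 8) * 0.1
    return round(min(score, 1.0), 2)

# Four water requests in an hour: routine on a diuretic, a flag otherwise.
print(risk_score(4, on_diuretic=True, hours_since_nurse_visit=1.0))   # 0.7
print(risk_score(4, on_diuretic=False, hours_since_nurse_visit=1.0))  # 1.0
```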

Another AI module evaluates the patient’s use of the chatbot embedded in Neonpass for mental health challenges. The module can detect if the user is feeling lonely or suicidal and alert staff accordingly.

The final module delivers generative AI powered by large language models trained on data from the individual hospitals. Using Neonpass, professionals can verify safety and fall prevention protocols, for example. The solution complements existing training programs for medical professionals, who can study for certification courses using Neonpass.

AI-driven optimization also delivers business insights through a common platform so management can use information to optimize staffing depending on cyclical demand and even route nurses to floors where they might be needed more.

The final module delivers #GenerativeAI powered by large language models trained on data from the individual hospitals. HOOBOX Robotics via @insightdottech

Customized AI Models Lead to Remarkable Results

The HOOBOX team is well aware of stringent regulations regarding the safeguarding of sensitive patient health information (PHI). Neonpass complies with American HIPAA and international protocols. In addition to encrypting data using Intel hardware, HOOBOX delivers extensive employee training “to transform everyone into a human firewall,” Pinheiro says.

Every hospital in Brazil where Neonpass is in use has registered impressive returns on investment. For example, Albert Einstein Israelite Hospital in São Paulo reduced nursing requests by 54% and saved 100 hours per month for every 10 beds. And Santa Paula Hospital in São Paulo saves an astounding 75% of nursing time using Neonpass.

HOOBOX tailors AI models for each hospital. While doing so in Brazil, the engineers ran into an interesting problem: Because different regions of the country have different dialects and slang, the models needed training on all of them to ensure that the AI solution understands patients from all backgrounds. The Intel® OpenVINO toolkit helps cut down the inferencing time of such weighty models. The solution runs on Intel® Xeon® processors with integrated accelerators, which help process and deliver insights rapidly, Pinheiro says.

The company works with medical facilities to customize and deploy Neonpass for their specific use cases—from figuring out the departments that will participate in the solution, to installing QR code plates at bedsides, to training hospital-specific AI models. Most hospitals start with the nursing, nutrition, and maintenance departments before expanding the solution to include other verticals.

The Future of Healthcare AI

Using Neonpass helps patients quickly access information about procedures, exams, and tests so they can be more involved in their own treatment. “We think this is the future; delivering the most relevant patient information at the right time is otherwise a big challenge for patients,” Pinheiro says.

He also expects Neonpass to evolve to provide continuity of care beyond the medical facility. Follow-up calls to patients decrease readmission rates, but such measures are not very scalable, Pinheiro points out. While care will still be delivered through the app, moving to a wearable device is also a possibility. By offering its API to other communication platforms, Neonpass can find new avenues to prioritize patient care while decreasing burdens on medical professionals.

Neonpass expects to grow its reach beyond Brazil and expand into the North American market as well. So maybe the next time I need a warm blanket at a hospital, I will no longer need to bother the attending nurse but use the Neonpass app instead.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.