Smart Health Solutions: A New Model for Eldercare

An aging global population requires a new way of thinking about healthcare. Eldercare medicine urgently needs to shift away from simply treating illness and toward improving seniors’ overall health, well-being, and quality of life.

One effective way to do this is through a daily exercise regimen designed to increase muscle strength and prevent functional degeneration due to muscle loss. But such programs are difficult to implement. They require a high degree of personalization and supervision to ensure safety and progress.

Too often, self-guided exercise routines are ineffective, either because they aren’t tailored to the individual or because the exercises are performed incorrectly. “We’ve heard from many physicians who are frustrated by this,” says Patricia Lin, Project Manager for Netown, a manufacturer of AI-enabled smart health solutions. “A doctor will give an elderly patient an exercise program to do on their own, but when they return for their follow-up appointment, they’ve actually gotten weaker.”

Healthcare providers find themselves in a difficult situation, since in an era of medical staffing shortages, it’s just not feasible to personally supervise their patients’ workouts. But recent advancements in IoT and edge AI have given rise to smart exercise solutions that address these challenges.


IoT and Edge AI Enable Smart Healthcare Solutions

Smart health solutions help seniors exercise effectively without the involvement of a doctor, nurse, or personal trainer.

At first glance, these solutions may resemble ordinary exercise machines. For example, Netown’s Babybot Smart Exercise Series, a health kiosk and data control hub that enables networked exercise devices, comprises eight pieces of weight training equipment that target different muscle groups. They’re similar to what one would find in any commercial gym, but engineered for lighter loads and gentler exercises.

And with IoT in healthcare, edge AI, and an interactive display as the user interface, smart exercise solutions are a game changer for personalized health and senior healthcare.

The system is designed to provide a personalized experience and facilitate individual data tracking. Users log in using a unique ID such as an RFID card or QR code, and are given a simple-to-follow video tutorial that shows them how to exercise. They receive feedback and encouragement via the interactive interface.

IoT sensors measure the amount of force produced by the user, as well as the speed of their motions. Edge AI uses this data to calculate the person’s muscle strength. During the first session, this helps set a baseline for a personalized training plan. In subsequent workouts, the system automatically adjusts the difficulty of the exercises in real time to build strength. The AI can also determine if someone is performing a movement improperly or unsafely, offering corrections and suggestions as needed (Video 1).

Video 1. Smart health solutions combine IoT, edge AI, and an intuitive UI for efficacy and ease of use. (Source: Netown)
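Netown hasn’t published its adjustment logic, but the feedback loop described above can be sketched in a few lines. Everything here (the power-based strength estimate, the thresholds, and the step sizes) is an illustrative assumption, not Netown’s actual algorithm:

```python
# Hypothetical sketch of real-time difficulty adjustment for a smart
# exercise machine. The names, thresholds, and formula are invented.

def estimate_strength(force_newtons, speed_m_per_s):
    """Approximate muscular output as mechanical power (watts)."""
    return force_newtons * speed_m_per_s

def adjust_resistance(current_kg, session_power, baseline_power,
                      step_kg=0.5, min_kg=1.0, max_kg=20.0):
    """Nudge the load up or down based on performance vs. baseline."""
    ratio = session_power / baseline_power
    if ratio > 1.10:          # comfortably above baseline: progress
        current_kg += step_kg
    elif ratio < 0.85:        # struggling: back off for safety
        current_kg -= step_kg
    return max(min_kg, min(max_kg, current_kg))

# The first session establishes the baseline; later sessions adapt to it.
baseline = estimate_strength(120.0, 0.40)   # 48.0 W
todays   = estimate_strength(140.0, 0.42)   # 58.8 W -> stronger today
new_load = adjust_resistance(5.0, todays, baseline)
print(new_load)  # 5.5 -> resistance increased by one step
```

A real system would also weigh safety signals, such as form errors detected by the AI, before increasing the load.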

Despite their familiar appearance, these are complex, technologically advanced solutions. For this reason, Netown’s technology partnership with Intel has been particularly valuable. “Intel processors are extremely well suited to edge AI and real-time processing workloads,” says Lin. “The Intel® OpenVINO Toolkit helps as well, since the AI model was already there to leverage. We only needed to train and fine-tune it, which significantly shortened our development time.”

Improved Strength, Greater Independence

For senior citizens, smart exercise equipment is an effective means of maintaining health and mobility—or even regaining lost independence. Netown’s experience with a community fitness center in Taiwan offers an example of how this works in practice.

In this case, an 85-year-old man suffering from a profound loss of strength and mobility was referred to the fitness center.

Ordinarily, such a patient would be a candidate for closely supervised physical therapy. But Taiwan’s overstretched healthcare system was part of the challenge in this case, since medical personnel at public hospitals and physical therapy centers are chronically overwhelmed.

Because Netown had already deployed its Babybot solution at the fitness center, the patient was able to follow a guided exercise program right in his own community. He began using the smart exercise equipment regularly, following instructions provided by the interactive UI. He found encouragement in the gamified incentive system: a ranking chart that let him measure his progress against other Babybot users, adding a bit of friendly competition to the experience.

The results were remarkable. After only three months, the man’s lower limb strength increased by 76.5%. But those numbers, while impressive, can’t fully convey the human element of the story. The simple, day-to-day outcomes offered by smart health solutions are often the most meaningful, says Lin. “When elders are able to manage daily life by themselves, that’s real independence. It means everything to seniors and to their families. I’ll never forget the grateful look on the face of his grandchild when they were able to take a walk in the park together.”

A Healthier Future for Everyone

In the coming decades, smart health solutions will become increasingly relevant. Near term, the technology helps bolster health, independence, and quality of life for the world’s growing senior citizen population.

And beyond eldercare, there are promising use cases in other areas of medicine as well. “These solutions are going to be especially useful for long-term treatments like rehabilitation,” says Lin. “Physicians will one day be able to help patients do physical therapy in their own homes.”

The social value of smart health solutions lies in how they will help our world move toward a more expansive vision of health and well-being. In the future, getting stronger and preventing illness will take precedence over the mere detection and treatment of sickness—and people of all ages will have greater independence and control over their healthcare.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

3D Holograms Redefine the Immersive Retail Experience

Immersive retail is what physical retail stores need as they compete with the convenience of online shopping. In 2020 alone, the United States saw a headline-making 43% increase in e-commerce sales and has continued to see a gradual rise since.

“Recognizing the competition, in-store retail is evolving to differentiate itself from its online counterpart,” says Matt Tyler, Vice President of Strategic Innovation at Wachter, a technology integration and services company. “Retailers are looking to create a destination that lures customers back in.” And holographic technology is advancing immersive retail experiences to do just that.

Wachter is leading the way with its Proto Integrated by Wachter solution, an innovative display system for creating, managing, and distributing holographic content. It enables brands to beam their live and previously recorded content from a multimedia studio or mobile app.

From there, the Proto cloud beams the content to device locations, even thousands of miles away. End users at those locations can interact live with people via 3D holographic projections on the displays, enabling unique, impactful guest experiences.

The Wachter solution, Proto Epic, has three components: a standalone seven-foot unit with a transparent LCD monitor and a light box, a studio kit to create content, and a Live Beam kit with a 4K camera to beam content into any Proto device anywhere in the world. “The experience can be made even more immersive by integrating lighting and audio systems,” Tyler says. “It’s really meant to draw in all the senses.”

From shoppers looking for personalized immersive retail experiences to celebrities interacting with fans, Proto’s holographic display and capabilities allow for two-way interaction in real time.

Advancing Retail Insights Via Augmented Reality

Experiences that wow are part of the appeal of holographic teleportation, but the technology is about more than just theater. Using Proto, retailers can gather valuable intelligence that can increase revenue. An Intel® RealSense camera embedded in each unit can anonymously track shopper traffic and behavior. The Wachter integration crunches these records to “extract valuable data that marketers are looking for,” Tyler says.

The Proto unit can also dynamically change its content depending on the audience. A loyalty card swipe can yield even more tailored material by connecting, for example, online shopping behavior with in-store beamed content.

Wachter deploys Proto to scale business and provide end-to-end solution integration for the customer. A team of certified experts designs, architects, procures, installs, and maintains power and cabling across venues. The integrator also helps customers manage their content and drive shopper analytics—analyzing metadata from video feeds—and pipes this information into a dashboard for easy visualization. Proto uses AI to sift through the video analytics data and customize streamed content in real time.

Intel® technology drives data processing on the Epic unit as well as in-studio production of content. “There’s an Intel high-performance processor that is capturing all the video and syncing that back with the units themselves out in the field,” Tyler says. “The cloud, used for storing and redirecting content, also uses Intel architecture. Intel is the common glue between all the different pieces of the solution.”

Holographic Technology Offers Endless Use Cases

Proto’s ability to beam 3D projections of sought-after experts or celebrities into stores is a draw for an adventure retailer with close to 60 locations. The retailer’s biggest points of sale are in Manhattan, where it plans to use Proto to beam in outdoor guides from more rural areas. A one-on-one consultation with a guide gives new meaning to the term “virtual shopping,” according to Tyler. For instance, customers get to see what kinds of flies fish would bite on, or what clothing to wear for the weekend’s outdoor conditions; consultations like these could potentially increase the average spend per customer.


A fashion retailer plans to use Proto’s Epic alongside shelves of apparel. The apparel is to be part of a fashion show streamed through the unit. For each piece that the model wears, the corresponding item on the shelf will be highlighted with special lighting. “It’s a fun way to take the guesswork out of the buying process and allows people to make a decision much faster,” Tyler says.

Tyler expects different types of hologram technology to gain traction in other sectors as well. A potential implementation is to help government officials meet with global representatives while keeping travel to a minimum. “We see the collaboration, the communication, the energy savings, all coming together,” he says.

In another use case, a museum is exploring Proto as a way of demonstrating artists’ work. In higher education, a professor in Massachusetts can give a lecture beamed to students on different campuses around the world. Because the hologram technology is bidirectional, students can participate in turn.

“The capabilities to create that three-dimensional experience and facilitate a conversation at the same time, the way you ingest content is what is transformative,” Tyler says. “I don’t think any other technology can deliver quite that capacity right now.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on November 3, 2022.

Endless Aisle Transforms In-Store Retail Digitization

When it comes to buying clothing, there’s no question that brick-and-mortar stores provide the superior experience. You can feel the fabric, discover new styles, and ensure the right fit before you walk out the door with your new outfit. But what physical retailers can’t do is provide unlimited options and relevant recommendations the way online stores can; they’re limited by square footage, inventory management, and staffing constraints. Store digitalization technology can bridge the gap, helping these retailers become bigger players in an omnichannel marketplace.

Physical stores have an advantage, says Oskar Jakobsson, Chief Product Officer for Ombori, a smart-store solution provider. “They usually have a longer legacy while eCommerce are newcomers,” he says. “We’re seeing omnichannel retail pushing the boundaries and stepping more into the physical space, but old-school stores have physical and brand presence and High Street access. They can offer more value if they do it right.”

One way to do it right is by installing interactive retail kiosks equipped with endless aisle technology, such as Ombori Grid. The solution enhances the in-store experience and drives greater conversions by satisfying shopper demand for variety. And it fulfills the customer expectations of the always-connected generation that considers retail a multichannel experience.

“If you want to be relevant, you need to cater to customers across all channels, including social platforms, online, and the physical space,” says Jakobsson, who headed up the innovation department for global clothing retailer H&M prior to joining Ombori. “Your physical location cannot be unconnected, which has been the case in the past. It must be part of the omni journey.”


“Endless Aisle” Technology Enhances the In-Store Experience

Ombori teamed up with Microsoft to modernize retail businesses by extending the digital experience—where most modern customer journeys begin—to the store itself. The retail kiosk solution consists of a touchscreen mounted on a stand. Barcode scanners and RFID readers allow customers to pull up product information. The Ombori Grid platform is loaded with the “endless aisle” software that populates the inventory. And the solution is powered by the Intel® NUC mini-PC, which runs all the IoT edge components with frictionless speed (Video 1).

Video 1. Ombori Grid bridges the gap between online and physical retail by using “endless aisle” technology to create an omnichannel experience for customers. (Source: Ombori)

The Ombori Grid solution also incorporates a customer’s mobile device. “It’s very much an interaction between the screen and your mobile phone,” says Jakobsson. “You can start at the kiosk with a larger screen that can be more engaging. You can scan a QR code and get the basket you have built on the endless aisle. You could then do the payment or size and measurement part on your mobile phone, which is more private.”
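The handoff Jakobsson describes can be modeled simply: the kiosk stores the basket under a short-lived token, encodes that token in a QR code, and the phone redeems it to continue checkout privately. The Ombori Grid API itself is not shown here; this is a generic, invented illustration of the pattern:

```python
# Invented sketch of a kiosk-to-phone basket handoff via QR code.
# In production the session store would be a shared backend, not a dict.

import secrets

_sessions = {}  # token -> basket

def create_qr_payload(basket):
    """Kiosk side: persist the basket, return a token to encode as a QR."""
    token = secrets.token_urlsafe(8)
    _sessions[token] = list(basket)
    return token

def redeem_qr_payload(token):
    """Phone side: claim the basket once; the token is then invalidated."""
    return _sessions.pop(token, None)

token = create_qr_payload(["shorts-M-blue", "tshirt-L-white"])
print(redeem_qr_payload(token))   # ['shorts-M-blue', 'tshirt-L-white']
print(redeem_qr_payload(token))   # None -- single-use token
```

Making the token single-use keeps a scanned QR code from being replayed by a second device.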

In addition, Ombori has partnered with Pathr.ai, leveraging the retailers’ current cameras to collect spatial intelligence insights. The integration helps stores improve their retail strategies by analyzing the effectiveness of the Ombori Grid platform and determining where customers go after using the kiosks.

The Benefits of Endless Aisles for Omnichannel Retail

Ombori Grid helps increase conversions through a more optimized order fulfillment process, starting with a greater assortment of goods and ending with a seamless journey that can begin online and wrap up in the store. One example is a large Asian retailer that recently installed the solution. Customers who come into the store can complete their transaction even if the item they want isn’t available, and have it shipped to their home. The retailer can also grow the sale by highlighting products related to items in the customer’s basket. And RFID integration detects the products customers are holding when they walk up to the kiosk.

If a customer is holding a pair of shorts, for example, the solution will show them t-shirts, sneakers, or other summer gear, or it will display the shorts in other colors or styles. “You fulfill the complete customer need, not just the product they picked, which can increase conversion,” says Jakobsson. “The store gets to sell more, and the customer gets recommendations to things that are relevant to them. Endless aisle is a knowledgeable, value-adding salesperson. It can answer questions and be relevant and helpful from a customer point of view.”
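That recommendation flow boils down to a lookup keyed on the RFID-detected item. This sketch uses an invented catalog and a deliberately simple same-category-plus-variants rule; a production system would draw on much richer signals:

```python
# Toy endless-aisle recommender keyed on the item a shopper is holding.
# Catalog contents and the ranking rule are illustrative assumptions.

CATALOG = {
    "shorts-001":  {"category": "summer", "variants": ["shorts-002", "shorts-003"]},
    "tshirt-010":  {"category": "summer", "variants": ["tshirt-011"]},
    "sneaker-020": {"category": "summer", "variants": []},
    "parka-030":   {"category": "winter", "variants": []},
}

def recommend(held_item_id, catalog=CATALOG, limit=3):
    """Return same-category items plus color/style variants of the held item."""
    item = catalog.get(held_item_id)
    if item is None:
        return []
    related = [pid for pid, info in catalog.items()
               if info["category"] == item["category"] and pid != held_item_id]
    return (related + item["variants"])[:limit]

# Shopper walks up holding a pair of shorts: suggest summer gear + variants.
print(recommend("shorts-001"))  # ['tshirt-010', 'sneaker-020', 'shorts-002']
```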

The Future of Endless Aisles and Retail Digitization

Endless Aisle technology and solutions like Ombori Grid help retail stores keep up with the changing retail landscape, such as the trend toward small-format stores. Companies save on rent costs while building relationships with their communities.

“Post-COVID, we see that customers want to get back to the stores, but they have more experience with online retail and expect the store to be something new,” says Jakobsson. “It needs to be more local, tailored, and relevant. Friction-free, building on the company promise, and driving omni sales and experience.”

Technology is merging online and in-store shopping to create a cohesive experience. Are you ready to step into the future of retail where digital and physical become one?

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on November 2, 2022.

Cutting Costs and Emissions with Smart Energy Services

Climatologists fear recent wildfires and heat waves across northern latitudes are an indication of what’s to come without rapid and forceful response to climate change. But as we’ve learned in the past few decades, slowing global warming requires much more than just “going green.” It means incentivizing energy producers and consumers where they’ll feel it today: their wallets.

For instance, building HVAC systems and fans account for 10 percent of global electricity consumption. So cost-sensitive energy stakeholders may want to start looking for opportunities in areas with extreme climates.

How SES Saved 20 Percent and 15 Million Miles of Emissions

Building operations teams must find ways to lower high energy bills while helping management operate and maintain the facility in a more sustainable manner. To do so, many turn to low-Capex programs such as the Smart Energy Service (SES) from DOTS Tech Systems, an IoT services provider specializing in development of connected applications.

Traditionally, existing facility systems and equipment operate manually, irrespective of demand. Through SES, the DOTS team conducts a detailed survey to identify the most technically feasible and economically viable solutions. The prime objectives are to optimize energy consumption in a way that does not disturb operations, improves indoor environmental quality, and enhances occupant comfort. DOTS uses the latest IoT sensors, gateways, and an enterprise-grade, cloud-hosted Smart Energy Services platform delivered in a SaaS model.

This IoT-driven solution is offered as a subscription service, which benefits many customers where sustainability is top of mind. The DOTS team has already delivered and implemented SES in prestigious flagship projects for organizations such as DP World, the Department of Public Works, Empost, and HCT, among many others across the Middle East.


In one customer example, DOTS integrated the building systems using optimal last-mile connectivity protocols and got SES up and running at the facility, where it measured the building’s energy profile in real time, logged activities, and gave service subscribers access to energy, maintenance, and environmental metrics. This information was then displayed in an intuitive dashboard that allowed DOTS and client-side stakeholders to examine real-time and historical performance trends as they diagnosed the cause of the inefficiency (Figure 1).

DOTS Smart Energy Services dashboard showcasing a company’s clean energy overview.
Figure 1. DOTS Smart Energy Services (SES) provides an intuitive user dashboard so stakeholders can review real-time energy use data and historical trends as they work to minimize cost and carbon emissions. (Source: DOTS Tech Systems)

In that example, the culprit was the facility’s HVAC plant, a large evaporative cooling system that’s common in buildings of more than 100,000 square feet but can be both energy-guzzling and wasteful if not managed properly. After pinpointing the cause, the DOTS team—in collaboration with the building operations team—used the SES analysis to implement various energy conservation measures (ECMs), including:

  • Implementation of SES auto alerts on performance deviation
  • Installation of Chiller Plant Manager for demand-based auto operations
  • Replacement of 3-way to 2-way CHW Valves
  • Installation of VFDs on chilled water pumps with index point DP sensor
  • Installation of motion/occupancy-based sensors for lighting
  • Installation of CO sensors for car park ventilation fan operation
  • Shifting from manual to completely automated operations with appropriate setpoints
  • Operating schedule and night setback modes for various equipment/systems
  • Peak Energy Demand Response and operational optimization driven by smart algorithms
  • Continuous commissioning on major equipment and systems
  • Energy monitoring and targeting
  • Training and awareness sessions, and CSR alignment sessions

In the reporting period after these changes were implemented, the facility achieved annual savings of 8,415 megawatt-hours (MWh) of energy and 5,470 tons of CO2 emissions. That represented a 20 percent cost savings for the building, and the removal of greenhouse gas emissions equivalent to driving the average passenger car 14,549,329 miles.
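Those figures are internally consistent: dividing the reported CO2 reduction by the mileage equivalent implies a per-mile emission factor of roughly 0.38 kg CO2, in line with typical passenger-car values:

```python
# Sanity check on the figures above: the implied per-mile emission factor
# should land near typical passenger-car values (~0.35-0.40 kg CO2/mile).
co2_reduction_kg = 5_470 * 1_000        # 5,470 metric tons -> kg
miles_equivalent = 14_549_329
kg_per_mile = co2_reduction_kg / miles_equivalent
print(round(kg_per_mile, 3))  # 0.376 kg CO2 per mile
```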

Cutting Costs & Carbon Emissions in the Real World

The cost and carbon emissions savings enabled by SES are impressive, but equally as impressive is the support to enhance equipment uptime by data-driven condition maintenance management.

SES software runs in the cloud as well as on Intel® Next Unit of Computing (NUC) 10, 11, or 12 mini-PCs at the edge. DOTS deploys a multiprotocol edge firmware on the NUCs, which feature Intel® Core processors and provide ample performance to communicate with equipment sensors and run edge analytics. Execution of control logic happens automatically when there is a deviation in the sequence of operations. Advanced fault detection and diagnosis is also conducted at the NUC edge and processed data is transmitted to the cloud for reporting purposes.
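DOTS doesn’t detail its control logic, but the deviation-triggered behavior described above can be illustrated with a toy rule: compare a reading against its setpoint band and issue a corrective command locally, forwarding only a summary to the cloud. The setpoints and actions below are invented:

```python
# Invented illustration of edge-side deviation detection: act only when a
# reading leaves the allowed band around its setpoint.

def check_deviation(reading, setpoint, tolerance):
    """Return a corrective command if the reading leaves the allowed band."""
    error = reading - setpoint
    if abs(error) <= tolerance:
        return None                       # in band: no action needed
    return {"action": "decrease" if error > 0 else "increase",
            "magnitude": round(abs(error) - tolerance, 2)}

# Example: chilled-water supply temperature (hypothetical units/setpoints).
print(check_deviation(8.4, setpoint=7.0, tolerance=1.0))
# {'action': 'decrease', 'magnitude': 0.4}
print(check_deviation(7.5, setpoint=7.0, tolerance=1.0))  # None
```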

What’s more, SES’s smart, lean design makes it a fraction of the cost of competing smart energy systems.

“The firmware we use at the edge is a kind of multiprotocol data exchange system as we work agnostic to any makes of systems we may find at the building’s end,” explains Dheeraj Singh, CEO of DOTS Tech Systems. “We deploy or download that into the NUC, which has the ability to deal with different kinds of drivers and required processing computing powers.”

“By being a multiprotocol data exchange layer at the edge, it does data normalization,” Singh continues. “It’s either a soft integration directly using the protocol, or there might be a range of sensors data and they in turn communicate with the NUC. Then the NUC communicates with the cloud, where we have an advanced analytics engine to process this data.”

DOTS has harvested more than 30 million data points that inform the regression algorithms SES uses to reach energy optimization outcomes. Control decisions based on this analysis are passed back to NUCs at the edge, which relay those commands, along with any others governed by rules housed at the edge, on to the endpoint. These control algorithms are customized to the building’s operational trends. They also take inputs from the building’s occupancy per area to achieve demand-driven ventilation optimization, enabling the building to perform efficiently during peak energy demand.
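DOTS’s actual models aren’t public, but the idea of a regression baseline is straightforward: fit expected consumption to a driver such as occupancy, then flag intervals where measured use runs well above the prediction. The training data and threshold below are fabricated for illustration:

```python
# Invented sketch of a regression energy baseline. Fit kWh vs. occupancy
# from history, then flag measurements far above the expected value.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Historical (occupancy, kWh) pairs -- fabricated training data.
occupancy = [10, 40, 80, 120, 160]
kwh       = [55, 118, 200, 282, 365]
a, b = fit_line(occupancy, kwh)

def flag_excess(occ, measured_kwh, margin=0.15):
    """True if measured use exceeds the baseline by more than `margin`."""
    expected = a * occ + b
    return measured_kwh > expected * (1 + margin)

print(flag_excess(100, 330))  # True  -- well above baseline: investigate
print(flag_excess(100, 245))  # False -- within the expected band
```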

Smarter Energy Savings: Not an Outlier

DOTS case studies demonstrate just how effective smart energy can be, both for the environment and the economy. And this result is no outlier; if anything, it understates what the program typically delivers.

“It’s a digital transformation program and has shown quite good results for clients,” says Singh. “If they’re paying X, post this program, it will be reduced by 25 to 35 percent based on how the building is managed at the start point. Yes, there is a cost to have this program, however there is a bigger cost in not having this program.”

“Those kinds of value propositions are directly impacting the bottom line as well as supporting us in doing our bit towards the environment,” he adds.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Video Analytics Offer Valuable Insights, Ensure Anonymity

Wherever you go these days, be it a shopping mall, subway station, or public square, cameras seem to be everywhere. But they’re no longer just for security. We’re living in a data-driven world, and thanks to AI video analytics, information collected from cameras gives organizations new ways to garner valuable customer insights—improving operations and experiences.

For instance, a transit system looking to ease congestion at rush hour may anonymously collect data on how many people stand on a platform or how crowded the subway cars get. The transit system can act on this information by adding cars and adjusting schedules, with a deterministic method of confirming that changes improve the quality of service for customers.
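As a toy version of that transit example: given anonymous per-minute head counts from a platform camera (the counts here are invented), a scheduler can locate the peak window where extra cars would help most:

```python
# Toy crowd-analytics step: find the busiest span in a series of anonymous
# per-minute platform head counts. All numbers are invented.

def peak_window(counts, window=3):
    """Return (start_index, average) of the busiest `window`-minute span."""
    best_start, best_avg = 0, 0.0
    for i in range(len(counts) - window + 1):
        avg = sum(counts[i:i + window]) / window
        if avg > best_avg:
            best_start, best_avg = i, avg
    return best_start, best_avg

per_minute = [12, 18, 25, 60, 85, 90, 70, 30, 15]
start, avg = peak_window(per_minute)
print(start, round(avg, 1))  # 4 81.7 -> minutes 4-6 are the crush period
```

No identities are involved: the input is just counts, which is what makes this kind of analysis privacy-preserving.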

Or on a factory floor, a manufacturer may collect statistics on the movements of assembly line workers to help keep them out of danger around robotic arms. It’s easy to understand when machines start and stop, but it’s much more difficult to accurately understand human physical behavior, especially as it relates to human-machine interaction. Very often, statistical data collected over time will reveal occurrences of events that human observation can’t catch. This is where computer vision and AI can come into play.

“The advancements in AI have unfolded a new chapter in video analytics. By employing sophisticated algorithms, it’s now possible to analyze movements and behaviors without intruding on personal privacy,” says Anisha Udayakumar, AI Evangelist at Intel. “The transition from traditional object detection models to more privacy-centric models highlights the progression towards ensuring anonymity while deriving valuable insights from video data.”

A solution that combines video analytics and faceless AI offers significant opportunities for organizations across sectors, including:

  • Operational optimization by streamlining processes and allocating resources
  • Safety enforcement through monitoring patterns in behavior to improve safety regulations
  • Enhanced customer experience by gaining behavioral insights while preserving personal privacy
  • Utilization of existing video infrastructure for cost-effective and accessible software deployment
  • Gaining a competitive edge by accessing real-time insights that can influence long-term business decisions

AI Video Analytics Offer Customer Insights

Knowing the value of video data, C2RO, an AI-SaaS video analytics provider, set out to create a new type of advanced video analytics platform: ENTERA.

ENTERA uses an organization’s existing security cameras and runs in a secure and private edge environment for video analytics. According to C2RO CEO Riccardo Badalone, the platform requires minimal, if any, hardware investment to deploy, and produces highly accurate and fully anonymized data.

“If you tell a customer that already made a huge global investment in video security systems that they have to replace all those cameras with another, more expensive type of camera, there’s no way they will adopt your technology,” Badalone says.

But what really sets ENTERA apart is its ability to comply with a common customer request: “Tell us where our visitors go. We don’t want to know who they are, but tell us where they’re spending their time,” Badalone explains. “All data generated by the system is explicitly not derived from visitors’ faces.”

This frees organizations to analyze customer insights on user behavior and traffic patterns while complying with the strictest privacy laws such as Europe’s General Data Protection Regulation (GDPR).

Instead of uniquely identifiable information, ENTERA delivers insights on demographics. For instance, let’s say a mall wants to better understand its foot traffic. AI models can determine the percentage of people simply walking through the property versus those who came in for a purchase, and how long customers tend to spend in specific areas.

“It highlights traffic patterns that are correlated to demographic groups, and this allows for A/B testing and performance tracking for any kind of marketing initiative,” Badalone explains.


Building an AI Video Analytics Platform from the Ground Up

Developing computer vision technology requires a thoughtful approach that demands a constant stream of creative and innovative solutions for the series of intricate technical challenges that arise.

Technology that rigorously refrains from collecting or processing personally identifiable information requires advanced algorithms and stringent data protection measures.

This means figuring out how to track objects and individuals in complex, real-world scenarios while intentionally forgoing conventional recognition methods. Doing so requires highly efficient, cutting-edge algorithms capable of processing analytics data in real time.

This is where the Intel® OpenVINO toolkit comes into play. In collaborations like the one with C2RO, OpenVINO demonstrates its capability to facilitate development of complex solutions that are easily scalable.

The toolkit is instrumental in transitioning from traditional video analytics to AI-enhanced solutions by optimizing models for diverse Intel hardware like CPUs, GPUs, and FPGAs. Its flexibility allows efficient real-time data processing in various environments with minimal hardware modifications.

OpenVINO’s support for direct inference from various source models streamlines integration of AI into traditional video analytics systems. And quantization is crucial for improving inference speed and reducing memory footprint. This optimization technique is essential for deploying high-performance AI models in real-world applications, like C2RO’s ENTERA platform.
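To see why quantization shrinks memory and speeds up inference, here is a standalone illustration of symmetric linear INT8 quantization in plain Python. OpenVINO’s tooling does this (and much more) on real models; this sketch only shows the core idea:

```python
# Symmetric linear INT8 quantization in miniature: FP32 weights become
# 8-bit ints plus one scale factor, cutting storage 4x, with error bounded
# by the size of one quantization step.

def quantize_int8(weights):
    """Map floats into [-127, 127] ints plus a scale for dequantization."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.52, -1.27, 0.03, 0.89]          # tiny made-up weight vector
q, s = quantize_int8(w)
restored = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, restored))
print(q)            # [52, -127, 3, 89]
print(max_err < s)  # True: error stays below one quantization step
```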

Customer Insights with Scalability and Reach

Looking forward, Badalone says C2RO wants to make the ENTERA platform fully software-defined, so customers can add capabilities on demand to further simplify adoption, and so data analysis teams can collect information on a rolling basis, where and when they need it.

Badalone envisions a future where AI training and system configuration are fully automated. “In the case of AI learning, we want that to be completely abstracted, with no human intervention,” he says.

Ultimately, the idea is to give a customer the flexibility to add and remove capabilities, and centrally move the platform from one site to another. And that, he says, will inspire organizations “to rely on the data more, because they are always going to get a massive return on investment.”

 

This article was edited by Leila Escandar, Editorial Strategist for insight.tech.

This article was originally published October 31, 2022.

Bringing Digital Twins to the Factory Floor

Imagine a crystal ball that could tell you whether a future project would be successful, or whether it would suffer fatal glitches along the way. Now imagine that crystal ball as more of a mirror image. Welcome to the world of digital twins.

For manufacturers, and the whole factory environment, this concept has exciting implications, and it’s been drawing a lot of interest recently. Martin Garner, COO and Head of IoT Research for CCS Insight; and Ricky Watts, Industrial Solutions Director at Intel, break it down for us. What exactly is a digital twin? What challenges might manufacturers face with this new technology? And what are the short- and long-term benefits? CCS Insight also shares its research on the topic in a white paper now available to insight.tech readers.

What exactly is a digital twin—especially in the context of manufacturing?

Martin Garner: I like the view from the Digital Twin Consortium—the industry body around these things—that a digital twin is a virtual model of machines, factory processes, and other things that exist in the real world. There needs to be some sort of synchronization between the real thing and the virtual model, and that could be in real time or it could be much slower than that. You also need to know the quality of the data that’s being synced, and how frequently that’s happening. That might all sound quite simple, but there are a lot of layers going on.

For manufacturing you can think of a James Bond-style diagram on the boardroom wall, where the live state of all the operations are there in one view. From there you can analyze processes and look at wear rates, and you can do predictive maintenance, process modeling, and optimization. You can also do staff training without letting people loose on the real machine. One of the uses of digital twins that I really like is software testing and simulation. For example, you can do a software update on the virtual machine first, validate it, make sure it doesn’t crash or break, and then download the software to the real thing.

We’re also now starting to think about a grander, scaled-up vision of digital twins. Factories are at the hub of very large supply chains, so why not have a digital twin of the whole supply chain? That might even include a machine that you’ve supplied to a customer to see how things are working at that level. Tesla does that with its cars.

Ricky Watts: I would add that, if I think about why digital twins exist and where they come into the picture, really it relates to data. As we’re starting to see more and more data coming out of factories, we need to be able to intelligently understand that data before it’s applied. What a digital twin is, to some extent, is a way of representing the data as it’s coming out of the machine—making some assessment of that data, and understanding it before applying an output or an outcome.
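The core of both definitions, a virtual state kept in sync with data from the real machine, can be reduced to a toy model (all names and the staleness check are illustrative assumptions):

```python
import time

class DigitalTwin:
    """Toy digital twin: mirrors reported machine state plus a sync timestamp."""

    def __init__(self):
        self.state = {}
        self.last_sync = None

    def sync(self, sensor_readings, now=None):
        """Update the virtual model from real-world sensor data."""
        self.state.update(sensor_readings)
        self.last_sync = time.time() if now is None else now

    def is_stale(self, max_age_s, now=None):
        """True if the twin hasn't synced recently enough to be trusted."""
        now = time.time() if now is None else now
        return self.last_sync is None or now - self.last_sync > max_age_s
```

As Garner notes, the quality and frequency of that synchronization is exactly what a real deployment has to get right; the `is_stale` check stands in for that concern here.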

“There are some very good uses that have really good payback times; #PredictiveMaintenance and #software testing are two of the key ones.” – Martin Garner, @CCSInsight via @insightdottech

What are some of the challenges that come with these new digital technologies?

Ricky Watts: One of the challenges is that manufacturers are not generally people who really understand AI and machine learning; we do have a skills gap, to some extent. So how do you implement something like this within the workforce that you’ve got today? Another thing, of course, is that this is all relatively new: How do you trust the data? How do you apply it? I think those are some of the challenges with AI and ML as well.

There are also small- and medium-sized businesses that represent a huge amount of the industrial footprint. So bringing scale into these solutions is important—scaling a digital twin for a car manufacturer versus one for somebody who makes screws for a car manufacturer. And it’s about making sure that we don’t just empower manufacturers; we also need to empower an ecosystem to be able to go out and service these models as well.

These are some of the things we’re certainly working on at Intel: how to simplify consuming some of these technologies; and building partnerships and ecosystems to bring in the infrastructure. There’s a lot that goes on behind the scenes to do that.

How can manufacturers successfully adopt digital twins?

Martin Garner: In the fullest version, digital twins can be quite a long-term project across both OT and IT. In that case it’s really not a quick fix. And in the current economic climate, some companies may hesitate to step into that bigger, long-term project. But using digital twins for short-term gain can be done. There are some very good uses that have really good payback times; predictive maintenance and software testing are two of the key ones. The trick is to make sure you get a properly architected system, one that is open enough to build up the ecosystem, plug in other machines, and expand toward the fuller vision.

Ricky Watts: We’re technologists; we see this huge potential. But right now I think concentrating on the needs of manufacturers is crucial. These manufacturers are very much focused on how they’re going to survive—probably in a very tough fiscal environment for the next few years. It really is around getting very tactical: what can I do with something today that’s going to give me a benefit tomorrow—not next week, not the week after, not next year.

Yes, predictive maintenance is a great example. If you’ve got a digital twin that represents some part of your machine, and that digital model tells you that your machine is going to fail, then you can fix something before it happens. If you do that, you keep your factory operating.

So, focus on something that’s going to add near-term value. That has two benefits. One, it’s solving a problem for today. And, two, it allows manufacturers to start learning themselves. It gives you a near-term outcome to address some of your near-term challenges; but also, long term, it allows you to expose your workforce to the use of data in those environments. And those opportunities in the near term could actually benefit us in the mid- to long term, with progress toward more expansive use of digital twins.
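As a toy version of the predictive-maintenance case Watts describes, a twin can extrapolate a trend in sensor data to estimate time to failure. The threshold and readings below are invented for illustration:

```python
def hours_to_failure(temps_per_hour, limit_c=90.0):
    """Linearly extrapolate a rising temperature trend to a failure threshold."""
    rate = (temps_per_hour[-1] - temps_per_hour[0]) / (len(temps_per_hour) - 1)
    if rate <= 0:
        return None  # no upward trend: no failure predicted
    return (limit_c - temps_per_hour[-1]) / rate

# A bearing warming 2 degrees C per hour, currently at 76 C, hits 90 C in
# 7 hours, leaving time to schedule a fix before the machine goes down.
```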

What skill sets do manufacturers need to have available for this process?

Ricky Watts: You can’t create data scientists en masse. So, what can you do to effectively turn, say, a process-control engineer in an oil and gas plant into a data scientist without needing 10 years of training? We’re creating tools and capabilities in the background to repurpose the skills of that process-control engineer. To say, “Use the skills you have right now to tell me what’s going on. And then we’ll apply that to the data models and the digital twin models in a very simplistic way.”

In a sense, I do think skills are going to be needed: How do you install compute? How do you look after it? But let’s not lose the benefit of those process-control engineers: They know the outcome. They know when something’s wrong in their manufacturing. We can translate that into compute code that sits inside the digital twin and our models and our AI, and that then recognizes the issue.

What are the tools and technologies needed to implement this approach?

Martin Garner: One of the bits people find hardest is just getting the data feeds organized and set up in a way so that the data is compatible. Different sensors and different machines present data in a whole variety of ways because they weren’t expected to have to be compatible. Factories are complicated things. They have a whole range of different machines and technologies—technologies at all levels of the technology stack, from connectivity all the way up to AI. That makes it very hard to do any kind of templating.

What that means is that for a larger system there might well need to be quite a lot of systems integration work to really get the value. I think you can start small and simple, start getting value from it in one small area, and then progressively build out from there. But it quickly becomes a bigger project as you scale it up.

Ricky Watts: One of the things we are doing is trying to create uniformity through factory standards, using universal languages such as OPC UA. That means using a universal language for the machines to talk to each other so that every machine understands every other machine, at least to some extent.

Martin Garner: And the great thing about that is that it turns a stove-piped, vertical, proprietary thing into much more of a horizontal-platform approach. That’s much better for building out scale across manufacturers, supply chains, different sectors, and so on. It’s just an all-around better approach.

How is Intel® working to make digital-twin concepts successful?

Ricky Watts: What Intel does extremely well—in addition to obviously building those wonderful compute platforms that run these things at the edge of the network—is look at scale and at creating standards. We’re working with industry partners as part of foundational efforts to create coalitions to identify how to create these standards. We’ve been doing that in the oil and gas industry around what they call the OPAF, the Open Process Automation Forum.

We’ve been looking at compute platforms—making sure that we’re bringing through the technologies that manufacturers are going to need. For example, they need to be synched on atomic clocks so that data on one platform is synched with time stamping to data on another platform. We’re enabling the software ecosystem to use these capabilities, making sure we validate with Linux operating systems, with Windows, with virtual machines, with Kubernetes—all these wonderful things that are basically software-infrastructure layers allowing us to run the applications.

And of course working with the end-user community to make sure we’re not creating Frankenstein’s monster here.

Any final takeaways to leave us with?

Martin Garner: The full vision of digital twins might include something like planetary-scale weather and geological systems that can help us better understand global warming and things like that. But against that there are lots of smaller companies that really don’t know where to start. So we need to make it easier for them and more worthwhile for them to invest in this concept.

That means really focusing on the short term: how to save money using digital twins in the next quarter, how to make them easier to buy and set up. The vision is one thing, but we need to pull along the mass market of people who might use this as well. We can’t just do one or the other; we need to do both.

Ricky Watts: I think Martin is absolutely spot on. Keep it small, keep it simple. We do have solutions that are available to start you on your journey, and we’re really very much focused on what your problem is today.

Related Content

To learn more about digital twins in manufacturing, listen to the podcast The Role of Digital Manufacturing Operations and read CCS Insight’s white paper on the topic. For the latest innovations from CCS Insight and Intel, follow them on Twitter at @ccsinsight and @Inteliot, and on LinkedIn at CCS-Insight and Intel-Internet-of-Things.

 

This article was edited by Erin Noble, copy editor.

AI Trucking Revolutionizes the Driving Industry

The American Trucking Associations (ATA) reported a shortage of 80,000 drivers in 2021 and expects that number to reach a historic high of 160,000 by 2030. And two years later, the numbers still look just as grim. The driver shortage bodes ill for a digital economy that relies on the ability to transport goods from retailers to consumers, however remote.

But how do you solve this growing problem? With the latest advancements in artificial intelligence and automation, technology may be the answer.

AI Trucking Offers a Driver Shortage Solution

We’ve heard about autonomous driving and self-driving cars mostly in the consumer context as of late, but this capability is also being applied to the trucking industry.

The benefits of autonomous trucks are obvious. Self-driving semis don’t stop to sleep, take bathroom breaks, or go on vacation. They’re capable of transporting cargo 99% of the way to a destination. Finally, AI trucks—when optimally designed—are safer for everyone. And that’s where the challenges come in.

Developing self-driving trucks that are as safe as or safer than human drivers comes at significant cost. In real dollars, the advanced sensor suite required to make safe self-driving a reality can cost tens of thousands of dollars. There are also many hidden costs in tailoring autonomy systems to the exact use case and deployment environment. Much of that work is fallout from the need to capture massive amounts of data and analyze it with AI inferencing algorithms in real time.

Optimizing Safety with AI Technology

The near-zero latency requirements of the use case mean that data analysis must happen locally so control subsystems can integrate information from the AI perception stack in time to act. The sheer amount of data and processing performance involved in these operations requires a full-blown server with GPU acceleration hardware.
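Some rough arithmetic shows why the analysis has to stay on the vehicle. All numbers below are illustrative assumptions: a control rate of 20 decisions per second and a typical mobile-network round trip are enough to rule out the cloud.

```python
decisions_per_second = 20                  # assumed autonomy control rate
budget_ms = 1000 / decisions_per_second    # 50 ms to sense, infer, and act
cloud_round_trip_ms = 100                  # optimistic cloud round trip (assumed)
local_inference_ms = 30                    # assumed on-board GPU inference time

cloud_feasible = cloud_round_trip_ms < budget_ms   # the network alone blows the budget
local_feasible = local_inference_ms < budget_ms    # edge compute fits with margin
```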

And remember, autonomous trucking is a rugged, mobile environment that may or may not be temperature controlled.

“The more situational awareness you’re looking for around the vehicle, the more sensors required. Thus the higher the compute load, which generally requires more power,” says Jim Shaw, Executive Vice President at Crystal Group, a leading designer of rugged computing hardware. “And you can imagine the thermal challenges that kind of hardware creates because it’s cranking pretty hard.”

Thermal Management: The Hidden Cost of Autonomy

For example, TuSimple, a San Diego-based autonomous trucking company, develops self-driving perception stacks exclusively for long-haul semi-trucks. To operate at SAE Level 4 autonomy, in which the vehicle drives itself without human intervention, it needed an onboard computer platform with at least two GPUs to meet the real-time data processing requirements. Mechanically, the system had to manage heat dissipated by processors and withstand the inherent shock and vibration of the use case.

“The more situational awareness you’re looking for around the #vehicle, the more #sensors required. Thus the higher the compute load, which generally requires more power.” – Jim Shaw, @CrystalGroup via @insightdottech

To achieve its compute and thermal requirements, the company turned to Crystal Group and its AVC5904 AI & Autonomy Solution, a custom rugged server prototype built on COTS components and designed to thermal-test profiles (Figure 1). The AVC5904 features dual Intel® Xeon® Scalable processors flanked by three GPU accelerators and 384 GB of DDR4 memory in a 19” rackmount form factor that was built to withstand the shock, vibration, and heat of an autonomous trucking environment.

Crystal Group’s AI computer solution designed to support autonomous trucking.
Figure 1. The AVC5904 AI & Autonomy AI trucking solution is designed to meet the data processing, thermal management, shock, and vibration requirements of autonomy. (Source: Intel)

The AVC5904’s Intel® Xeon SPs handle general system management, communications, and image pre- and post-processing, while GPU acceleration cards perform parallel processing of video, radar, Lidar point clouds, and other computationally intensive workloads. The system also supports eight or 12 removable SSD bays that offer more than 1 TB of onboard storage capacity for local data logging.
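That division of labor can be pictured as a producer/consumer pipeline: the CPU normalizes raw sensor frames and queues them, while an accelerator worker drains the queue. In this toy sketch a simple threshold stands in for real GPU inference, and the single-pixel "frames" are invented:

```python
import queue
import threading

frames = queue.Queue()
detections = []

def accelerator_worker():
    # Stand-in for GPU inference: flag "bright" frames above a threshold.
    while True:
        frame = frames.get()
        if frame is None:        # sentinel: no more frames
            break
        detections.append(frame > 0.5)

worker = threading.Thread(target=accelerator_worker)
worker.start()

for raw_pixel in (10, 200, 128):       # toy single-pixel "frames"
    frames.put(raw_pixel / 255.0)      # CPU-side preprocessing: normalize
frames.put(None)
worker.join()
```

In a real system the queue would carry whole camera, radar, and lidar frames, and several workers would run in parallel across the GPU cards.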

After developing four air-cooled AVC5904 prototypes, the team determined that more performance scalability would be required to future-proof the systems. For instance, an advanced thermal management solution would be needed to enable autonomous trucking operations in extreme environments like the desert Southwest.

The commercial version of the AVC5904 added support for a fourth GPU card slot, as well as liquid cooling mechanisms to manage all the heat generated by multiple CPUs and GPUs in the same rugged chassis.

“When you go past 150 W on a CPU and 175 W on a GPU, you are clearly in dangerous territory,” Shaw explains. “It’s right when the system hits about 1500 W that we start to get really worried unless we’re liquid cooled.”

“What we’ve had to start doing is machining our own water blocks and come up with a pump system inside a computer that didn’t leak and adequately fed enough flow rate into the system to reject that heat out into a radiator and fan system,” he continues. “We’ve worked pretty hard at coming up with the science associated with designing the cooling blocks so that you have a very thin space in between the GPU die and where the water is impinging.”

That work has paid off.

Autonomous Trucks That Keep Their Cool

Real-time data analysis on the next-generation AVC5904 allows autonomous driving systems like the TuSimple ADS to make up to 20 decisions per second when navigating roadways. And it delivers this performance day or night, rain or shine. This means no supply chain interruptions due to human error or other unfavorable working conditions that could cause delays.

The smart trucking system was put to the test late last year, when a self-driving truck successfully completed its first driverless runs on public roadways. These included an 80-mile, 1-hour-and-20-minute trip between Phoenix and Tucson.

And thanks to the design expertise of Crystal Group, it kept its cool the entire way.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on October 27, 2022.

Open RAN Hyperconvergence Brings More IoT Devices to Life

While analysts and researchers have had high hopes and expectations since the arrival of the Internet of Things, adoption is not growing as much or as fast as anticipated. For example, in 2010 researchers predicted 50 billion devices would be connected to the internet by 2020. Cisco said the same thing in 2011. But as of the end of last year, it was reported that there were in fact only 12.2 billion active endpoints.

It’s not surprising we didn’t reach those lofty goals. After all, how could we expect numbers like that without a coordinated, industry-wide effort around hyperconverged edge infrastructure?

Hyperconverged infrastructure refers to software-defined platforms built on commodity compute, storage, and networking hardware and running virtualized workloads. The concept was originally intended to reduce system cost and complexity in the data center, but its combination of flexible software and common hardware makes hyperconvergence a fit for the notoriously varying data capture, analysis, and transmission requirements of IoT use cases.

Despite this, there hasn’t been a concerted industry effort around developing and deploying hyperconverged infrastructure solutions to support IoT at the edge.

How IoT Stopped Standards in Their Tracks

The shortage of hyperconverged edge infrastructure isn’t for lack of trying. More IoT initiatives than you can count have tried to standardize end-to-end architectures. They’ve failed.

The challenge has been joining the IoT edge and data center in a single, unified continuum. Where the data center consists of largely homogeneous, IP-centric communications infrastructure, the edge is built on a diverse array of embedded technologies and connectivity for application-specific sensing and control. To realize the benefits of hyperconvergence, the entire network would have to be redesigned to support the protocols, latencies, and resource constraints of both domains.

Since the initial wave of IoT standardization efforts that targeted the edge, cloud, or both, newer standards have started addressing the boundaries between them. For instance, open radio access network (Open RAN) technologies enable creation of intelligent, virtualized RANs that run on interoperable COTS server hardware.

Even IoT diehards may be unfamiliar with Open RAN technology, as it’s being spearheaded by mobile network operators who want to lower costs and accelerate deployment of 5G base stations. But IoT pros should become acquainted with Open RANs because they have the potential to seamlessly bridge the edge and cloud.

Abstracting #NetworkManagement into software lets operators adapt to real-time traffic demands and support features like network slicing that deliver the varying qualities of service (QoS) required by #IoT use cases. @Supermicro_SMCI via @insightdottech

Open RAN Hardware, Flexible Software Unify Edge and Cloud

Instead of specialized, proprietary baseband and radio components, Open RANs run software-defined radios (SDRs) and virtual network functions (VNFs) on top of hardware with open interfaces. Since the hardware is decoupled from network control and routing tasks, modular Open RAN software can be hosted on server platforms available from multiple vendors. The whole stack can then be deployed in disaggregated, distributed 5G architectures.

Abstracting network management into software lets operators adapt to real-time traffic demands and support features like network slicing that deliver the varying qualities of service (QoS) required by IoT use cases.
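Network slicing can be pictured as mapping each traffic class to a virtual network with its own QoS guarantees. Here is a toy admission check; the slice names follow 5G convention, but the numbers and matching rule are invented for illustration:

```python
# Illustrative slice catalog: latency ceiling and bandwidth floor per slice.
SLICES = {
    "urllc": {"max_latency_ms": 5,   "guaranteed_mbps": 10},   # e.g., robot control
    "embb":  {"max_latency_ms": 50,  "guaranteed_mbps": 100},  # e.g., video analytics
    "mmtc":  {"max_latency_ms": 500, "guaranteed_mbps": 1},    # e.g., sensor telemetry
}

def pick_slice(needed_latency_ms, needed_mbps):
    """Return the first slice whose guarantees cover the request, else None."""
    for name, qos in SLICES.items():
        if (qos["max_latency_ms"] <= needed_latency_ms
                and qos["guaranteed_mbps"] >= needed_mbps):
            return name
    return None
```

Because the slices are defined in software, an operator can add, resize, or retire them without touching the underlying hardware.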

The abstraction also transforms the underlying server hardware into pools of disaggregated compute and storage resources that can host other workloads next to RAN functionality on the same physical infrastructure. This means public or private IoT networks can integrate software-defined technologies like multi-protocol label switching (MPLS), SD-WAN and secure access service edge (SASE) services, and edge computing in comprehensive, unified wireless edge access equipment.

“You talk about SD-WAN, you have SASE, you have Open RAN. A lot of these capabilities are reusable assets,” says Jeff Sharpe, Senior Director of 5G and Edge AI at Supermicro, an IT technology company. “Instead of putting maybe 20 systems at the edge, why can’t I put five or four or even one to do the capabilities of those additional systems? That’s the type of gear our strong engineering talent is building—more workloads, higher throughput, and higher availability (HA) and NEBS compliance. Not just for the telcos, but really for the IoT sectors as well, whether it’s smart cities or transportation or manufacturing. They all are looking for these heavy assets.”

The Supermicro SuperEdge server portfolio was designed to accommodate high-density, hyperconverged edge networking by integrating three Intel® Xeon® D Scalable processor-based nodes in a short-depth (16.9-inch), 2U rackmount form factor. The Xeons bring up to 32 power-efficient CPU cores per hot-swappable node to provide up to 50 percent greater compute density than application-optimized servers (Video 1).

Video 1. Super Micro SuperEdge Multi-Node servers pack three Intel® Xeon® D processor nodes in a short-depth, 2U rackmount with 50 percent more density than other servers. (Source: Super Micro Computer, Inc.)

This performance can be extended by adding GPU, VPU, or acceleration cards via three PCIe 4.0 slots available on each node. SuperEdge Multi-Node servers can also be paired with the Intel® Distribution of OpenVINO Toolkit to optimize visual inferencing on computer vision workloads running at the edge.

Hyperconverged IoT Solutions Are Here to Beat Predictions

While it wasn’t developed as an IoT technology, Open RAN is poised to scale more IoT deployments than any IoT standardization effort that’s come before.

The SuperEdge Multi-Node server portfolio is already shipping into the Open RAN community, while other Super Micro customers are leveraging it on-prem for high-end SD-WAN, SASE, and industrial edge inferencing. But the true value of Open RAN is its ability to combine these modular software workloads on a single open interface server like SuperEdge that addresses the full spectrum of IoT and edge networking use cases.

“It can be placed either in the telco networks complementing Open RAN and multi-access edge computing (MEC) capabilities or moved to the edge for industrial applications like private 5G because the multiple nodes enhance separating the core and RAN functions and the actual services on one single system,” Sharpe explains. “So they’re looking at more compute power. Number two, in a very compact form factor. Number three is how do I grow this platform?”

To meet the demand for scalability, Super Micro will be adding four-node SKUs to the SuperEdge Multi-Node portfolio in the near future.

“The market is telling us there is a need for hyperconvergence at the edge,” he adds.

For more information on SuperEdge products, visit supermicro.com/superedge or Super Micro IoT SuperServer.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Take AI for a Test Drive: Democratizing ML with MindsDB

Machine learning has become a crucial component of a data management strategy—particularly with the huge influx of data from IoT devices these days—but it can be challenging to sift through all that information. An additional challenge is the dearth of available machine learning (ML) experts. But there are businesses out there working to democratize sophisticated ML models, making it easier and more efficient for anyone to deploy them.

Machine learning solution provider MindsDB is one of those companies, and Erik Bovee, its Vice President of Business Development, wants to encourage new members of the ML community to get started. He talks to us about challenges of ML adoption, learning to trust the model, and bringing machine learning to the data rather than the other way around.

What is the state of machine learning adoption today?

The amount and complexity of data are growing really quickly, outpacing human analytics. And machine learning is hard, so finding the right people for the job is difficult. But in terms of the state of the market there are a couple of interesting angles. First, the state of the technology itself is amazing—just the progress made over the past five to 10 years is really astonishing—and cutting-edge machine learning models can solve crazy-hard, real-world problems. Look at what OpenAI has done with its GPT-3 large language models, which can produce human-like text. There’s also Midjourney, which, based on a few keywords, can produce really sophisticated, remarkable art.

From an implementation standpoint, though, I think the market has yet to benefit broadly from all of this. Even autonomous driving is still more or less in the pilot phase. Adapting these capabilities to consumer tech is a process, and all kinds of issues need to be tackled along the way. One is trust. Not just, “Can this autonomous car drive me safely?” But also, “How do I trust that this model is accurate? Can I put the fate of my business on this forecasting model?” So I think those are important aspects to getting people to implement machine learning more broadly.

But there are a few sectors where commercial rollout is moving pretty fast, and I think they’re good bellwethers for where the market is headed. Financial services is a good example—big banks, investment houses, hedge funds. The business advantage for things like forecasting and algorithmic trading is tremendously important to their margins, and they’ve got the budgets and a traditional approach to hiring around a good quant strategy. But a lot of that is about throwing money at the problem and solving these MLOps questions internally, which is not necessarily applicable to the broader market.

I also see a lot of progress in industrial use cases, especially in manufacturing. For example, taking tons of high-velocity sensor data and doing things like predictive maintenance: What’s going to happen down the line? When will this server overheat? I think those sectors, those market actors, are clearly maturing quickly.

“One of our goals is to give #DataScientists a broader tool set, and to save them a lot of time on cleanup and operational tasks, allowing them to really focus on core #MachineLearning.” – Erik Bovee, @MindsDB via @insightdottech

How does democratizing AI give business stakeholders more trust?

A lot of that starts with the data—really understanding your data, making sure there aren’t biases. Explainable AI has become an interesting subject over the past few years. One of the most powerful ways of getting business decision-makers on board and understanding exactly how the model operates is providing counterfactual explanations—that is, changing the data in subtle ways to get a different decision. That tells you what’s really triggering the decision-making or the forecasting on the model, and which columns or features are really important. 
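A counterfactual explanation answers the question "what is the smallest change that would flip the decision?" Here is a toy illustration with an invented two-feature approval rule; real counterfactual methods search over many features at once:

```python
def approve(income_k, debt_k):
    """Toy decision model (an invented rule, not a real credit model)."""
    return income_k - 2 * debt_k > 50

def counterfactual_income(income_k, debt_k, step=1, cap=1000):
    """Smallest income increase (same units) that flips a rejection to approval."""
    for delta in range(0, cap + 1, step):
        if approve(income_k + delta, debt_k):
            return delta
    return None

# An applicant rejected at income=100, debt=30 learns that +11 would flip
# the decision, which also reveals how heavily debt weighs in the rule.
```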

What are some of the machine learning challenges beyond skill set?

Skill set, I think, is a challenge that will diminish over time. What is often challenging is some of the simple things, some of the simple operational things in the short term on the implementation side. The data scientist tool set is often based on Python, which is arguably not very well adapted to data transformation. There’s often this bespoke Python code written by a data scientist—but what happens to it when your database tables change? It’s all reliant on this one engineer to update everything over time. So how do you do something that is efficient and repeatable, and also predictable in terms of cost and overhead over time? That’s something we’re trying to solve.

One of the theories behind our approach is to bring machine learning closer to the data, and to use existing tools like SQL, which is pretty well adapted to data transformation and manipulating data. Why not find a way to apply machine learning directly—via connection to your database—so you can use your existing tools and not have to build any new infrastructure? I think that’s a big pain point.

How does this benefit data scientists?

One of our goals is to give data scientists a broader tool set, and to save them a lot of time on cleanup and operational tasks, allowing them to really focus on core machine learning. You’ve got data sitting in the database, so, again, why not bring the machine learning models to the database? And we’re not consuming database resources either; you just connect MindsDB to it. We read from the database and then pipe machine learning predictions back to the database as tables, which can then be read just like any other tables you have. There’s no need to build a special Python application or connect to another service; it’s simply there. It cuts down considerably on the bespoke development, is very easy to maintain in the long term, and you can use the tools you already have.

How does this compare to traditional methods of deploying machine learning models?

Traditionally you would write a model using an existing framework, like TensorFlow or PyTorch, usually writing in Python. You would host it somewhere. And then you would have data you want to apply—maybe it’s in a data lake, or in Snowflake, or in MongoDB. You write pipelines to extract that data and transform it. You often have to do some cleaning, and then data transformations and encoding. The model would spit out some predictions, and then perhaps you’d have to pipe those back into another database, or feed them to an application that’s making decisions. That’s the way it’s been done in the past.
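That traditional flow (extract, transform, predict, write back) can be sketched end to end with stand-ins: SQLite plays the warehouse, and a naive trend line plays the model a real pipeline would load from TensorFlow or PyTorch. The table and figures are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (month INTEGER, units INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(1, 100), (2, 110), (3, 120)])

# 1. Extract: pull the raw table out of the database.
rows = conn.execute("SELECT month, units FROM sales ORDER BY month").fetchall()

# 2. Predict: naive linear trend from the last two points, standing in for
#    a hosted model served behind its own API.
(_, prev), (last_month, last) = rows[-2], rows[-1]
forecast = last + (last - prev)

# 3. Load: pipe the prediction back into the database by hand.
conn.execute("CREATE TABLE predictions (month INTEGER, units INTEGER)")
conn.execute("INSERT INTO predictions VALUES (?, ?)", (last_month + 1, forecast))
```

Every arrow in that flow is custom plumbing someone has to maintain, which is exactly the overhead the in-database approach aims to remove.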

MindsDB, on the other hand, has two components. One is a core suite of machine learning models adapted to different problem sets. MindsDB can look at your data, decide which model applies best, and choose that. This component also lets you bring your own model: if there’s something you particularly like, you can add it to the MindsDB ML core using a declarative framework.

The other piece of MindsDB is the database connector—a wrapper that sits around these ML models and provides a connection to whatever data source you have: a streaming broker, a data lake, or a SQL-based database. Then, using the native query language, you can tell MindsDB, “Read this data and train a predictor on this view or these tables or this selection of data.”
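A sketch of that “train a predictor” step, in MindsDB’s SQL-like statements. The engine, credentials, table, and column names are hypothetical, and statement names have varied between releases (older versions used CREATE PREDICTOR rather than CREATE MODEL):

```sql
-- Register an existing database as a data source
-- (engine and connection details are illustrative).
CREATE DATABASE example_db
WITH ENGINE = 'postgres',
PARAMETERS = {
  "host": "db.example.com",
  "port": 5432,
  "database": "sales",
  "user": "analyst",
  "password": "secret"
};

-- Train a predictor on a selection of that data; the PREDICT clause
-- names the target column.
CREATE MODEL mindsdb.conversion_model
FROM example_db
  (SELECT plan, usage_minutes, signup_channel, converted FROM trial_users)
PREDICT converted;
```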

What is the benefit of using MindsDB?

I think it’s important to make this really clear: We are not replacing anybody. For an internal machine learning engineer or a data scientist, MindsDB just saves a tremendous amount of the work that goes into data wrangling, cleaning, transforming, and encoding. Then they can really focus on the core models, on selecting the data they want to train from, and on building the best models. So the whole thing is about time saving for data scientists.

And then, in the longer term, if you connect this directly to your database, you don’t have to maintain a lot of the ML infrastructure. If your database tables change, you just change a little bit of SQL. You can set up your own retraining schema. It all saves a data scientist tons of time and gives them a richer tool set. That’s our goal.
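MindsDB exposes retraining as a statement and, in more recent versions, as a schedulable job. A hedged sketch, with hypothetical model and job names and version-dependent syntax:

```sql
-- One-off retrain after the underlying tables or data change.
RETRAIN mindsdb.conversion_model;

-- Recent MindsDB versions also support scheduled jobs, e.g. a nightly
-- retrain (exact syntax may differ by version).
CREATE JOB retrain_conversion_model (
  RETRAIN mindsdb.conversion_model
) EVERY 1 day;
```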

Can you provide some examples of use cases?

We really focus on business forecasting, often on time-series data. Imagine you’ve got something like a retail chain that has thousands of SKUs—thousands of product IDs across hundreds of retail shops. Maybe a certain SKU sells well in Wichita but doesn’t sell well in Detroit. How do you predict that? That’s a sticky problem to solve, but it also tends to be a very common type of data set for business forecasting.
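A multi-SKU, multi-store forecast like that maps naturally onto MindsDB’s time-series options. The sketch below uses hypothetical names; WINDOW sets how much history the model looks at, HORIZON how far ahead it forecasts, and syntax varies by version:

```sql
-- Train one model that learns a separate series per (sku, store) pair.
CREATE MODEL mindsdb.sales_forecaster
FROM example_db
  (SELECT saledate, sku, store, units_sold FROM sales_history)
PREDICT units_sold
ORDER BY saledate     -- the time column
GROUP BY sku, store   -- one series per SKU per store
WINDOW 12             -- look back 12 periods
HORIZON 4;            -- forecast 4 periods ahead

-- Query forecasts beyond the latest observed date for one SKU.
SELECT m.saledate, m.units_sold
FROM mindsdb.sales_forecaster AS m
JOIN example_db.sales_history AS t
WHERE t.saledate > LATEST AND t.sku = 'SKU-1042';
```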

One very typical use case we have is with a big cloud service provider, where we do customer-conversion prediction. It has a generous free-trial tier, and we can tell it with a very high degree of accuracy who’s likely to convert to a paying tier, and when. We’re also working with a large infrastructure company on network planning, capacity planning. We can predict fairly well where network traffic is going to go, where it’s going to be heavy and not, and where the company needs to add infrastructure.

One of our most enjoyable projects, one that’s really close to my heart, is working with a big e-sports franchise, building forecasting tools for coaching professional video game teams. For example, predicting what the opposing team is going to do, for internal scrimmages and training, or finding the best strategy in a given situation in MOBA games like League of Legends or Dota 2. It’s an exotic use case now, but I guarantee it’s one that’s going to grow in the future.

Where is the best place for a business to start with machine learning?

Super easy: Cloud.mindsdb.com. We have a free-trial tier, and it’s super easy to set up. Wherever your data’s living, you can simply plug MindsDB in and start to run some forecasting—do some testing and see how it works. You can take it for a test drive immediately. The other thing is to join our community. At MindsDB.com we’ve got a link to our community Slack and to GitHub, which is extremely active, and you can find support and tips there.

How are you working with Intel®, and what has been the value of that partnership?

Intel has been extremely supportive on a number of fronts. Obviously, Intel has a great hardware platform, and we have implemented its OpenVINO framework. We’ve made great performance gains that way. And, on top of that, Intel provides tons of technology and go-to-market opportunities.

Any final thoughts or key takeaways to leave us with?

Go test it out. MindsDB is actually pretty fun to play with—that’s how I got involved. If you take it for a test drive, provide feedback on the community Slack. We’re always looking for product improvements and people to join the community.

Related Content

To learn more about democratizing AI, listen to the podcast Machine Learning Simplified: With MindsDB. For the latest innovations from MindsDB, follow them on Twitter at @MindsDB and on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

The Future of AI in Healthcare Is Already Here

Over the past few years, the healthcare industry has seen major changes. Telehealth appointments have reached mainstream adoption, and more and more patients can get the care they need from the comfort of their home. Remote patient monitoring also has advanced, giving doctors deep insight into patient data.

And this is just the beginning of the new and innovative services and solutions both patients and healthcare providers can expect with remote care.

Healthcare Beyond the Office and Screen

Before the pandemic, telehealth accounted for only 5% of patient encounters. Today, that number is up to about 80%, according to Dr. Richard Bakalar, Chief Strategy Officer for ViTel Net, provider of scalable virtual care solutions.

What those numbers told Bakalar was that many providers didn’t have the technology, training, or support infrastructure in place to successfully deploy telehealth solutions. So ViTel Net decided to create a solution that would allow hospitals and clinics not only to adopt telehealth but to integrate it into their entire system. Instead of piecing services together, ViTel Net has created a platform that connects disparate systems and streamlines databases and workflows.

Using its cloud-based vCareCommand platform, along with a high-compute Intel® processor-based solution stack, the company can integrate with existing information systems and collect all necessary patient data in one place for continuity of care.

As #technology continues to advance, and providers and patients become more comfortable using it, we are going to see technology use cases expand. @intel via @insightdottech

AI and CV Improve Medical Diagnosis

Being able to have all the proper data in the right place enables providers to accurately treat their patients. And with advances in artificial intelligence and computer vision, they can also get more support when it comes to diagnosis.

As you can imagine, sifting through all types of patient data to detect anomalies takes doctors time and effort. Leveraging AI and computer vision, systems can automatically flag issues for faster diagnostics.

And going even further, companies such as Aireen, a provider of AI-based screening medical devices, use solutions like the Intel® Distribution of OpenVINO Toolkit to provide early diagnosis and treatment options.

For example, Aireen is currently helping physicians and optometrists screen patient retina images to enhance medical diagnostics. Its solution is trained on more than 1.5 million fundus camera images to provide 99% sensitivity when analyzing a retina image.

Another example is from the network hardware and edge server producer AEWIN Technologies Co., Ltd., which leverages OpenVINO to analyze patient low-dose computed tomography (LDCT) images and improve screening efficiency. This can help detect suspected illnesses such as cancer much earlier—allowing patients to get diagnosed and treated as soon as possible.

These advances are not meant to replace medical expertise by any means. Instead, they are designed to complement it and free up practitioners to develop better treatment plans.

AI Solves Medical Staffing Issues

Part of the reason solutions from Aireen and other companies are becoming more desirable is the global medical worker shortage. Today’s limited healthcare staff is overburdened, and patients are unhappy when they show up for a doctor’s appointment only to be met with a crowded office and a long wait.

Technology can help ease the burden on medical workers by providing AI-powered self-service kiosks to reduce wait times for patients and take pressure off staff.

For instance, Imedtac, a provider of IoMT technology solutions, has developed Smart Vitals Sign Station, a self-service alternative to manual vital-sign measurement. Traditionally, before a patient saw a doctor, a nurse would come take their vitals. But with Imedtac, self-service kiosks can measure a patient’s height, weight, temperature, heart rate, and blood pressure—not only allowing staff to focus on more important matters but also preventing manual errors. Intel processors are key to the solution’s ability to provide reliable and accurate services.

The Future of Healthcare

As technology continues to advance, and providers and patients become more comfortable using it, we are going to see technology use cases expand.

congatec AG, a leading supplier of embedded computer modules, already brings AI into new areas like the operating room by enabling robots to perform automated tasks such as suturing wounds.

From improving operations, diagnostics, and treatment plans to providing patients in rural areas with better care, the opportunities for the future of healthcare are practically endless.

Learn how you can be part of the change by checking out the Intel Edge AI Certification Program or taking the 30-Day Dev Challenge.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.