Virtual Collaboration in Ultrasound Advances Clinical Care

We often view healthcare technology through the lens of how it improves the patient-provider relationship—whether it’s timely access to a patient’s medical history or telehealth checkups. What we don’t see is the care provider collaboration, mentorship, and training that go on behind the scenes.

In the wake of the pandemic, healthcare systems have dealt with ongoing staffing and personnel shortages. Burnout has contributed to higher turnover. “This can lead to hospitals and clinics not having the right number of staff or the expertise for a variety of exam types,” says Eddie Henry, Global Marketing Director for Ultrasound Digital Solutions at GE Healthcare, a leading provider of healthcare technologies.

A healthcare system may have multiple locations across a city or state, which expands access but also poses challenges. This means collaboration and knowledge-sharing have become vital, especially among more experienced providers and those early in their careers. Newer clinicians may encounter difficult patient cases or exam types, and in these moments, they could greatly benefit from talking to and learning from experienced colleagues. But these experts aren’t necessarily co-located with the clinician on the front line. New technologies can solve these collaboration challenges by providing a seamless, secure way for technicians and other clinicians to connect—anywhere, anytime.

Connecting Clinicians and Delivering a Virtual Ultrasound Experience

The GE Healthcare Digital Expert Connect solution brings sonographers, physicians, and remote care providers together to learn from one another and better serve patients. The virtual, interactive, peer-to-peer collaboration platform drives precision health by empowering clinicians to connect with colleagues in real time to get their questions answered, improve clinical decision-making, and deliver more coordinated, personalized patient care.

Digital Expert Connect allows users of GE ultrasound equipment to connect virtually with peers within their network. The HIPAA-compliant collaboration tool allows clinicians to work together on a patient’s case—all from a tablet powered by Intel®.

They can share their screen so a colleague can see exactly what they see in an ultrasound. The platform’s live annotation features also allow remote clinicians to provide real-time feedback on a patient’s case. Through the interactive interface, providers can easily communicate with colleagues, ask for advice, and get an expert’s opinion about a particular exam or scan.

Henry says using Digital Expert Connect for Ultrasound can benefit clinicians in several ways. Sonographers working for a healthcare system spread across a wide geographic area can use the tool to get after-hours support from a lead sonographer or even a physician, who can walk them through a particular exam and answer their questions (Video 1).

https://www.youtube.com/watch?v=bnKlr1sYz90

Video 1. Virtual, real-time collaboration and training for ultrasound clinicians help improve clinical workflow. (Source: GE Healthcare)

“I’ve heard directly from sonographers saying, ‘Sometimes I feel like I’m in that room with a patient, but I feel alone,’ so having that ability to connect with someone really quickly and really discreetly just helps their confidence level,” Henry says.

The tool also allows sonographers to connect with OB/GYNs, radiologists, cardiologists, and other providers who have requested patient scans. Together they can ensure the exam captures the images physicians need, which ultimately saves time when physicians read or interpret these images post-exam.

Jay Hanrahan, Global Product Manager for Ultrasound Digital Ecosystems at GE Healthcare, says Digital Expert Connect also can help healthcare systems reduce exam errors and avoid re-scans.

“Let’s say a sonographer does a scan of a patient and moves it to an IT system where a radiologist can view it. They look at those results and say, ‘We didn’t get what we needed to see, so we’ll need to get the patient back in for a re-scan.’ If you can have that communication occur during the scan, you can make that correction in real time. The patient doesn’t have to come in a second time and the whole process is more efficient because of that,” Hanrahan says.

Driving Change Across the Healthcare Continuum

Digital Expert Connect can help clinicians who must support a large geographical area. Within such a system, there is typically a central hub where most of the clinical expertise is consolidated. At other sites, some of which are rural, clinicians may have complementary expertise and skills.

“The platform elevates the capabilities of those rural centers, because you’ve got these experts that are able to be there virtually,” says Hanrahan. “It also improves patient satisfaction and quality of life, because patients can get that care locally rather than having to go to the big city every two weeks, which can be very disruptive to family life.”

Digital Expert Connect is improving clinical workflows and making healthcare systems more efficient as they deal with ongoing staffing capacity challenges. Even more important, the solution drives collaboration and a virtual ultrasound experience that can lead to improved clinical outcomes and more personalized care. Clinicians shouldn’t have to travel miles to collaborate with their colleagues, and patients shouldn’t have to do the same to access the clinical expertise they need.

“We’re helping our customers, whether it’s a sonographer, radiologist, cardiologist, or someone in the women’s health space, do what’s absolutely right and best for their particular patient,” Hanrahan says.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Getting the Smart Factory to 20/20 Machine Vision

In the past couple of years, manufacturers have been under a lot of pressure to streamline their operations. One way to do that is through the transformation to a smart factory. That in itself can mean a lot of things, one being the use of camera systems for machine vision. Throw AI into the machine vision solution, and that solution could seem more intimidating than the problem, particularly without data scientists or AI developers on hand.

David Dewhirst, Vice President of Marketing at Mariner, a provider of technology solutions leveraging IoT, AI, and deep learning, breaks the situation down for us. David spotlights Mariner’s crucial area of expertise—harnessing machine vision and AI for quality assurance on the factory floor, because, as he points out, quality needs to be paid for one way or another. And paying for it up front, on the factory floor, sure beats paying for it in a reputation for shoddy product.

What does it mean to be a smart factory?

I like to draw a distinction between data and information. Data is just the inputs—and they’re everywhere. You need to somehow transform that data so that you can do useful things with it as information. When I’m thinking about a smart factory, or a connected factory, I’m thinking about all of the information that’s inherent on the factory floor. So how do you connect all the data together? And how do you process that data to get useful results out of it—to get information? It’s also about availing yourself of new sensors and technology to really advance the state of the art in manufacturing.

How are manufacturers doing on this journey to becoming smart factories?

In fact, there is a high project-failure rate in this space. But you have to do it anyway, because all of your competitors are doing it. If you don’t, you’re going to be left behind.

In my observation, when these projects fail it’s because manufacturers haven’t actually thought through what they’re trying to do. They know they need to do this cool thing, but they may not necessarily be doing it to solve a specific problem. But that’s how I think any smart-factory initiative should proceed. If you’re charged with digital transformation in your factory, find the use case that may not be the coolest thing that you can do, but that solves the biggest, hairiest problem. Our solution is very pointedly aimed at improving defect detection in the factory, so that’s one kind of use case.

It’s also important to find those use cases where you can sell your project both below and above—to the engineers who are impacted by it, but also to the decision-makers who cut the checks. And then you’ll be on a clear middle path towards smart factory. Clearly identifying your use case will help you sell it, and it will also help you solve it; if it’s a defect-detection problem, you can go looking for companies like Mariner that specialize in that. And from there, maybe you’ll identify other use cases that you can tackle later on.

The best way to start identifying these use cases is to talk to the people who have the problems. Talk to the people on the factory floor, the engineers—the boots on the ground. They will often be aware of day-to-day problems; they may even be suppressing problems, or just trying to ameliorate problems that they would love to have a solution for if you just asked them. Also talk to the people above you. Say to them, “What is costing us money?”

What’s the importance of machine vision to the smart factory?

When we talk about machine vision or camera systems or computer vision in the factory setting, those are typically cameras of a fixed type in a fixed position. They are very bespoke to the production line. Their placement, their lighting, and their setup are all designed to target the specific product on that production line. Their importance is in their ability to improve the quality control process.

There is the concept of total cost of quality, right? You’re going to spend money on your factory floor to have good quality that goes out the door. Or, if you don’t do that, you’re going to have a lot of returns, and you’re going to have warranty claims. Not spending money on the quality costs on the factory floor means you’re still going to spend money on quality costs; it’s just going to be in canceled contracts and bad brand association.

“If you’re charged with #digital transformation in your #factory, find the use case that may not be the coolest thing that you can do, but that solves the biggest, hairiest problem.” – David Dewhirst, @MarinerLLC via @insightdottech

The cheapest, highest-ROI way to pay this cost is to do the quality work on the factory floor. This isn’t a new concept. Ever since the first assembly line in Dearborn, Michigan, you’ve had guys at the end of the line looking at products and doing quality control. Machine vision systems, or camera systems, to help do that have been around for decades. They are useful because they can present a very consistent look from piece to piece and from part to part. It always looks the same because the camera, as I said before, is very fixed and situated.

How does AI help take this process to the next level?

For the past several decades, machine vision systems have been very good at solving binary problems. For example, is there a hole in this piece, or is there not a hole in this piece? That’s a binary thing: yes or no. It’s very easy using traditional programming, which relies on those true/false questions to come up with a true/false answer.

But what happens when your problem isn’t binary? What happens when, instead of just asking is it a hole or not a hole, you’re looking at, for example, is this an oil stain on fabric or is it a piece of lint? They’re both kind of fuzzy. Maybe the stain is a little bit fuzzier and the lint is less fuzzy, but you have to draw an arbitrary line between the fuzziness levels. Then what happens if there is lint that is a little bit fuzzier than where you drew the line? That gets called a defect. What happens if the stain is a little less fuzzy than you thought it would be? That will escape, because you might think that it’s lint. That’s where AI comes in.

With machine learning, with deep-learning techniques, you don’t need to draw an arbitrary line for a true/false answer. You can just train the AI with enough samples of stains and lint, and the AI will learn on its own what the difference is. AI can solve those kinds of challenges that weren’t really solvable before with just traditional programming, so you can oftentimes get your machine vision system, your camera system, to do what you hired it to do and what it has never really done a good job at.
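
A toy sketch makes the contrast concrete. Everything here is illustrative: the features, data, and threshold are invented for the example, not drawn from Mariner’s model.

```python
# Hand-coded threshold vs. a learned classifier (all values hypothetical).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each sample: [fuzziness, darkness] measured from an image patch.
X = np.array([[0.80, 0.90], [0.70, 0.85], [0.75, 0.20],
              [0.90, 0.15], [0.60, 0.95], [0.85, 0.25]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = oil stain (defect), 0 = lint

# Traditional programming: one arbitrary line on one feature.
def rule_based(sample):
    return 1 if sample[0] > 0.72 else 0  # fuzzy lint gets called a defect

# Machine learning: let the model learn the boundary from labeled samples.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

patch = [0.78, 0.30]  # fuzzy, but not dark: probably lint
print("rule says:", rule_based(patch))         # 1 -> false alarm
print("model says:", clf.predict([patch])[0])  # can learn darkness matters too
```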

What can manufacturers do if they have a lack of IT or AI support?

At Mariner, we use a tool. We ask your quality guys to take all the images you have of your product that show defects, upload them to the tool, and draw a little box around each defect. That lets your quality guys do what they’re good at—looking at these images and pointing out defects. We can take advantage of that and then do the part we’re good at, which is the data science. Our data scientists will build that AI model so you don’t need data science guys on the factory floor. We’ve got you on that.

Other companies, with other solutions in other spaces, will ship prebuilt models. Those may or may not work, depending on how closely those prebuilt models match your particular situation on the factory floor.

Where is all the data collection and processing happening—the edge or the cloud?

It depends. If you have 10,000 sensors all over your factory and you’re generating terabytes of information, you’re going to have to do it in the cloud. In machine vision there’s a little bit less reliance on the cloud. Mariner, with our Spyglass Visual Inspection solution—SVI—actually uses a hybrid solution. And that’s because, for the real-time defect-detection work, we don’t have time to make a round trip to the cloud. We’re doing our actual defect detection and the AI-inference work on the factory floor because then, even if you lose internet connection, your production isn’t shut down, your factory isn’t shut down.

We do also make use of the cloud. SVI is designed to run headless, without anybody standing around, but engineers can go back and review the decisions that the AI has made. If the AI got something wrong, the engineers can correct it. That will go up to the cloud. And if the AI model needs to be retrained, we can do that in the cloud because it doesn’t require real-time connectivity.
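
That edge-plus-cloud loop can be sketched in a few lines. This is purely illustrative and not Mariner’s SVI code; the Model and Cloud classes are hypothetical stand-ins.

```python
# Hypothetical sketch: infer locally, sync review data to the cloud
# opportunistically so production never blocks on connectivity.
import queue

class Model:
    def predict(self, frame):
        return "defect" if frame.get("anomaly_score", 0) > 0.5 else "ok"

class Cloud:
    def is_reachable(self):
        return True  # in the field, a real connectivity check
    def upload(self, record):
        print("sent to cloud for review/retraining:", record)

review_buffer = queue.Queue()  # holds results through connectivity loss

def inspect(frame, model, cloud):
    verdict = model.predict(frame)       # real-time inference on the floor
    review_buffer.put((frame, verdict))  # kept for engineers to review
    if cloud.is_reachable():             # drain the buffer when we can
        while not review_buffer.empty():
            cloud.upload(review_buffer.get())
    return verdict                       # the line acts on this immediately

print(inspect({"anomaly_score": 0.8}, Model(), Cloud()))
```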

How do you work with other partners in this ecosystem to make it all come together?

Number one, we don’t sell cameras; we are an AI software-as-a-service solution. If you need cameras, we work with a vision integrator that will get you the right camera. By and large, we don’t care what the camera is; we can make use of any camera you already have, or work with you to get one.

Partner number two, because we need some powerful processing capabilities, we work very closely with Intel® and Nvidia, both on the factory floor. We ship AI software as a service that, ironically, will arrive on a server box. We do that because then we can build those server boxes to do what we want. So we have Intel® Xeon® chips in there for really muscular, beefy processing, and we have Nvidia cards in there for extra GPU power.

We also partner on the cloud with Microsoft, typically in Azure. There are a lot of prebuilt services and other capabilities in Azure that we can make use of, and also be certain about security and speed and all those other important things.

Anything else you would like to add?

You may not need Mariner’s solution, but you will need to move forward with industrial IoT and AI. Actually, you may or may not need AI, given your use case, but you are going to need to have industrial IoT of some kind. Mainly I would encourage people to think about the use cases and the situations that are right for them. Find that hook, get in, and don’t be the last guy.

Related Content

To learn more about defect detection, read A Guaranteed Model for Machine Learning and listen to Product Defect Detection You Can Count On: With Mariner. For the latest innovations from Mariner, follow it on Twitter at @MarinerLLC and LinkedIn at Mariner.

 

This article was edited by Erin Noble, copy editor.

AI-Assisted Diagnostics: The Future of Cancer Detection

For cancer patients, getting a swift and accurate diagnosis is critical for their prognosis—and peace of mind. But when screening is done by endoscopy, the process is more complex. Typically, doctors look for lesions with specialized cameras, but limitations leave the door open to oversight and errors. In fact, about 25% of all colorectal neoplasms, abnormal growths that can become cancerous tumors, are missed by experts using this standard process.

Today, those same cameras are being enhanced with AI and machine learning technology, helping improve patient outcomes. The solution leverages capture cards typically used in video gaming to enable high-resolution graphics that can be displayed on screens in real time. The crisp images are paired with machine learning data that can help doctors identify tumors faster, speeding up a patient’s path to treatment.

“Endoscopy is the base case for AI analysis because doctors want to understand what is happening inside your body,” says Evelyn Tsai, Marketing Manager for Wincomm Corporation, a provider of industrial and medical grade computer products. “Previously, a doctor would need to have the patient come back to the hospital for more observation and tests. Through AI-assisted diagnostics, they can diagnose in real time whether or not something is a neoplasm.”

The AI-Powered Medical Panel PC for New Endoscopic System, called EndoBRAIN, was first deployed in a hospital in Japan. Its speed and accuracy can reduce the costs, time, and risks associated with biopsies and repeated colonoscopies, sparing patients the high level of discomfort that comes from enduring multiple procedures. EndoBRAIN can also improve patient diagnostics at remote rural healthcare facilities, which often lack the experienced professionals who tend to work in large urban hospitals.

“AI-assisted diagnostics don’t replace the doctor’s decision; they support that decision with a trained prediction model,” says Tsai. “Experienced doctors may be able to detect neoplasms more easily than younger doctors. Through this kind of system, doctors don’t need to rely on years of practice because the computer has learned the experience and can assist with a diagnosis.”

Development Partnership Leads to Innovation

To make such a technology operate properly, different disciplines must work together. For example, the medical specialists must work closely with the embedded technology experts to develop the solution.

In one such case, Wincomm engineers worked closely with the team at CYBERNET SYSTEMS CO., LTD., a subsidiary of FUJISOFT, to develop EndoBRAIN and EndoBRAIN-EYE—tools that deploy AI to detect and analyze colorectal polyps and other lesions during an endoscopy.

The system integrates Wincomm’s high-performance image-processing medical panel PC platform with an Olympus endoscope, which was key to getting the regulatory approvals required to bring it to market. In addition to compute performance and design flexibility, the panel PC’s antibacterial design helps protect against the spread of pathogens. And the system’s built-in safety features protect patients and safeguard data by preventing equipment damage from signal and voltage feedback loops.

“While #technology can’t replace the judgment that #healthcare professionals provide, #AI does have the power to impact and enhance the industry in important ways.” – Evelyn Tsai, @WincommCorp via @insightdottech

AI and Machine Learning Improve Diagnostic Accuracy

EndoBRAIN pairs the endoscopic microscope used to photograph the inside of the patient’s large intestine with AI software that determines the presence of colorectal cancer using image analysis technology. After “learning” from 60,000 medical records, the tool achieves a sensitivity of 96.9% and an accuracy of 98%, comparable to senior specialists. Because the AI automatically judges key parts of the enlarged image, diagnosis is fast, reducing patient discomfort as well as the scheduling and training burden placed on hospital staff.
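
For readers unfamiliar with those metrics, the short sketch below shows how sensitivity and accuracy fall out of a confusion matrix. The counts are invented for illustration and are not EndoBRAIN’s validation data.

```python
# Illustrative sensitivity/accuracy calculation (counts are made up).
tp, fn = 96, 3    # neoplasms correctly flagged vs. missed
tn, fp = 880, 21  # benign findings correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)                # share of real neoplasms caught
accuracy = (tp + tn) / (tp + tn + fp + fn)  # share of all calls that are right
print(f"sensitivity={sensitivity:.1%}, accuracy={accuracy:.1%}")
```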

AI inferencing at the edge—enabled by the Intel® OpenVINO toolkit—is key to providing the real-time results required for diagnosis.

“To make the medical imaging process perform smoothly, it must be low latency, almost real time,” says Tsai. “The doctors need to see the screen at the same time they watch the analyzed data. Doctors have different techniques, such as moving the imaging faster or slower. The system must also be able to fit the doctors’ behavior. Intel processors provide the powerful computing performance, high-resolution graphics, and an architecture that can support these requirements.”
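
As a rough illustration of what edge inferencing with the OpenVINO Python API can look like, this sketch loads a model in OpenVINO’s IR format and runs it frame by frame. The model file, input shape, and function names are placeholders, not Wincomm’s implementation.

```python
# Minimal OpenVINO inference sketch (model path and shapes are placeholders).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("polyp_detector.xml")  # hypothetical IR model
compiled = core.compile_model(model, "CPU")    # run locally on the Intel CPU
output = compiled.output(0)

def analyze(frame: np.ndarray):
    # frame: a preprocessed video frame, e.g. shape (1, 3, 512, 512)
    return compiled([frame])[output]  # low-latency, no cloud round trip
```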

The Future of AI in Healthcare

Wincomm is also helping systems integrators expand the Medical Edge AI solution to other use cases. The medical edge AI computer platform, powered by Intel® Core processors, can work with a range of servers, camera control units, and medical imaging solutions. In addition to endoscopies, the technology supports a wide range of use cases such as robotic surgeries, ultrasounds, ECGs, and x-rays.

While technology can’t replace the judgment and care that human healthcare professionals provide, AI does have the power to impact and enhance the industry in important ways, says Tsai. First, AI can assist in providing a more accurate diagnosis, such as with the endoscopy solution. Data analytics can be especially impactful for new physicians who haven’t yet collected years of experience.

Second, AI tools can enable intelligent monitoring solutions, which can collect data and save time and resources. For example, patients with a non-critical diagnosis can be cared for remotely, freeing up hospital beds for those with more serious conditions. In addition, AI can help nurses monitor patients from a central location, saving time while maintaining care. And AI solutions can speed up deployment of advanced healthcare to a wider geographic region, including rural facilities that may have a harder time competing for top care providers.

“Edge AI is going to dramatically shift the healthcare industry in the future,” says Tsai. “The cost, time and healthcare service quality will improve, and patients can get faster care.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

SIs Boost Customer Engagement with Retail Analytics

Knowing what their customers like, what they don’t like, and what piques their interest in real time could be invaluable for retailers, helping inform campaigns, displays, and content. But it seems like you’d need to be a mind reader to gather this information. While loyalty programs provide some data, brick-and-mortar retailers, and the systems integrators that serve them, have largely had to rely on intuition when selling to customers.

For years, online retailers have had an advantage over physical stores because they can gather analytics from visitors’ browsing activity. But AI technology is leveling the playing field. With the help of systems integrators, luxury brands use edge technology to gather and leverage analytics inside physical stores. These insights are used to tell stories that help improve the customer experience, adapt to the changing market, and grow sales. And the timing is perfect, as shoppers return to stores after the pandemic had sent many online.

“Despite one-click shopping, 90% of sales are still transacted in stores,” says Luigi Crudele, CEO of Wonderstore, a manufacturer of AI solutions for the retail industry. “That means retailers have plenty of information to gather about visitors to their locations. Stores are the main place to create high-level relationships with the consumer based on involvement.”

For example, one fashion brand used insights collected by Wonderstore to pinpoint the best day and time to launch its new accessory line. By understanding its customers’ behavior, it was able to increase accessory revenue by 30% the day the merchandise was available in the store.

Another luxury fashion brand used Wonderstore data analytics to uncover an average 20% variation in transactions between stores. It also found a 15% difference in conversion rates between its highest- and lowest-performing locations. Managers used this information to better understand regional customer profiles and the success rate of sales tactics.

AI Retail Technology Creates a Smart Store

To create a Wonderstore IoT retail analytics solution, Crudele drew upon his wealth of experience in storytelling and branding. His first company created 3D computer animation for video games, and he later launched an agency for developing interactive digital brand experiences, working with Italian brands like Tiscali.

“Brands spend millions of dollars in advertising campaigns and super-shop windows,” says Crudele. “We measure the effectiveness of those messages. With our solution, retailers can measure performance, understand conversion rates, and improve their investments.”

Wonderstore uses sensor technology to collect data about the in-store customer journey. Using computer vision technology, the solution can count, track, and analyze customers, collecting data that includes gender, age, and even emotions. It can measure dwell time, visitor flows, and browsing patterns. The data can be very granular, measuring performance of every single point of interest in the store, such as shop windows, entrances, shelves, fitting rooms, mirrors, and point of sale.

The company relies on the latest computer vision storytelling technology, with best-in-class IoT sensors from Intel®. To meet GDPR regulations, the sensors collect only anonymized data, which is sent to the cloud to be analyzed and transformed into actionable information. The solution is fully developed on Microsoft Azure architecture and cloud services. A storytelling data visualization platform makes the data immediately readable, allowing the retailer to make decisions in near-real time.
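
As a simplified illustration, a metric like dwell time can be derived from anonymized tracking events in just a few lines. The event format here is invented for the example and is not Wonderstore’s schema.

```python
# Illustrative dwell-time computation from anonymized track events.
from collections import defaultdict

# (track_id, zone, timestamp_seconds) events from the CV pipeline
events = [(1, "shop_window", 0), (1, "shop_window", 42),
          (2, "fitting_room", 10), (2, "fitting_room", 250)]

seen = defaultdict(list)
for track_id, zone, ts in events:
    seen[(track_id, zone)].append(ts)

# Dwell time per anonymous visitor per point of interest
for (track_id, zone), stamps in sorted(seen.items()):
    print(f"visitor {track_id} dwelled {max(stamps) - min(stamps)}s at {zone}")
```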

Retail Technology Partnerships Provide Scale

“Partnerships help create awareness and trust in the marketplace. Wonderstore chose Microsoft and Intel to align with their vision of the cloud and IoT services,” says Crudele. “Through these relationships, Wonderstore was able to quickly enter the market with a prototype, show the product to the customer, and build a business with the top luxury brands.”

Wonderstore also partners with Tech Data Corporation, an IoT solutions aggregator, which provides immediate scale to its business as well as awareness and trust in the marketplace.

“We are a startup and Tech Data is an international IT distributor,” says Crudele. “Tech Data transformed our team from two people selling our product to thousands of resellers across Europe. The company is helping us change our business model from delivering technology to delivering a solution. Our customers are no longer the retail brand but the partner. This paradigm is allowing us to scale up our business faster and more easily.”

By leveraging data, Wonderstore is helping create smart stores that can adapt to the customer and understand their needs from the moment they enter the store—not just at checkout.

“The store of the future will be a place where customers will have personalized services with creative brand experiences that entice them to buy,” says Crudele. “And it’s important for retailers to move from a sales model to a service model. Recognizing and understanding their customers with the same precision as Google Analytics helps create an experience that’s more than just a mere transaction.”

About TD SYNNEX

TD SYNNEX (NYSE: SNX) is a leading global distributor and solutions aggregator for the IT ecosystem. We’re an innovative partner helping more than 150,000 customers in 100+ countries to maximize the value of technology investments, demonstrate business outcomes, and unlock growth opportunities. Headquartered in Clearwater, Florida, and Fremont, California, TD SYNNEX’s 22,000 co-workers are dedicated to uniting compelling IT products, services, and solutions from 1,500+ best-in-class technology vendors. Our edge-to-cloud portfolio is anchored in some of the highest-growth technology segments, including cloud, cybersecurity, big data/analytics, IoT, mobility, and everything as a service. TD SYNNEX is committed to serving customers and communities, and we believe we can have a positive impact on our people and our planet, intentionally acting as a respected corporate citizen. We aspire to be a diverse and inclusive employer of choice for talent across the IT ecosystem. For more information, visit www.TDSYNNEX.com or follow us on Twitter, LinkedIn, Facebook, and Instagram.

The Full Scope of Deploying Industrial AI at the Edge

The smart-manufacturing space has been evolving rapidly over the past few years to keep up with the demands of the digital era. Edge computing is a big part of that digital-transformation journey. But edge isn’t a fixed destination; it’s part of the process.

But businesses may still need signposts on this journey. So who has the roadmaps? And how might those businesses know when they’ve actually arrived where they need to be? Blake Kerrigan, General Manager of the Global ThinkEDGE Business Group at Lenovo, a global leader in high-performance computing, and Jason Shepherd, Vice President of Ecosystem at ZEDEDA, a provider of IoT and edge-computing services, confirm that there’s no one-size-fits-all approach.

They discuss the orchestration of edge computing, bringing what’s best about the public cloud experience right to the edge, and the relationship between cloud and edge computing in the first place.

What does a digital transformation in the manufacturing space look like these days?

Blake Kerrigan: For the past 15 to 20 years most industrial customers have been focused on automation, but some of the biggest trends we’re seeing now are around computer vision and AI use cases. Other trends I’m seeing a lot in manufacturing and distribution are things like defect detection and safety applications.

The question is: How do you create efficiencies in the processes that already exist? We’re starting to see unique solutions, and they’re getting easier and easier for our customers to adopt.

How does this change the role of edge computing and the cloud?

Jason Shepherd: The only people who think that sending raw video directly to the cloud is a good idea are the ones who want to sell you internet connectivity. With computer vision, the whole point is to be able to look at live camera or video streams at the edge, where they can be continuously monitored, and intelligence can be built in to trigger human intervention if needed.

What’s the most successful way for manufacturers to navigate this edge journey?

Jason Shepherd: Edge is a continuum—from really constrained devices up through on-prem. Eventually you get to the cloud, and running workloads across that continuum is a balance of performance, cost, security, and latency concerns.

For manufacturers, first and foremost, it’s important to understand that it is a continuum, then to understand the different trade-offs. If you’re in a secure data center, it’s not the same as being on the shop floor—the security needs are different, for example. Navigating the landscape is the first problem.

When you get into actual deployment, always start with a use case, then do a POC. At this stage we see a lot of experimentation. But taking the lab experiment into the real world can be really challenging—camera angles change, lighting changes, contexts switch, etc.

The main thing is to break down the problem, and separate out infrastructure investment from investment in the application plane. Work with vendors that are architecting for flexibility, and evolve from there. Eventually it comes down to domain expertise with consistent infrastructure—consistent infrastructure like we’re doing with Lenovo and ZEDEDA and Intel®.

Blake Kerrigan: You can build something in a lab, and typically the last thing an engineer’s going to think about is the cost of developing or deploying the solution. The biggest inhibitors to scale are deployment, management of life cycle, and transitioning from one silicon to another over time.

The first step is understanding what kind of business outcome you want to drive, and then being conscious of what costs are associated with that outcome. To select the right hardware, the customer has to understand what the iterations of the program are throughout the solution’s life cycle. At Lenovo, we work with people on solution architecture and thinking about what type of resources they need today—and then how does that scale tomorrow, next week, and next year, and the next five years?

Tell us more about how to approach edge computing.

Jason Shepherd: There are a lot of special, purpose-built vertical solutions. With any new market, I always say, it goes vertical before it goes horizontal. It’s about domain knowledge.

What’s new is that everything is becoming software defined—where you abstract the applications from the infrastructure. In the manufacturing world, control systems have historically been very closed, which is a play to create stickiness for that control supplier. And, of course, there are implications around not being tightly controlled in terms of safety and process uptime.

What’s happening with edge is that we’re able to take public cloud elements—platform independence, cloud-native development, continuous delivery of software that’s always updating and innovating—and we’re able to shift those tools back to the edge. Basically, we’re taking the public cloud experience, and extending it right to the box on the shop floor.

What we do at ZEDEDA is that—while we help expand those tools from a management standpoint, from a security standpoint—we also have to account for the fact that even though the same principles are in play, it’s not happening in a physically secure data center. When you’re in a data center, you have a defined network perimeter; if you’re not, we have to assume that you’re deployed on untrusted networks. Also, when you’re outside of the data center, you have to assume that you’re going to lose connectivity to the cloud at times, and you’ve got to be able to withstand that. One-size-fits-all doesn’t come into play here.

So when should you use the cloud versus the edge?

Blake Kerrigan: The cloud means different things to different people. At Lenovo we feel that, ultimately, edge will essentially become an extension of the cloud. Edge computing is all about getting meaningful data to either store, or to do more intensive AI on; what we’re trying to do is to comb down the amount of uneventful or un-insightful data.

There are really two main things to consider. The first one is orchestration: How can I remotely create and orchestrate an environment where I can manage applications off the site? And the second one is—to make these models better over time—doing the initial training. Training is a big part of AI and computer vision, and one that’s woefully underestimated in terms of the amount of resources and time it takes. One of the most effective ways to do it is in collaboration in the cloud.

Let’s use defect detection as an example. Let’s say you have 50 different plants around the United States, and every single one of them has a defect-detection computer vision application running on the factory floor. Ultimately, you’ll want to share the training and knowledge you’ve acquired from one factory to another. And the only real, practical way to do that is going to be in the cloud.

So I do think there’s a place for the cloud when it comes to edge computing and, more specifically, AI at the edge—in the form of crunching big data that’s derived from edge-computed or edge-analyzed data. And then, in addition, training AI workloads to be redistributed back to the edge to become more efficient and more impactful and insightful to users.

Jason Shepherd: What we say at ZEDEDA is: The edge is the last cloud to build. It’s the fringes of what the cloud is. There are three buckets there. One is cloud centric, with lightweight edge computing and then a lot of heavy crunching in the cloud. A second one uses the power of the cloud to train models, and then deploys, say, inferencing models to the edge for local action. So it’s a cloud-supported, or cloud-assisted, model. And third, there’s an edge-centric model, where there might be training in the cloud, but all the heavy lifting on the data is happening on-prem. So, as Blake said, it’s not one-size-fits-all.

If manufacturers lack the proper IT expertise, what tools or technologies might help?

Jason Shepherd: Is a fair answer ZEDEDA?

It really is about finding the right tools, and then applying domain knowledge on top. There are a lot of people who have domain knowledge—the experts are the folks on the floor. But when you’re trying to deploy in the real world, you don’t usually have the staff that’s used to scripting and working in the data center space. Plus, the scale factor is a lot bigger. That’s why ZEDEDA exists: to just make that process easier and, again, to provide the public cloud experience all the way down into the field.

Where do Lenovo and its partnership with Intel® fit into this space?

Blake Kerrigan: The value of the relationship with Intel goes beyond just edge computing, and Intel is our biggest and strongest partner from a silicon perspective when it comes to edge computing. It holds a lot of legacy ground in the embedded space, the industrial PC space. But the other side of it is that Intel continues to be at the cutting edge. It continues to make investments in feature functions that are important at the edge—not just in data center, and not just in PC.

OpenVINO sits within the larger ecosystem of tools from Intel, but another one I really like—because it helps our customers get started quickly without having to send them four or five different machines—is Intel DevCloud. It lets those customers get started in a development environment that is essentially cloud based. They can control all sorts of different parameters, and then run applications and workloads in the environment. This creates efficiencies in terms of time to market or time to deployment.

At Lenovo we want to be able to create the most frictionless experience for a customer trying to deploy infrastructure at the edge, which is why Lenovo and ZEDEDA really complement each other in their alignment with Intel.

Jason Shepherd: ZEDEDA is basically a SaaS company—all software, but coming from the hardware space. And hardware is hard, so partnering with Lenovo makes things simpler. It’s important to work with people who are building reliable infrastructure.

Any final takeaways for the edge computing journey?

Blake Kerrigan: As Jason mentioned, hardware can be hard. I think a lot of people start there, but it’s not necessarily the best first step—though I say that coming from a hardware company. But at Lenovo we still do want to be a part of that first step on the journey. Reach out to our specialists and see how we can help you understand what the potential roadblocks are. And then we can also open you up to our ecosystem of partners—whether that’s Intel or ZEDEDA or others.

Bring us your problems, bring us your biggest and most difficult problems, and let us help you design, implement, deploy, and realize those insights and outcomes.

Jason Shepherd: It’s all about ecosystem. Invest in community so you can focus on more value.

This isn’t about free; it is about making money and all that. But it is also very much about partnership.

Related Content

To learn more about edge computing in manufacturing, listen to Manufacturers Unlock AI at the Edge: With Lenovo and ZEDEDA and read Cloud Native Brings Computer Vision to the Critical Edge. For the latest innovations from Lenovo and ZEDEDA, follow them on Twitter at @Lenovo and @ZededaEdge, and LinkedIn at Lenovo and Zededaedge.

 

This article was edited by Erin Noble, copy editor.

embedded world 2022 & COM-HPC: Exceeding Expectations

By almost every measure, the 2022 embedded world Conference & Exhibition was better than expected. Show attendance rebounded to more than 18,000 embedded technologists, a 50 percent increase over 2020. Another 3,900 participated in the show’s nascent digital and hybrid content. And on the vendor side, 720 exhibitors from 39 countries demonstrated their proficiency in IoT, edge AI, and functional safety electronics.

Events like embedded world are an opportunity to step away from engineering benches and take in new trends, techniques, and solutions shaping the next generation of intelligent, connected electronic systems. That was certainly the case this year, when a new, young cohort of technologists made known their intention to move the IoT from prototype to production.

In concert, the show’s booth demos displayed considerable momentum around the shift from prototyping hardware to production-ready solutions. And many of these were built around PICMG COM-HPC and 12th Gen Intel® Core processor-based devices (previously codenamed “Alder Lake”).

COM-HPC is a next-generation #computer-on-module standard that defines a series of higher-speed, higher-performance, and higher-power client- and server-size modules designed for next-generation #edge workloads. @embedded_world via @insightdottech

COM-HPC is for Real, and It’s Edge-Ready

As an open industry standard being released live on a big stage for the first time, it should come as no surprise that the new COM-HPC family of computer-on-module specifications was well represented at embedded world 2022. But “well represented” might be an understatement, as companies like ADLINK Technology, Advantech, Avnet Embedded, congatec, Kontron, SECO, and more made it a centerpiece of their show activities.

COM-HPC is a next-generation computer-on-module standard that defines a series of higher-speed, higher-performance, and higher-power client- and server-size modules designed for next-generation edge workloads.

Christian Eder, Chairman of PICMG’s COM-HPC working group and Director of Product Marketing at congatec AG, a leading supplier of embedded computer modules, was at the event to launch the new standard. He explains how the upgraded COM-HPC connector nearly doubles the pin count of previous-generation standards and supports interfaces like PCIe Gen 4, 5, and 6 and 25 GbE to deliver unprecedented bandwidth for edge systems (Video 1).

Video 1. congatec’s Christian Eder discusses the benefits of COM-HPC compared to COM Express. (Source: insight.tech)

This allows end users to fully utilize the unique performance of new Intel® Xeon® D and 12th Gen Core processors in a multi-vendor, off-the-shelf solution that safeguards technology investments.

Another benefit is that larger COM-HPC form factors dissipate more heat, which opens a path to higher-end processors like those just mentioned at the far edge (Video 2). This also happens to positively impact connectivity, as some of these processors—like select 12th Gen Core devices—support emerging technologies like Ethernet Time-Sensitive Networking (TSN), according to Kontron, a leader in embedded computing technology.

Video 2. Kontron’s Martin Unverdorben discusses how COM-HPC will accelerate the deployment of IT/OT infrastructure. (Source: insight.tech)

The Wide World of COMs

Avnet Embedded has also seen increased interest in COMs across the board since the pandemic pushed supply chain issues over the tipping point, as a COM-based approach can reduce development time and complexity while delivering most of the electronic components required by a system design from a single source. At the show, the company discussed how its in-house design expertise and partnership with Intel helps drive intelligence into end markets ranging from smart agriculture to electric vehicle charging (Video 3).

Video 3. Alex Wood, Marketing Director for Avnet Embedded, highlights the versatility and efficiency of the latest Intel® architecture-based COMs. (Source: insight.tech)

On the data logging front, ADLINK Technology enables the IoT edge with COM-HPC by taking advantage of the expanded pinout in systems like rugged industrial servers. At the show, the company explained how this flexibility—along with the longevity afforded by replacing COM-HPC modules but retaining application-specific carrier boards—combines with the company’s Edge IoT software stack to help companies quickly connect applications with cloud providers like AWS, Microsoft, or other server infrastructure (Video 4).

Video 4. ADLINK Global Account Director Marco Krause explains when businesses should make the move to COM-HPC. (Source: insight.tech)

SECO is another company working to push data from the edge into intelligent cloud-based applications using a flexible software platform, Clea, which SECO’s CPO Maurizio Caporali described at embedded world (Video 5).

Video 5. SECO discusses the importance of data to a business’ success. (Source: insight.tech)

To serve the diverse embedded market, the Clea Edge SDK interfaces data from a range of targets, including power-efficient Atom®, Celeron®, and Pentium® modules and high-performance 12th Gen Core and Xeon® D processor devices with integrated functional safety capabilities. In addition to COM-HPC, these solutions are also available in industry standards like SGeT’s SMARC and PICMG’s COM Express.

It’s important to note that the introduction of COM-HPC doesn’t mean the end is near for other standards like COM Express. Claus Giebert, Business Development Manager responsible for CPU-based COMs at embedded and automation solutions provider Advantech, explained why “COM Express, for many existing application areas, will remain the dominating form factor for many years to come” (Video 6). For those needing faster data transfers, more I/O, and higher performance, COM-HPC offers a path forward.

Video 6. While COM-HPC comes with many new benefits and features, Advantech says COM Express will still be relevant for a long time. (Source: insight.tech)

But there’s also the option of merging next-generation processors with current-generation standards. For example, during the exhibition, Prodrive Technologies introduced its 95 mm x 95 mm Atlas 12th Gen Series COM Express Compact Module based on i3, i5, or i7 P-series Core processors (Figure 1).

Figure 1. The Prodrive Technologies Atlas 12th Gen Series COM Express Modules blend the latest Intel® Core processor technology with the existing COM Express standard. (Source: Prodrive Technologies)

Alder Lake Everywhere

Of course, Intel technology on display at embedded world 2022 was not limited to just COMs. Elsewhere, companies showcased their latest solutions based on 12th Gen Core processors in all manner of form factors and systems, such as the Supermicro SYS-111AD-HN2 1U Embedded Systems and SYS-E300-13AD Mini-1U Super Servers.

The key takeaway is that, even more than in normal years, the return of embedded world injected a lot of newness into the embedded-technology sector. New 12th Gen Core processors could be found almost everywhere, and new industry-standard form factors capable of supporting them, like COM-HPC, were widely available. A sizeable number of new-to-the-industry developers were also in the crowd, preparing to usher in a new phase for connected embedded and IoT systems: mass commercial deployment.

Together, all this suggests big things are in store for the next iteration of edge computing.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

AI Self-Checkout Eases Retail Experience

It’s a Saturday afternoon and the grocery store is full of shoppers. Like any other busy store on the weekend, the checkout queues are long. The best option is to head to the self-checkout lane.

Why? Self-service registers make the checkout process easier, and the lines move faster. You can review the price of your items as you scan them. You can bag your groceries how you like. It feels like a mini victory during an otherwise routine errand.

But when it comes to non-packaged items, self-checkout can be a painstaking process. You must search for the item name on the display, or find the sticker on a piece of produce, and then manually enter it into the system. And what happens if the system doesn’t recognize the item, or you make a mistake in entering the code? You will have to wait for assistance.

That’s why retailers leverage innovative smart retail solutions to make self-checkout even better. AI, computer vision, and cloud technologies work together to solve the small nuisances shoppers encounter, making big improvements to the in-store shopping experience.

Thanks to companies like Wintec, a provider of smart retail solutions, this possibility can be a reality.

“The self-service checkout option is great. So many shoppers prefer it because it saves them time—they can handle everything themselves. But there are still flaws in the existing systems that deter customers from using that option,” says Lu Xuefeng, General Manager of the AI Division at Wintec.

AI-Powered Solutions Account for Human Error

The manual weighing process relies on staff to accurately identify many types of goods—from peaches to watermelons—and find the corresponding product code. Unskilled staff may encounter problems like inputting the wrong code or taking too long to find it. This leads to significant delays in the weighing and labeling process. During peak retail hours, customers may have to wait in especially long queues.

On top of that, using manual scales means spending time training staff initially and when new products are added to the store. As produce varies seasonally or by market, training requirements can be high. And as labor costs continue to rise, operational costs increase as well.

This aspect of the checkout process is hard for staff, but even harder for customers. Today, computer vision (CV) and AI enhance image recognition capabilities at the register—offering a better customer experience and lower costs for retailers.

Created to help #retail stores realize #ImageRecognition and automated weighing of prepackaged foods, the Wintec Smart Weighing solution identifies, weighs, and prints price tags for items. @Wintec_China via @insightdottech

Cloud-Edge-AI Architecture Enhances Checkout Systems

Created to help retail stores realize image recognition and automated weighing of prepackaged foods, the Wintec Smart Weighing solution identifies, weighs, and prints price tags for items. And through continued CV-enabled model training, it also provides high levels of detection accuracy.

The solution enhances image recognition through powerful edge-to-cloud computing. The software performs inference and real-time data processing at the edge, coordinated with cloud-based image model training using the YOLOv3 algorithm. YOLOv3 uses machine learning to achieve target recognition of fresh-food images by converting the detection task into a regression problem.

During the subsequent filtering process, the most appropriate bounding boxes are selected. By integrating object detection and object localization into a single one-stage network, YOLOv3 significantly increases detection speed.
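
That filtering step is typically non-maximum suppression (NMS): keep the highest-confidence box, drop any box that overlaps it too much, and repeat. Below is a minimal, generic version that illustrates the technique; it is not Wintec’s production code.

```python
# Minimal non-maximum suppression over [x1, y1, x2, y2] boxes.
import numpy as np

def iou(a, b):
    # intersection-over-union of two boxes
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, threshold=0.5):
    order = list(np.argsort(scores)[::-1])  # highest confidence first
    keep = []
    while order:
        best, order = order[0], order[1:]
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < threshold]
    return keep  # indices of the boxes to report
```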

“We created the solution to fix the hassle of manually inputting information for non-packaged items. But we weren’t focused on just that aspect,” says Xuefeng. “We wanted to create something that would continue to get smarter as it received more data. We also wanted to be able to take that data and make it available to retailers so they could gain useful insights from it for their overall operations.”

For instance, the solution integrates weighing functionality into the payment terminal, allowing customers to check out without visiting a weighing station. And eliminating the need for barcode readers helps reduce labor costs and expenses incurred from the maintenance and replacement of equipment.

Powerful Hardware and Software for Smart Retail Solutions

The Wintec Smart Weighing solution is built on Intel® processors, which provide powerful computing performance, safety, and reliability at low power consumption. These capabilities are essential for running edge AI workloads. The system also uses the Intel® OpenVINO toolkit to help optimize image recognition applications.

This is particularly useful in providing retailers with the ability to automate their entire business from the selection of goods to the weighing and checkout. Beyond that, real-time data processing provides actionable insights that help inform and facilitate business decisions in areas of operations beyond checkout.

The Future of Retail Automation

With transformative technologies and solutions available to the retail industry, the possibilities are almost endless.

As AI and automation technologies continue to evolve, they can be implemented across the retail market beyond supermarkets and grocery stores. While requirements may differ based on format and application scenarios, tailored AI solutions have huge potential.

Grocery stores and other retail businesses are undergoing digital transformation. AI self-checkout is just one way retailers can improve customer satisfaction, increase competitiveness, and grow profits.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on July 14, 2022.

Predictive Maintenance Edge AI Gets Railways Back on Track

If I had to pick one system to use predictive maintenance technology on until the end of time, it would be a train. Modern, AC-powered engines are complex systems that can cost upward of $2.3 million. By using predictive maintenance AI algorithms to monitor sensors, actuators, and control subsystems for anomalous behavior, rail operators can reduce costs, drive revenue growth, and maximize their return on investment.
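
To give a flavor of what that monitoring can look like, here is a minimal anomaly check over a stream of sensor readings using a rolling z-score. The technique is generic, and the window size, threshold, and data are hypothetical rather than any rail operator’s actual model.

```python
# Toy predictive-maintenance check: flag readings that drift far from
# the recent rolling mean. All values here are hypothetical.
from collections import deque
import statistics

WINDOW, Z_LIMIT = 50, 3.0
history = deque(maxlen=WINDOW)

def check(reading: float) -> bool:
    """Return True if the reading looks anomalous vs. recent history."""
    anomalous = False
    if len(history) >= 10:  # need a baseline before judging
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard against zero spread
        anomalous = abs(reading - mean) / stdev > Z_LIMIT
    history.append(reading)
    return anomalous

# e.g. traction-motor bearing temperatures in degrees C
for temp in [71.2, 70.8, 71.5, 70.9, 71.1, 71.0, 70.7, 71.3, 71.0, 70.9, 95.4]:
    if check(temp):
        print(f"anomaly at {temp} C: schedule an inspection")
```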

But to do predictive maintenance work effectively, data must be transportable across smart railway communications networks. For instance, operational data must be easily accessible and available for detecting anomalous train behavior, training machine learning models, and telling edge algorithms what to perform inference against. That’s why the EU-funded SCOTT (Secure Connected Trustable Things) project was launched with a focus on building trust in wireless solutions, like autonomous wireless networks (AWNs).

While the project addresses the use of IoT devices, 5G, and cloud computing across 15 industrial use cases—including cross-domain applications and heterogeneous environments—it uses a standardized, ISO 29182-compliant multi-domain reference architecture that can be tailored to the requirements of smart rail transport use cases. The building blocks defined in the SCOTT reference architecture help map out the wireless technology and service architecture in these applications.

Software-Defined Wide-Area Networks Streamline Smart Railway Systems

One thing that is not evident from the reference architecture is how much connectivity already exists in railway environments today that AWNs could leverage—especially on passenger trains.

Consider that these vehicles already support information systems, train control, and passenger productivity and entertainment networks via access to a variety of wired and wireless communications mediums. In fact, one could argue that the primary challenge facing the SCOTT program is too much connectivity. For instance, the sensitive operational data required for predictive maintenance algorithms to function must be isolated from other, less critical traffic.

Securing and isolating this traffic could be achieved by installing additional, separate networks. Of course, this adds cost, complexity, and additional equipment for train engineers to maintain. And that seems like the wrong direction for a project focused on predictive maintenance.

Another option for segmenting operational data communications in this environment would be implementing a secure virtual private network (VPN). But those too can get complex very quickly and become difficult to manage.

A third solution resides in the Goldilocks zone between cost and complexity in rail environments, while also adding capabilities you’d expect of a core network. Software-defined wide-area networks (SD-WANs) apply intelligent network architecture, deployment, and management to bring flexibility to edge AI technology and environments. They can run on top of off-the-shelf hardware, which allows users to extract the value of software intelligence while minimizing the cost and complexity of specialized networking hardware.

SD-WAN Meets the SCOTT Program

The SCOTT program required the flexibility and openness of an SD-WAN, given the variety of communications and data types flowing across its multiple networks. Of course, it also needed a platform to run its SD-WAN on that could withstand the rigors of rolling-stock environments and provide security robust enough to keep the multi-ton projectiles that are trains out of hackers’ hands.

This led program stakeholders to Klas, an international design engineering company that focuses on communications solutions for the network edge. They eventually selected the company’s onboard compute gateway TRX R6 for their wayside communications and control needs (Figure 1).

Figure 1. The TRX R6 streamlines predictive maintenance by supporting SD-WAN-like capabilities, facilitating the ability to bond multiple channels into a secure tunnel for secure onboard connectivity with the Network Operations Center over public internet networks. (Source: Klas)

The TRX R6 is an open, modular mobile compute and network gateway platform designed specifically for use on trains, light rail, and buses. It is a uniquely engineered combination of hardware and software running on a range of multicore x86 Intel® Core processors. In addition, it hosts the advanced KlasOS Keel operating system, which is specifically designed to optimize the power of the Intel x86 processors and Klas hardware.

The operating system comes with a lightweight hypervisor that allows applications to run on a single platform within virtual containers, so the operator can add features over time. KlasOS Keel also meets federal government security compliance requirements and hosts a variety of advanced features like SD-WAN.

Because the Intel processors come with built-in hardware virtualization, Klas engineers could isolate the SCOTT program’s network stacks and applications by use case within securely partitioned virtual machines running on different cores.
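At the process level, the same partitioning idea can be illustrated with a few lines of standard-library Python; this is a Linux-only sketch with hypothetical core IDs, analogous to (but a layer above) the hypervisor-level vCPU pinning Klas describes.

```python
# Minimal Linux-only sketch of core partitioning at the process level.
# The hypervisor pins whole VMs to cores; the principle is the same.
# Core IDs here are hypothetical.
import os

CRITICAL_CORES = {2, 3}  # reserved for latency-sensitive traffic handling

def run_pinned_worker() -> None:
    os.sched_setaffinity(0, CRITICAL_CORES)  # 0 means "this process"
    print(f"Worker restricted to cores {os.sched_getaffinity(0)}")
    # ... handle operational-data traffic here ...

if __name__ == "__main__":
    run_pinned_worker()
```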

KlasOS Keel was also critical to managing each VM to ensure the right resources were delivered at the right time for critical, latency-sensitive communications. As Mark Lambe, Senior Product Marketing Manager at Klas, explains, the TRX R6’s integrated hypervisor essentially allows multiple systems to run as virtual machines, delivering cost and space savings onboard for train operators.

With all this in place, Klas engineers went on to implement an SD-WAN that not only supported the new AWN requirements alongside existing networks but also offered a path for reducing the amount of networking hardware onboard trains in general.

In other words, after plugging in the appropriate connectivity modules to the primary TRX R6 host compute platform, the SD-WAN could route and prioritize the traffic from multiple heterogeneous networks as if it were traversing separate pieces of hardware. Therefore, operational data AWNs, passenger networking and information systems, and control networks that all require different levels of security and reliability could be managed according to their needs, thanks to the intelligence of the software and the openness of the hardware.
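In practice, that prioritization boils down to policy: classify each flow by the network it belongs to, then drain queues in priority order. A minimal sketch, with hypothetical class names and priority values:

```python
# Minimal sketch of SD-WAN-style traffic prioritization: flows from
# different onboard networks share one platform but are drained by
# policy. Class names and priority values are hypothetical.
import heapq
import itertools

PRIORITY = {"control": 0, "operational": 1, "passenger": 2}  # lower = first

queue = []
order = itertools.count()  # tie-breaker keeps FIFO order within a class

def enqueue(network_class: str, payload: bytes) -> None:
    heapq.heappush(queue, (PRIORITY[network_class], next(order), payload))

def drain() -> None:
    while queue:
        prio, _, payload = heapq.heappop(queue)
        print(f"sending (priority {prio}): {payload!r}")

enqueue("passenger", b"seatback video chunk")
enqueue("operational", b"axle-bearing vibration sample")
enqueue("control", b"brake status")
drain()  # control first, then operational, then passenger
```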

Real ROI on Railways Thanks to Predictive Maintenance

Beyond enabling a new network type cost-effectively, what the SCOTT project gained from its partnership with Klas was the ability to host third-party applications regardless of the connectivity they require, because that connectivity is already supported. These applications can exist within the framework that the SCOTT project has defined, giving train engineers and operators of other industrial equipment a straightforward path to deploying predictive maintenance at the network edge.

“When you standardize on a platform like the TRX R6, there is no need to forklift hardware when technology changes; you’re not having to retrain personnel on new hardware, operating systems, or management software,” says Arnold Allen III, Principal, IoT Industry Solution and Partner Development at Klas. “From a logistics perspective, your running spares and components are streamlined across all your vehicle platforms, which helps to reduce the cost of maintenance and ownership.”

If that’s not on the track to ROI, I don’t know what is.

 

This article was edited by Leila Escandar, Editorial Strategist for insight.tech.

This article was originally published on July 15, 2022.

Smart Stores Are on the Path to Net Zero

For businesses with large estates and many physical assets, managing energy use is a big deal. Letting assets go unrepaired, running the heat when no one is in the building, and keeping lighting on a strict, unchangeable schedule not only hurt the bottom line; they also increase a business’s environmental footprint.

Until recently, the best solutions for tracking and managing energy use produced reports that lagged by at least a day. To be fair, they looked pretty good compared with the next-best tool: paper forms.

But now, IIoT technology is evolving to let businesses with large properties easily connect to their assets to monitor—and even control—their energy expenditure. This helps them reduce waste and improve efficiency, with an eye toward eventually becoming exporters of energy.

A Roundabout Path to Sustainable Buildings

The journey to sustainability is not always a straight line. Take Hark Systems, a provider of energy analytics and IIoT solutions. The company was originally founded as an IoT platform to monitor environmental conditions like temperature and humidity for pharmaceutical companies. “But then we were asked to do all sorts of crazy things,” says Jordan Appleson, Hark’s founder and CEO. “We were asked, ‘Can you monitor radiation in uranium mines? Can you monitor air quality?’”

Because the platform they’d built could work with almost any kind of asset or sensor, Hark was able to expand beyond pharma into other industries.

Soon, grocery stores started approaching the company about their energy challenges. Supermarkets are packed with energy-hungry assets like refrigerators, generators, bakery ovens, and heating systems. “They were spending £10 or £12 million a year in additional fees when energy prices changed,” says Appleson. “They wanted a way to monitor and react to that in real time.” But Hark discovered these retailers weren’t just worried about cost—they were also concerned about their environmental footprint.

Metallica Brings the Proof

The Hark Platform uses Intel®-powered edge gateways running Hark software and connects them to energy meters, building management systems, and other physical assets (Figure 1).

Figure 1. The Hark Gateway runs on the edge and physically connects to assets. (Source: Hark Systems)

“Everything that we do today has been deployed on Intel at the edge in one capacity or another,” says Appleson. “You’ve got all these devices that speak all these different protocols, and you’re always going to need edge computing to cost-effectively bridge that gap. That’s what Intel-powered gateways do for us.”

Hark’s solution uses a machine learning model to forecast energy usage or suggest ideal actions based on historical information. Other data, such as information from occupancy sensors and weather forecasts, can come into play as well. For example, rather than setting a schedule where the lights go on at 9 a.m. and off at 5 p.m., a retail store can automatically lower the lights when there are few people in the store and turn them back up as occupancy rises.
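The control logic behind an example like that can be very small. Here is a minimal sketch of occupancy-aware lighting, with hypothetical dimming levels and thresholds standing in for the tuned policies a platform like Hark’s would configure:

```python
# Minimal sketch of occupancy-aware lighting instead of a fixed
# 9-to-5 schedule. Dimming levels and thresholds are hypothetical.
def lighting_level(occupancy: int, daylight_lux: float) -> int:
    """Return a dimming level (0-100%) from occupancy and ambient light."""
    if occupancy == 0:
        return 10   # safety minimum when the store is empty
    if daylight_lux > 500:
        return 50   # plenty of daylight, so dim the fixtures
    return 100      # busy store, low ambient light

for occupancy, lux in [(0, 50.0), (12, 800.0), (40, 120.0)]:
    print(f"{occupancy} people, {lux} lux -> {lighting_level(occupancy, lux)}%")
```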

Customers are often skeptical about how quickly all this can be set up. “I like to say, ‘Give us half an hour and a gateway, and we’ll get you up and running,’” Appleson says. He proved this to one customer by almost instantly controlling the lights in a massive building to flash on and off to the beat of a Metallica song.

Smart Stores, Smaller Footprint

Sainsbury’s is a big company with a big goal: It’s the second-largest supermarket chain in the UK, and it aims to reach Net Zero by 2040, meaning the total amount of energy it uses is zeroed out by the amount of renewable energy it generates on-site.

In 2018, Sainsbury’s contacted Hark looking for a solution that would help them track, monitor, and control the energy their assets were using. An initial monitoring session revealed the assets that were guilty of consuming the most energy—including a broken piece of equipment that was drawing much more power than it should have. Sainsbury’s signed on to have the company implement the Hark Platform solution on 20,000 assets in 40 asset groups, including lighting and refrigeration.

In the Sainsbury’s implementation, Hark Gateways retrieve more than 2 million readings per day in each store, and stream the data in real time to a cloud-based dashboard. The platform can detect anomalies and send out alerts of potential issues with mission-critical assets; in fact, it’s identified problems that have saved 4.5% of lighting costs so far.

The solution can control certain asset groups via the edge gateway. “When the store opening times change, our system automatically receives that information and deploys a new automated schedule to the edge based on preset profiles,” says Appleson. “And when we have energy price spikes in the winter, within 60 seconds of an automated notification coming into our system from the utility provider, our system will orchestrate a profile change to reduce the load in the building.”
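That flow is essentially event-driven control: a notification arrives, and a pre-agreed profile is pushed to the edge. A minimal sketch, with hypothetical profile contents, event format, and store IDs:

```python
# Minimal sketch of reacting to a utility price-spike notification by
# pushing a load-reduction profile to edge gateways. Profile contents,
# event format, and store IDs are hypothetical.
import time

PROFILES = {
    "normal":    {"lighting": 100, "hvac": 100, "refrigeration": 100},
    "load_shed": {"lighting": 60,  "hvac": 70,  "refrigeration": 100},
}

def push_profile_to_edge(store_id: str, profile: dict) -> None:
    # stand-in for the real deployment call to the store's gateway
    print(f"[{time.strftime('%X')}] store {store_id}: applying {profile}")

def on_utility_notification(event: dict) -> None:
    if event.get("type") == "price_spike":
        for store_id in event["affected_stores"]:
            push_profile_to_edge(store_id, PROFILES["load_shed"])

on_utility_notification(
    {"type": "price_spike", "affected_stores": ["leeds-01", "york-03"]}
)
```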

Sainsbury’s also needed to increase visibility into its assets. Before implementing the Hark solution, Sainsbury’s asset groups and industrial depots were “completely disparate,” says Appleson. Now the retailer can monitor and control everything from a central location.

Smart Buildings Will Have Power to Spare

Appleson suggests that in the future, businesses like Sainsbury’s will be able to become microgrids. Being a truly sustainable building means being able to generate your own power and sell it back to the grid, effectively getting carbon-free power.

Much of the technology needed for this to happen—such as solar panels, energy storage units, and platforms like Hark that connect and monitor these things—already exists. The electric network in the UK isn’t yet set up to track and bill for energy in this way—but when it is, Hark will be ready to lead the way to Net Zero for environmentally conscious businesses.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI at the Edge Spurs New Industrial Opportunities

The world is moving fast, and manufacturers must be able to keep up with the pace of change. Luckily, with technologies like AI, machine learning, computer vision, and edge computing, solution developers have the tools to help them do so.

And we are already seeing major results—both inside and outside the factory.

Product Defect Detection

For instance, smart manufacturers have started to deploy AI at the edge on the shop floor to reduce the risk of unplanned shutdowns and production issues. By automating the process with AI platforms like the Intel® OpenVINO Toolkit, image analysis can be performed directly on smart factory equipment, and workers can be quickly notified of issues as they happen. This reduces error-prone manual work and stops problems before they snowball.
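On the software side, the inference loop itself is compact. Here is a minimal sketch using the OpenVINO Python runtime (assuming a recent release where `import openvino` exposes `Core`); the model file, input shape, and class labels are hypothetical placeholders rather than any specific vendor’s model:

```python
# Minimal sketch of defect classification at the edge with the
# OpenVINO Python runtime. Model file, input shape, and labels are
# hypothetical placeholders.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("defect_classifier.xml")  # hypothetical IR model
compiled = core.compile_model(model, device_name="CPU")

# Stand-in for a preprocessed camera frame (N, C, H, W)
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
scores = compiled(frame)[compiled.output(0)]

if scores.argmax() == 1:  # assuming class 1 means "defective"
    print("Defect detected: alerting line operator")
```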

And tools like the Hitachi Industrial Edge Computer CE series with embedded AI model, which leverages OpenVINO, make adding these advanced capabilities to the factory simple. For example, the Hitachi system can detect product defects and equipment issues from multiple production lines and devices simultaneously, speeding up the time it takes to alert operators and address the problem.

Supply Chain Management Gets Streamlined

OpenVINO is also being used outside the factory to tackle supply chain issues.

With just-in-time manufacturing, staying ahead of supply and demand was already on manufacturers’ minds well before 2020, but when the pandemic hit, it put a lot of pressure on their digital transformation timelines. It is now clear that traditional approaches can no longer keep up with new supply chain demands.

People are shopping in bulk more than ever, and home improvement projects have skyrocketed. As a result, consumers are finding many shelves empty and items out of stock.

While manufacturers cannot consistently control the availability of raw materials, one thing they can handle is how their goods are delivered to stores. Instead of sending out goods as soon as possible, they can meet demands and save money by waiting until transport vehicles are at 100% capacity.

To do this, they need a combination of computer vision and AI that allows them to constantly monitor shipping containers, fill vehicles to capacity, and alert managers with operational status. With advanced AI algorithms and edge computing, manufacturers can get dock occupancy status and wrong-place detection, and can even deploy automated robots to handle, load, and unload freight.
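The dispatch decision at the end of that pipeline can be stated in a few lines. A minimal sketch, where the pallet counts and vehicle capacity are hypothetical stand-ins for a dock camera’s detection output:

```python
# Minimal sketch of fill-to-capacity dispatch: hold a shipment until
# the vision system counts a full trailer. Counts and capacity are
# hypothetical.
VEHICLE_CAPACITY = 26  # pallets per trailer

def should_dispatch(detected_pallets: int) -> bool:
    """Dispatch only when the trailer is fully loaded."""
    return detected_pallets >= VEHICLE_CAPACITY

# Counts as they might arrive from a dock camera's detection model
for count in [18, 22, 26]:
    if should_dispatch(count):
        print(f"{count}/{VEHICLE_CAPACITY} pallets: dispatch")
    else:
        print(f"{count}/{VEHICLE_CAPACITY} pallets: hold")
```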

To ensure these AI applications don’t compromise the smart factory’s performance, power budget, or cost, some manufacturers have turned to Avnet Embedded, a leader in embedded compute and software solutions. With its MSC C6C-TLU module, based on 11th Gen Intel® Core processors and paired with OpenVINO, applications can withstand rugged environments, meet performance demands, and process data in real time.

Automating Human Response

And, of course, manufacturers have to worry about the human factor when it comes to product and equipment inspection. For instance, factory workers typically have their own way of doing things, which can be problematic when you’re trying to keep production and quality consistent.

With computer vision, AI, and machine learning, manufacturers can now pair human behavior analysis with their assembly line machine metrics to understand the performance of each operator.

Vecow enables this capability with the Vecow Human Behavior Analysis solution, which includes the VHub AI Developer software platform. With the solution, developers can create AI models and applications with computer vision capabilities. Those apps can be connected to a factory’s existing cameras to collect and analyze data at the edge and detect inconsistencies. Vecow leverages Intel® Core i5 and i7 processors for computing power, and OpenVINO for AI model generation.

Deploying AI to Robots

Finally, robots equipped with AI and computer vision are also handling quality control these days. For instance, when robotic arc welders are used in high-production applications such as the automotive industry, it can be difficult for a manual inspector to visualize and catch every potential defect.

But when you add solutions like the Edge Arc Welding Defect Detection from ADLINK, a global manufacturer of edge computing solutions, AI and computer vision can be added to the process. Powered by ADLINK Edge IoT software, OpenVINO, Intel Core processors, and Intel® Movidius Myriad X VPUs, robotic arc welders can capture, process, analyze, and act on data before issues become bigger problems.

These are just a small sample of the possibilities AI and high-performance edge computing can offer smart manufacturers. For those struggling to deploy these AI capabilities and boost their industrial applications, check out the Intel Edge AI Certification Program or take the 30-Day Dev Challenge.

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.