Digital Displays Feed Information Appetite

Today’s customers crave data—about where their food was sourced, how to care for their clothing, where a brand stands on sustainability issues—the more details the better. A self-service digital display that shares what customers want, when they want it, is an ideal way to highlight essential background and build brand affinity.

Beyond Digital creates design experiences for digital spaces that can entertain, educate, and even prevent emergencies, such as in the case of food allergens. That was one of the ways the company helped bolster the reputation of a large quick-service restaurant (QSR), which first approached Beyond Digital a decade ago to help burnish its brand image.

The initial partnership entailed digitizing print materials so the restaurant could share its message quickly and dynamically through digital menu boards. But easing the ordering process was just one benefit. These digital signs also allowed the restaurant to engage customers by highlighting additional information of note, from in-store promotions to company philanthropic endeavors.

Soon after, the QSR expressed a critical need to inform customers of possible allergens in its products and proactively arm staff and guests with details. Beyond Digital saw an opportunity to develop an innovative and market-leading interactive digital signage solution.

“Some companies will merely comply with the legal minimum for sharing allergy facts, but our client wanted to provide something meaningful that can be regularly updated,” says Louise Richley, Beyond Digital’s managing director. “And the self-service nature of the interactive kiosks removes the burden from staff to act as experts, because customers can explore the material themselves.”

This success opened the QSR’s eyes to additional impactful ways it could create a connection and feed guests’ desire for a personal experience. Using the digital kiosks, the restaurant began sharing information more broadly with guests while they queued or waited for their food.

The restaurant was able to showcase various menu items and promotions and add more holistic nutritional details, which today’s consumers want as they move toward healthier, clean eating.

Digital displays also highlighted corporate responsibility commitments and charitable work, along with other general brand news. They even created a recruitment section where guests could find out what positions were available and how to apply.

Even when no one is interacting with the digital kiosk, animated content scrolls through to attract attention. “While guests are waiting for their food, the restaurant can promote menu items or share how it sources healthy products,” Richley says. “The information also provides a talking point for the staff, making it easier for them to engage with the customer.”


Over time the QSR deployed several self-service displays alongside multiple digital technologies across more than 2,000 sites throughout the U.K., reaching diners from High Street to retail parks, service stations, and transport hubs. With more than 6,500 displays, the QSR has modernized its brand image and delivers great customer experiences.

A Broad Roster of Capabilities

While the digital kiosk has proven popular for this particular QSR, Beyond Digital solutions can be deployed worldwide, anywhere customers want to know more about their food and other products.

It’s just one example of the types of creative full-service solutions the company can design. Its team has the expertise to manage all aspects of a project, from strategic development and content design to installation, software management, and the retail analytics that confirm impact.

“We do such diverse things that we can work with any customer who wants to use technology to communicate or share a solution,” Richley says. “Whatever the industry or client, we can develop a holistic solution that will make sure their message attracts attention. That’s the value of digital signage. It’s not one size fits all; it’s one size fits what you need.”

Retail Analytics Improve Engagement

A key advantage to the solutions Beyond Digital creates is that they can be dynamic and responsive to changing attitudes or preferences. By tracking retail analytics, the restaurant can measure what interests people most to add new capabilities or tweak what’s there to make it more compelling or easier to find. For example, it can track how many times people pressed a particular button and where they navigated after that.
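As a hypothetical illustration (not Beyond Digital’s actual implementation), a kiosk’s interaction log can be reduced to per-button press counts and tallies of where users navigated next:

```python
from collections import Counter

def summarize_interactions(events):
    """Tally kiosk touch events: how often each button was pressed,
    and where users navigated immediately afterward.

    `events` is a list of (button, next_page) tuples, e.g. parsed from
    a kiosk's interaction log (the format here is illustrative).
    """
    presses = Counter(button for button, _ in events)
    follow_ups = Counter(events)  # counts each (button, next_page) pair
    return presses, follow_ups

# Example log: three guests tap "nutrition"; two continue to "ingredients"
log = [("nutrition", "ingredients"),
       ("nutrition", "ingredients"),
       ("nutrition", "allergens"),
       ("news", "home")]
presses, follow_ups = summarize_interactions(log)
print(presses["nutrition"])                      # 3
print(follow_ups[("nutrition", "ingredients")])  # 2
```

Aggregated over thousands of sessions, counts like these show which content draws interest and which headlines fail to earn a tap.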

“We can see that people are interacting with nutritional information and are very interested in clean ingredients but might need a different headline to click on local restaurant news,” Richley says.

With that continuous feedback loop, the restaurant receives ongoing reports that help inform the content strategy and make the pages more intuitive and clear. “We find it much more meaningful to move from real-time analytics to these larger insights, where we can see where thousands of people are clicking,” Richley adds.

Fueled by Reliable Technology

Beyond Digital uses Intel® processor-based technology, which offers the reliability the agency requires to provide a premier experience as an integrator. “We use a combination of various software solutions in our installations, so we need to be incredibly confident in the quality of the underlying manufacturers and the fact there is a deep bench of support if we need it,” Richley says.

Digital signage is becoming imperative as a way for brands to engage with customers, but just mounting a screen on the wall will not yield the results you need, says Richley: “A seasoned digital integrator will bring a comprehensive blend of strategic planning, creativity, software knowledge, and technical expertise to execute a successful campaign that will provide a solid return on investment.”

Remote Monitoring: The Vision for Patient Safety

At a Connecticut hospital, one staff member simultaneously monitors a dozen patients in real time—a job that used to take 12 patient sitters. As a result, the facility reduced falls, addressed staffing shortfalls, decreased costs, and met compliance requirements. While it may sound like a case of cloning or teleportation, the New England hospital was able to transform how it delivers care by using new virtual patient observation technology.

The facility deployed NOVA—the Nursing Observation and Virtual Assistant—developed by national solution and service provider Wachter, Inc., to reduce the need for patient sitters and to monitor for patient falls.

And when the pandemic hit, they were able to immediately shift and use NOVA in their fight against COVID-19.

At the time, hospitals were struggling to find PPE to protect their healthcare providers. Using NOVA, nurses could monitor patients and ventilators remotely, reducing exposure, slowing the spread of the infection, and decreasing the need to don new protective gear every time they entered an isolation patient’s room.

The solution is also in use at Montage Health. For the California-based healthcare provider, cost was a driving factor.

“They had more one-to-one sitter requests than they were able to grant, and they wanted to utilize their staff and technology to the best of their ability,” says Ashley Kuruvilla, MSN, APRN, FNP-C, Business Development Manager and Clinical Support Specialist for NOVA Health. “During the four-month period after it was initiated, NOVA reduced the number of hours for those one-to-one sitters, saving money and resources.”

Patient satisfaction improved as well, and in unique ways. “One monitor technician would sing to the patients,” says Kuruvilla. “She would talk to them and play games with them. She became an exemplary model for other technicians. One patient stated they didn’t want to leave the hospital because of her, which is remarkable.”

Health Tech Adoption

While the value of virtual observation is clear, hospitals have been traditionally slow to implement technology that can monitor patients, instead employing patient sitters, who are costly, at risk, and in high demand.

“Financial concerns are one of the issues and fear of efficacy is another,” says Kuruvilla. “There can be a lot of uncertainty and hesitancy with using technology versus an actual person. Many aren’t completely confident that technology would be able to monitor several patients at once.”

Lack of time has been another hindrance, adds Devin Johnson, National Account Manager for the NOVA team. “A lot of times the clinical staff will think, ‘This is a big project that’s going to take up a lot of my time and bandwidth,’” he says. “We’ve tried to make deployment and training as simple as possible so that the nursing staff is more inclined to embrace the change.”

But COVID accelerated the adoption of new tools, such as NOVA, as healthcare providers recognized the need for remote care options. Once in place, other valuable uses quickly surfaced.

“It can facilitate virtual rounds and physician consultation,” says Kuruvilla. “Most commonly, it’s being used anywhere a hospital would typically have a one-to-one sitter, such as Pediatric environments, the E.R., Neurological, Rehab and Behavioral Health units.”


A Look Inside Remote Healthcare

NOVA, which is available as a mobile unit, portable wall mount, or fixed asset, uses high-definition cameras with 360-degree views as well as pan, tilt, zoom, and night vision capabilities. Two-way audio and video communication with a multi-language translation feature lets the remote caretaker communicate directly with each patient in their native language or notify nurses if aid is needed (Video 1).

Video 1. Video observation technology helps a hospital staff member monitor up to a dozen patients. (Source: Wachter)

The technician can visualize the room and identify any behaviors that require intervention or risk reduction, such as a patient who is pulling out tubing or trying to get out of bed. Nursing administrators can use NOVA to see how they can better improve their services.

The platform uses Intel® Core and Intel® Xeon® processors, which allow the solution to scale quickly. “Intel brings credibility right away, which makes the conversations about NOVA that much easier,” says Johnson. “We’re able to reach a broader audience as a result.”

Safety Is a Two-Way Street

While NOVA protects patients from harm, it also protects the hospital staff. “We don’t want a nurse to enter a room and not know what they’re walking into,” says Johnson. “This is primarily the case in emergency rooms as well as behavioral health units.”

The system tracks redirection events through NOVA’s Intervention Event Tracker, so the nursing staff can be sure the solution is appropriate for each individual patient—not everyone is a good candidate for monitoring, and the tracker can reveal when someone has been redirected multiple times. While the system doesn’t predict at-risk behavior, its Sound Intelligence feature provides aggression detection analytics, such as yelling. Specific decibel levels trigger an alert, and the technician can look into the room or call security.
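The decibel-trigger idea can be sketched in a few lines. This is an illustrative threshold check only, not NOVA’s actual logic, and the 85 dB threshold is an assumed value:

```python
def check_audio_alerts(decibel_samples, threshold_db=85.0):
    """Return (index, level) for every audio sample at or above the
    configured decibel threshold, so a monitor technician can look
    into the room. The threshold value here is illustrative.
    """
    return [(i, db) for i, db in enumerate(decibel_samples)
            if db >= threshold_db]

# Five audio samples from a room microphone; two exceed the threshold
samples = [60.2, 72.5, 91.0, 88.3, 64.0]
alerts = check_audio_alerts(samples)
print(alerts)  # [(2, 91.0), (3, 88.3)]
```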

Being able to monitor patients 24×7 gives peace of mind to nursing staff, but it’s just the start.

“The future of virtual patient observation is the addition of more AI features, plain and simple,” Johnson says. “Any new technologies that can enhance patient care and alleviate the load placed on medical professionals will only help make the future brighter. As Wachter goes deeper into integrations, we will allow caregivers to provide a more holistic patient experience.”

Next-Gen MES Connect OT and IT in the Smart Factory

Manufacturing execution systems (MES) have been in use for ages. Traditionally, these resource-driven, monolithic systems support specific functions such as human resources, quality assurance, and supply chain management. But today’s MES have a lot more ground to cover. On the path to digital transformation, the smart factory is generating reams of data on the operational technology (OT) side. When harnessed, that information can power applications such as predictive maintenance and quality control—for more agile and streamlined processes.

With a new world of data emerging, manufacturers need MES solutions that offer a broad range of functionality and integrate easily with other enterprise systems, such as ERP and warehouse management.

To gain more value from these new streams of operational data, MES are evolving to marry OT and IT data in fully transparent systems. Manufacturers benefit from having an open software platform that allows them to use third-party apps and add capabilities such as augmented reality and predictive maintenance.

With this level of flexibility, they can tailor an MES to their needs by choosing—or creating—apps that solve a wide variety of challenges, from machine monitoring to production control and order management.

Transparency Yields Smart-Factory Efficiency

A multifunctional MES can greatly reduce material waste, save energy, and increase productivity. For example, a Germany-based manufacturer of sanitary peripherals for water taps needed to coordinate its entire process from casting and polishing the metal taps to assembling them with additional plastic parts.

The company produces every component and manages several production sites, so it must coordinate not only the steps of production and packaging but also manage supplies and inventory. Production sites are scattered all over Germany and beyond, and have to be synchronized to streamline production.

Accurate planning across all processes, so everything flowed together in the right timeframe, was an essential requirement. To make it a reality, the manufacturer worked with MPDV Mikrolab GmbH, a leader in IT systems for manufacturers, to deploy MES HYDRA X.

The solution provides a variety of manufacturing applications (mApps) to track each production step and the time it takes to complete them (Video 1). For example, managers use an app to create digital twin models of the production process, while shop floor workers have access to apps that provide operator guidance and assembly control. There’s also an app specifically for tracking and tracing materials while they’re in transit, another for managing inventory in each facility or warehouse, and one that monitors the packaging process.

Video 1. mApps can master specific smart-factory use cases. (Source: MPDV Mikrolab)

Teams enter and combine data across the system, which is accessible at several levels of management. “Workers on the shop floor, supervisors, controllers, data analysts, and HR professionals all have easy access through a mobile app or web-based interface,” says Markus Diesner, Marketing Specialist Products at MPDV. Data input comes from both humans and digital sensors.

With that data, they can estimate when an order will be finished.

“Everything is in one plan and one system,” says Diesner. The solution also tracks quality throughout the process and triggers production of more raw materials at the beginning of the process to compensate for any defective pieces detected along the way. “It’s all about efficiency—and you can only attain efficiency when you have transparency between systems and processes,” Diesner says.


Full-Service Integration and Support

Manufacturers can integrate the solution with their existing systems, such as the ERP and warehouse management system (WMS), creating greater transparency that helps drive efficiency. Adding interfaces allows them to connect disparate systems to retrieve data and send directions.

“For example, a company can connect directly to the WMS and tell it that the process will need material, or that there is excess material that must be transported to the stock,” Diesner says. “This can be done without going through the ERP system, saving time and effort.”

The HYDRA X platform is certified to run on Intel® processor-based gateways such as the Dell Technologies Edge Gateway 5000 Series. “We benefit from Intel’s broad product portfolio for servers, clients, and edge devices,” says Diesner. “And our customers profit from the stability of the whole system.”

Along with its software platform, MPDV provides upfront consulting, deployment, and ongoing support services to help companies craft their own version of Industry 4.0.

As manufacturing facilities become smart factories, MES will evolve alongside them. “As these systems continue to develop, the interoperability of apps will become more and more important,” Diesner says. “Companies will have the freedom to customize the system to fit their needs even more than they can today, picking and choosing aspects from different developers. The philosophy of getting an all-in-one solution from a single provider will become a thing of the past.”

Benchmarks Influence AI Server Design

Processor manufacturers are racing to entrench themselves in the growing AI market. As a result, a slew of computing products have been introduced or reinvented to serve AI use cases. These include well-known processing options like CPUs and GPUs, and more novel solutions like vision processing units (VPUs).

But as these devices make their way to deployment in real-world systems, the datasheet performance specifications become essentially meaningless. What matters to design engineers is how processors fare in particular use cases. They want to know about features and optimizations that can boost efficiency, reduce cost, lower power consumption, and enable new capabilities.

But evaluating multiple solutions against these parameters can take a lot of time.

Engineers at ComBox Technology, an IT and neural networks systems integrator, made time to benchmark several computing solutions before designing the company’s AI server. They measured different options based on cost per frames per second (FPS) of executing AI algorithms—a key measurement for calculating ROI in computer vision systems.

“What we found is that the Intel® NUC8i5BEK, based on 8th generation Intel® Core processors, provided the most value in these workloads—with an average cost per FPS of just over $4.00 per month,” says Dmitriy Rytvinskiy, general director at ComBox.

Processor Cost per FPS Revealed

The ComBox engineering team began their cost-per-FPS experiment with several options for the main deep-learning processor. These include chips, graphics cards, and accelerator modules from multiple vendors.

They tested these platforms using two popular image classification convolutional neural networks (CNNs): U-Net and DarkNet-19. The ComBox evaluation used image input sizes of 768 x 512 and 576 x 384 pixels for the U-Net algorithm, and 256 x 256-pixel image data for DarkNet-19.

The two CNNs were run separately and on individual processing elements, even within the same device. In other words, devices that contain both a CPU and GPU or integrated graphics unit—like select Intel Atom® processors, Intel Core processors, or Intel NUC platforms based on either processor—were tested more than once. In all cases, the neural networks were optimized with frameworks like the Intel® OpenVINO Toolkit or TensorFlow/TensorRT engines.

To calculate the value of each contender, ComBox testers simply divided the cost of the product by its FPS performance per workload and selected the device that provided the most value across all of the workloads. And as noted, the NUC8i5BEK provided the best cost/performance value.


Beating the Benchmark with a Video Encode/Decode Cheat Code

The NUC8i5BEK is built around the Intel Core i5-8259U. But in the ComBox battery of inferencing benchmark tests, it was not the device’s CPUs alone that provided the most bang for the buck. It was the integrated Intel Iris Plus 655 graphics unit. But that’s not the only trick up the NUC’s sleeve.

While the Core i5 CPU cores played no part in the algorithm execution itself, they did handle the image encoding and decoding, allowing the graphics unit to remain dedicated to the inferencing workloads. This isn’t to say that other SoCs and cards in the benchmark didn’t take advantage of a similar architecture. Some did. But the combination of Intel® Iris® Plus 655 graphics, the multi-threaded quad-core CPU, and OpenVINO outperformed them all at a lower price point.

“Based on the benchmark results, we designed the NUC8i5BEK into our server,” says Rytvinskiy. The platform can simultaneously execute neural networks against up to 80 Full HD IP camera video streams.

Packaged for AI and Vision Processing Power

Off the shelf, the NUC is packaged as a complete system with an enclosure, I/O, and other trimmings that make it ready for uses ranging from prototyping to light commercial deployment. But clearly, the form factor and packaging are not suitable for integration into a rack server, so the ComBox team integrated the NUC motherboards, eight at a time, into a 1U server rack (Figure 1).

Figure 1. The ComBox 8xNUC Rev 2 server is built around eight Intel® NUC8i5BEKs stripped down to their motherboards. (Source: ComBox)

The eight hot-swappable NUC modules are accompanied by two hot-swappable power supply units (PSUs) and a front-panel display that provides control over the modules. Collectively, the eight modules provide 32 cores and 64 threads of processing power, with a combined total of 3,072 integrated GPU cores and 1GB of EDRAM.

“According to our own modeling, the 8xNUC-based server can outperform other solutions by allocating the right amount of resources to each workload,” Rytvinskiy says. “And because of the NUC’s low cost per FPS, the server is just half the cost of similar platforms based on alternative AI processing technologies.”

Designed for AI and DL

After going through the process of testing AI compute alternatives, ComBox has produced a power-efficient, performant inferencing solution for many types of workloads. The company has published a paper illustrating how NUCs can be used to create high-efficiency, low-cost AI solutions, including one project to build a computer vision-aided smoke detector based on the same NUC8i5BEK.

While perhaps unexpected, the familiar pairing of Iris Plus 655 graphics and Core CPUs brings more value to CV inferencing than even newer AI processing solutions. So why spend more for less?

Digital Transformation Goes Edge-to-Cloud

Hybrid cloud environments are the new normal. Currently, 82 percent of organizations have hybrid strategies combining public and private clouds with on-premises infrastructure. But now those environments increasingly include edge computing as well.

Edge computing places hardware as close to data sources as possible. Why? “The main motivations are data latency and data sovereignty,” explains Derek Pounds, cloud consultant at solutions provider World Wide Technology (WWT).

In an edge-to-cloud context, this means bringing the capabilities of the public cloud into local environments. Specifically, companies can leverage platforms like Microsoft Azure Stack, AWS Outposts, and Google Anthos to deploy these cloud providers’ services on-site.

While this arrangement can provide powerful benefits, putting together an edge-to-cloud architecture can be a monumental undertaking. But with the right mix of business case assessments, careful testing, and the latest in pre-validated solutions, companies can smooth the path to success.

Why the Edge Matters

The growth of enterprise Internet of Things (IoT) applications is a primary driver behind the emergence of edge-to-cloud architectures. These IoT applications often require real-time or near-real-time processing.

Examples can be seen in telemedicine, smart cities, oil and gas, and other critical functions that demand reliable connections and the ability to respond immediately to a problem. If data must travel hundreds of miles to a cloud, latency is inevitable; hence the need to keep data close to sources and users.

And because it keeps data at local sites, the edge also enables compliance with data sovereignty requirements. “Where data can’t leave a particular site, data sovereignty becomes very important, especially with EU-based data that can’t move outside of country locations,” says Pounds.

Sovereignty also matters at a more granular level: Companies want to protect their data. Thus, the most sensitive data—such as proprietary designs—is typically kept on-premises with access granted to the narrowest set of employees. Other types of data can stay on-premises or move to the cloud.

Taking the time to understand the business is key to deciding “what portions of the business we can move to the cloud or how we can move them to the cloud in stages or phases” to ease the transition, Pounds explains.

Assessments, Please!

For many companies, this mapping exercise is a major hurdle: It involves tasks and technologies they have not encountered before. To accelerate the process, companies can partner with service providers such as WWT, which has deep experience deploying edge-to-cloud architectures.


“We’re always having these discussions with customers,” says Pounds. “A portion of it involves a professional services assessment looking at: What are the roles of individuals? What is the role of data? What is the classification of both of those?”

In medicine, for instance, healthcare providers often need to monitor patients from a distance. “Think of diabetics, blood pressure, oxygen,” explains Pounds. “All those can be measured and monitored through an edge solution and then aggregated out against multiple users.”

In this scenario, the main objective is to keep analytics for individual patients at the edge—which minimizes both the security risks and network traffic—while still enabling a global view of population health trends.

Another scenario involves oil pipelines. By installing intelligent valves, oil companies can continuously monitor and adjust flow rates for better efficiency. Here, the main issue is speed. “The latency is incredibly important to be able to get that data right in real time,” says Pounds, explaining that sending data to the cloud would simply take too long.

Matching Platform to Use

But which cloud platform should you choose? Deciding between Anthos, Azure Stack, and Outposts to support an implementation comes down to the nature of the existing IT environment.

In multi-cloud environments, for instance, an Anthos solution may be a better fit because it uses Kubernetes to run multiple infrastructures simultaneously. In other cases, Pounds says, it makes sense to combine pieces of the platforms in a best-of-breed approach.

Above all, choosing the right approach—and achieving successful deployment—depends on a strong working relationship with the cloud service providers. “One of the first and foremost things that we offer is the relationship we have with the vendors,” he says.

These relationships have enabled WWT to leverage Intel® Select Solutions in edge-to-cloud implementations to simplify and accelerate deployment. Intel Select Solutions combine compute, storage, networking, and software in preconfigured, validated packages.

Intel Select Solutions, says Pounds, can be optimized for edge-to-cloud implementations to deliver localized performance and reduce latency while enabling globalized data analytics to support long-term strategic goals.

And, of course, it helps to get an unbiased opinion—and here WWT’s role as an intermediary is uniquely valuable. “What we bring to the table is neutrality,” notes Pounds. “I can show you the pros and cons of a solution, and match that against the pros and the cons of the other solutions that are out there.”

Test Before You Deploy

But even with the best-laid plans, the complexity of edge-to-cloud architectures leaves numerous opportunities for things to go wrong. That’s why WWT created its Advanced Technology Center (ATC).

By testing their ideas in the ATC, customers have the opportunity not only to explore their options but also to validate functionality before an implementation goes live.

“In the ATC, we have Azure Stack, Anthos, and Outposts, we can work with the customers, set up an environment that they can test in, show them how all the tools work together, how all of the applications can work together, how that can be bridged up to the cloud,” says Pounds. “That’s just not something others can do.”

SOCs Go Virtual With Distributed AI Video Analytics

When the pandemic sent droves of employees home, many organizations faced a huge problem. How can facilities and critical infrastructure be secured, when there’s no one there to secure them?

Almost overnight, the security operations centers (SOCs) that house security technology, and where employees monitor video, became a risk to people’s health. Many companies that rely on them had to work on decentralizing their safety and security technology fast, to keep up with their suddenly decentralized workforce.

Remarkably, forward-looking companies shifted to virtualized security infrastructure in 30 days—an incredible feat, given all the new challenges it presents. But companies with traditional SOC designs faced a big challenge. How can security personnel monitor video from home, when all of the video is stored and processed at the SOC?

AJ Frazer, Vice President of Business Development at Agent Video Intelligence (Agent Vi), an AI video analytics solutions provider, says this is the main reason some still resist virtual security operations: “They need their operators to work remotely but have no easy way to give them access to the video in the SOC.”

The company addresses this challenge with its innoVi AI-Powered Video Analytics Software, a distributed analytics engine that uses the same technology the company deploys for citywide security.

“We run a very sophisticated algorithm on a low-powered edge gateway on the camera network, connected to a centralized core in the cloud,” says Frazer. “We need only a kilobyte or two of data going back to a cloud, because all of the heavy lifting is done out at the edge. Operators who have login credentials can monitor video and respond to events from anywhere, be it at home or a corporate office.”


The Secret Sauce: Edge-to-Cloud AI

Humans are excellent at watching video and detecting anomalies—for a minute or two. After that, boredom sets in and we struggle to pay attention. On the other hand, AI can watch endless reams of video without getting tired. So why should security personnel spend time trying to parse meaningful data from video footage when software can do it for them?

Agent Vi’s hybrid edge-to-cloud architecture manages video data in the most efficient way possible. Edge software runs at the customer’s local camera network and does most of the processing there, after which events that need review are sent to the cloud. From there on, the SOC or its operators can be located practically anywhere (Figure 1).

Figure 1. A five-layer edge-to-cloud architecture is the key to decentralized security. (Source: Agent Vi)

innoVi Edge—an Agent Vi preloaded appliance or software running on existing hardware—collects camera data, stores images locally, and extracts metadata. This information flows via the internet or a local network to innoVi Core Middleware (cloud-based SaaS or on-premises private cloud) for central management, advanced analysis, alarm generation, health monitoring, and metadata storage. And finally, operators use the browser-based innoVi Portal to monitor alerts, investigate video, and generate reports.
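The data flow just described, where heavy processing stays at the edge and only compact event metadata travels upstream, can be sketched in a few lines of Python. This is an illustrative sketch only, not innoVi code; the event fields and confidence threshold are invented for demonstration.

```python
import json

# Illustrative sketch of the edge-to-cloud split: heavy lifting at the
# edge, a kilobyte or two of metadata upstream. Not innoVi code; the
# event fields and threshold are invented for demonstration.

def extract_metadata(frame_id, detections):
    """Edge step: reduce a raw frame's detections to a few bytes."""
    return {
        "frame": frame_id,
        "objects": [d["label"] for d in detections],
        "max_conf": max((d["conf"] for d in detections), default=0.0),
    }

def events_for_cloud(frames, threshold=0.8):
    """Forward only events worth reviewing; raw video stays local."""
    for frame_id, detections in frames:
        meta = extract_metadata(frame_id, detections)
        if meta["max_conf"] >= threshold:
            yield json.dumps(meta)  # small payload, not full video

frames = [
    (1, [{"label": "car", "conf": 0.55}]),
    (2, [{"label": "person", "conf": 0.91}]),
]
payloads = list(events_for_cloud(frames))
# Only frame 2 crosses the threshold, so a single small payload goes upstream.
```

The same filtering idea is why operators anywhere can work from a thin, browser-based portal: the cloud only ever sees the events worth reviewing.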

Thanks to the Intel® OpenVINO™ Toolkit, innoVi AI models and algorithms are flexible and scalable across Intel® processor platforms. The software can run on low-power embedded edge appliances or on high-end servers connected to hundreds of cameras, so a customer’s hardware environment is never a limiting factor.

Securing Critical Infrastructure

innoVi’s architecture is a boon for businesses dealing with a newly remote workforce. And Frazer says such a distributed video analytics engine is also ideally suited to providing security for roadway transit, utilities, and the energy sector. Disruptions to bridges or to the power supply, for example, could be disastrous for cities. But until now it has been challenging to use AI there, because these sectors’ assets are geographically dispersed and served by limited data networks.

No longer: because innoVi doesn’t need much bandwidth, network constraints aren’t an issue.

So whether it’s a utility spread across three states, roadways across California, or a company with employees all over the country—organizations can keep their people and their structures safe. When AI-powered video analytics are available in configurations suited to low- or high-data environments, effective video surveillance is accessible to all.

Seven Dirty Secrets of IoT

Amol Ajgaonkar, IoT projects


Did you know that up to 75% of IoT projects fail to meet their goals? Sounds scary, but once you know a few of the industry secrets, you can join the successful group. In a conversation with Amol Ajgaonkar, CTO for Intelligent Edge at Insight Enterprises, we dig into seven ways you can establish a solid path to success. Insight helps organizations—and the SIs that serve them—accelerate their digital transformation by unlocking the potential of the IoT.

Join us as we share expertise on topics including:

  • Why it’s essential to have a holistic vision, from the edge to the cloud
  • How having stakeholder buy-in before you start can lead to success
  • What a plan for managing and maintaining all these systems over time looks like
  • How to best make use of your existing infrastructure

Transcript

Amol Ajgaonkar: When you dive into solving any solution and say, “You know what? This is a great candidate for IoT or intelligent Edge.” The one thing that needs to be answered before you do anything, or even touch any device technology, is “Why am I doing this?”

Kenton Williston: That was Amol Ajgaonkar, CTO for Intelligent Edge at Insight Enterprises. And I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode of the IoT Chat, I talk to industry experts about the issues that matter most to systems integrators, consultants, and end users. Today I’m talking about the reasons IoT projects fail, and how you can avoid the biggest pitfalls. Did you know that up to 75% of IoT projects fail to meet their goals? Sounds scary, but, once you know a few of the industry secrets, you can join the successful group. In fact, I think one of the biggest secrets is just rethinking what success means.

So, Amol, let me welcome you to the show.

Amol Ajgaonkar: Thank you.

Kenton Williston: I’m really interested to hear more about what Insight does. And, of course, it’s going to be a challenge for our listeners, because I’m with the publication insight.tech, which is not at all the same thing as Insight, the company.

Amol Ajgaonkar: You’re right.

Kenton Williston: Why don’t you help clarify which thing Insight does, and what your role is there?

Amol Ajgaonkar: Absolutely. Insight Enterprises is a Fortune 500 company. In simple terms, essentially we do everything—right from procuring hardware, to imaging it, to installing it, to building the solutions that include mobile, desktop, web applications, AR, VR applications, to data and AI. Building AI models as well. And then managing those solutions—not just the hardware, but also the software for our customers. It’s a true end-to-end, service-oriented perspective.

Kenton Williston: Fantastic. What do you do as the CTO for the Intelligent Edge? What exactly does that mean?

Amol Ajgaonkar: That role is essentially helping customers understand what their requirements are and what challenges they have, and then coming up with the right approach to make sure that our customers are successful and that the solutions we build are successful and scalable as well. So my job is to come up with a strategy, a vision, and an understanding of what the market might need in the future, while also helping our customers be at the edge of innovation.

Kenton Williston: Well, I’m really interested to dig in more to that aspect of how you put a strategy together. But first, I want to talk a little bit more about this idea of the intelligent Edge.

Amol Ajgaonkar: Mm-hmm [affirmative].

Kenton Williston: This is certainly a huge focus area for IoT projects right now. When I think about the intelligent Edge, immediately what comes to mind for me are things like AI—which has become huge in just about every market segment. What does the idea of the intelligent Edge mean to you?

Amol Ajgaonkar: That’s one, right? AI is one use case for the intelligent Edge. But if you look at the data that is being generated right now, this is across industries. You’d consider manufacturing. You take into account retail, energy, healthcare—and you look at all the devices. You look at how people are interacting, you look at all the processes that are in place. All of those entities are generating data. Now, if you consider the amount of data that’s being generated right now, there is some intelligence in that data that can be taken out and made actionable. So, intelligent Edge for me is processing that data where it is generated, and then correlating that data with other data sets that are also being generated in that same area, right? Geographical area. And being able to provide actionable insights back to the users so that they can do their jobs.

Kenton Williston: Got it. Totally makes sense. And I think it’s pretty obvious why so many industries—you named just a handful of them just now—are so interested in IoT in general, and, in particular, in doing processing at the Edge to make better use of that data that’s available there. The value, I think, is really clear. But I have seen, despite that—despite how important these applications are—that a huge number of these projects fail. I’ve seen numbers as high as 75%, in some older studies. More recent studies I’ve seen—numbers upwards of 40% are considered to be unsuccessful. So, why do you think it’s the case that so many of these projects are ending up as failures?

Amol Ajgaonkar: That’s a very good question. I believe they fail because the definition of success hasn’t been defined. So, if you don’t define what success means for a certain use case, it is bound to fail. And that’s why, when you dive into solving any solution and say, “You know what? This is a great candidate for IoT or intelligent Edge.” The one thing that needs to be answered before you do anything, or even touch any device technology, is “Why am I doing this?” If the “why” is defined, then your solution is bound to be a little more successful. I’m not saying that if you’ve defined the “why” and you know what the ROI is going to be that every project will be successful, but at least you know that you’re going in the right direction.

Most of the time they fail because they have unrealistic expectations of technology. They haven’t defined the “why.” They haven’t defined the ROI for that use case. So, once they prove it, and they have a pilot running, and they look at all of the services—all of the tasks that need to be taken care of before it goes into production—they look at the cost of that, and they’ll look at the ROI and be like, “Maybe it’s not worth it. Maybe I shouldn’t have done this.” Right?

Kenton Williston: Got it. I’m going to start keeping a little list here of some of the dark secrets—the dirty secrets that don’t get discussed enough. I’m going to make a point here—the number one is: you’ve got to know why you’re doing it. And right along with that—what success actually looks like. So I want to ask you about another thing that I suspect is going to be on your list. We’ve talked already about what intelligent Edge means, but of course the whole concept of the Internet of Things is about connectivity. It’s right there in the internet.

Amol Ajgaonkar: Uh-huh [affirmative].

Kenton Williston: Part of the name. I think this is one of the things that makes it really interesting. I, myself, come from, originally, an embedded-engineering background, right? And I think back to kind of the old days, where you would develop a system that oftentimes would be meant to just be left alone to do whatever it was going to do. It was very self-contained. And that’s really not at all what IoT is about, right?

Amol Ajgaonkar: Mm-hmm [affirmative].

Kenton Williston: It’s about that connectivity. It’s about taking that intelligence and sharing it out more broadly. So I suspect that part of what is happening here—in terms of why things go wrong—is that people might be getting stuck in kind of that older thinking of, “I need something to happen in my manufacturing facility. So I’m going to deploy a device that’s going to do its thing. Problem solved.” But really, that’s a wrong way of thinking about things. And thinking about a point solution at the Edge is really not thinking through things far enough. Would you agree with that?

Amol Ajgaonkar: Absolutely. I mean, there is a value for point solutions, right? Nothing against point solutions or collecting data, but the real value of an IoT solution or intelligent Edge solution is to be able to look at that data holistically. And now, on top of that, like you said, the Internet of Things—it’s connecting all of those together, and being able to control those actions as well.

Kenton Williston: Mm-hmm [affirmative].

Amol Ajgaonkar: It’s literally in milliseconds or nanoseconds that something happens—the system looks at the data, says, “Oh, I know what happened. I’m going to predict this could happen, and I’m going to change so-and-so parameters on this device.” So either it stops working, it changes its speed, or does so many different things. I don’t want to use self-awareness, because then it sounds AI-ish, but essentially the if-then kind of possibilities also come into place.

I think correlation of those data sets, being able to put all of those together, giving a holistic view—not just from one location then—right? So, connecting back to the cloud and being able to collect data from multiple locations, and using the inferences and actions taken by humans, actions taken by machines—and then bringing it all back to the cloud, and then analyzing that data holistically across locations—providing a much richer actionable insight. And then pushing that expertise back into those locations, so that the local AI models are now running smarter, because they have additional data for training, and the model has improved accuracy, or takes into account variance in its independent and dependent variables.
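The edge-side if-then behavior Ajgaonkar describes, where a reading triggers an immediate parameter change on a device, can be sketched as a tiny rule engine. The sensor names, thresholds, and actions below are invented for illustration; a real deployment would map each rule to an actual device control.

```python
# Tiny if-then rule engine of the kind described above. Sensor names,
# thresholds, and actions are invented for illustration.

RULES = [
    # (predicate, action) pairs evaluated against each sensor reading
    (lambda r: r["vibration_g"] > 0.7, "reduce_speed"),
    (lambda r: r["temp_c"] > 90, "shut_down"),
]

def react(reading):
    """Return the actions triggered by one sensor reading."""
    return [action for predicate, action in RULES if predicate(reading)]

actions = react({"vibration_g": 0.82, "temp_c": 65})
# → ["reduce_speed"]: high vibration trips the first rule only.
```

Keeping the rules local is what gets the millisecond-scale response times mentioned above; the cloud's role is to refine the rules and models over time, not to sit in the reaction path.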

Kenton Williston: Makes sense. I’ll say number two on my list is just think beyond the Edge.

Amol Ajgaonkar: Mm-hmm [affirmative].

Kenton Williston: The next thing that comes to mind when you’re hearing all of the complexity you’re describing is, this sounds like a big chunk of work to bite off. And that gets me back to the question I wanted to ask about what you mean by having a strategic approach. So can you talk to me about that? What does it mean really to go beyond thinking about the solutions you need to deploy, to having an end-to-end strategy?

Amol Ajgaonkar: Absolutely. The strategy for any deployment of an Edge solution has multiple facets to it, right? Like I said, first to find the “why”—understand why you’re doing it, what is the ROI, what do I expect to happen? Once you have that in place and documented, understand which teams need to be involved as well. Not just the teams that are going to work on the pilot, but also the teams that are going to be affected by that solution in the future. Because you need to have the right buy-in from those groups as well. Because otherwise they will not adopt it. If they see this solution as something that’s going to add more work for them, they are going to resist. Humans resist change. So if you bring them on board earlier on and understand their pain points, understand how a certain solution is going to affect their day-to-day job and if it’s going to make them successful—they are going to provide more data to you, right?

Kenton Williston: Mm-hmm [affirmative].

Amol Ajgaonkar: So, getting those people on board is a good way to start. Then, looking at the physical landscape, “How many devices do I need? What type of integrations do I need? What type of data sets do I need? Who’s going to give me the data? Where can I get the data from? Is it a machine? Is it a human input? Is it cameras? Is it existing infrastructure that’s already developed? Is it the environment?”

All of those data points have to be listed out, right? And then, “Okay, these are all the data points that I need to make these types of decisions. If I have to get this type of data, where am I going to get that data from?” So you start listing it out, right? Try to get into the detailed planning of exactly where you get the data from—who is involved, who is going to be affected by this change—and put that in your planning document.

Now, once you have that in place and you start building the solution out, you need to select or carve out a certain part of the solution—the most challenging part—and prove that out. Once you have an acceptable success rate there, you know, “Okay, if I get this type of data, I can make these decisions, visualize these decisions this way, and these are the people who will be engaging with the system and using that data.” So you put that thing holistically in place as the pilot. Once the pilot is successful, that is where—like we mentioned—we have to think beyond the Edge, right?

So if you want to take it to production, you have to think about scale. You have to think about security. I mean, security—you have to think about right from when you’re planning, and exactly how you’re going to secure the devices to how are you going to secure the software stack, the OS stack, as well as physical security. But when you get into production, it should have even more focus on security—as exactly, “If I were to ship this out to 10 locations, who is the person that’s going to install that? Where will it get installed? What type of security concerns will I have at each location if somebody were to just take the device and run away? If somebody were to try and plug a USB device into it and try to get access to it, what then?” All of those questions need to be answered, right?

So when you’re trying to get to production, you have to then plan the security aspects of it. You have to plan, “Where will I procure those devices from? Who will image those devices?” Because you’d need each device that’s coming out and being deployed to be the same. So you get consistency, then, at that point. And this is just all pre-production, right? You haven’t even reached production then. Once you have the procurement, once you have the installation, once you have the imaging, you have your security strategy in place—then comes deployment.

The first time you’re going to deploy, you try and deploy to one or two locations. As part of deployment you have to plan for who is going to deploy those—what kind of effort is required in deploying a solution like this? Is it cameras that you have to go and install? If so, do you have the wiring in place? Do you have the electrical in place? The networking in place? Is your networking infrastructure capable of handling the additional load? Will that affect any other existing systems that are already in place, right?

All of these things have to be planned before you start deploying to production. And some of these things get missed. Going back to why projects fail—if some of these things are missed in planning, then when they actually deploy, they realize, “Oh, for this, I need to upgrade my network.” Or, “I don’t know who’s going to install the cameras, or who’s going to integrate into the PLCs.” And so on and so forth. So all of that needs to be planned.

Let’s say we have all of that planned, and we’ve deployed now to one or two locations. Then comes, who’s going to manage these, right? Once it goes out of your facility and the solution is in production and it’s at the location—well, it’s on its own.

So now you have to think about, “How am I going to manage these?” The devices, as well as the workloads, the software that’s running on it. And when I say software, it includes the OS. “Who’s going to patch it? How are we going to patch it? What is our strategy for patching?” And if something were to change and I need to update my software stack—let’s say I’ve got containers running and I need to update those containers, how am I going to do that at scale, right? I don’t want anybody plugging any USB drives into my machine to update anything. So what does that mean? And do I have the right teams in place to support a solution like that? Or should I rely on partners to come in and help me support the manageability of those devices—the end points, as well as the software stack, right?

So, management and support—or monitoring after the fact—is also super important for a successful solution. All in all, it does seem complicated. It does seem like, “Oh my God, there’s so much to do to make this successful.” But if you rely on partners, and if you have a good plan in place, it’s actually not that hard. It is just like any other project, where if you plan and do it right and take into consideration all of these aspects, the solution will definitely succeed.
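A common building block for the monitoring Ajgaonkar describes is a heartbeat check: each deployed device reports in periodically, and anything that goes quiet gets flagged before the customer notices. A minimal sketch, assuming each device's last report time is tracked; the device names and timeout value are invented for illustration.

```python
# Flag fleet devices whose last heartbeat is older than a timeout window.
# Device names and the timeout value are invented for illustration.

def stale_devices(last_seen, now, timeout_s=300.0):
    """Return, sorted, the devices that missed their heartbeat window."""
    return sorted(dev for dev, ts in last_seen.items() if now - ts > timeout_s)

last_seen = {"kiosk-01": 1000.0, "kiosk-02": 400.0, "kiosk-03": 980.0}
down = stale_devices(last_seen, now=1100.0)
# kiosk-02 last reported 700 seconds ago, well past the 300-second window.
```

A check like this, run centrally, is what lets a support team call the customer about an outage before the customer calls them, as the orchard anecdote later in this conversation illustrates.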

Kenton Williston: Yeah. So let me recap here. We’ve covered a lot of ground. I’ve got literally a bunch of sticky notes here. I’m running fast and furiously with all these great ideas. It sounds like some of the key points where people get tripped up are—first of all, not really having a clear vision for why they’re doing what they’re doing. Number two, thinking too much about a solution, and not thinking holistically, end-to-end about how this is going to work as a system. Number three, not having a well-thought-out strategy. And the point I really liked that you raised there was about having the people, right? Like, this is not just a pure technical question, but it’s an organizational and personal kind of effort, where you need to have everybody on board and aligned with your objectives.

Number four, you need to—once you’ve proven out that the basic idea is viable—think very carefully about how you’re going to scale it, secure it, and deploy it. And again, a lot of that has to do with the “Who’s doing things?” as much as it does with the “What you’re going to do?” And then number five—there are all the questions around, “Okay, you’ve got it. It’s working. It’s successful. Now, have you thought through how you’re going to maintain and manage these devices?” And even things like the life cycle management, right? Like, eventually, these devices will need to be retired, and who and how is that going to happen? It sounds like we’ve got at least six key areas already where people can easily overlook some of the big challenges.

Amol Ajgaonkar: Absolutely. And I think I really focus on the people aspect of the solution. Because if the people who are affected by the solution don’t really think that it’s going to add value to them, they’re not going to use it. So you might actually even go to production and make sure that all of these other things are taken care of. But you’ve built the solution for the people, right? And if they don’t use it, it’s of no use. It’s a waste of money at that point. So make sure those people are on board, and that they understand how this is going to make their life easier or provide more actionable data or information so that they can do their job in a much better way. Once you have them on board and help them understand that part, they will adopt it. And they will ask for it, and they will give you feedback on what needs to change and what more features they would like to see in that system. And that is how that system will transform, right?

So it’s not a static solution that you build once and you’ll deploy and you’ll forget about. It’s actually a transformational solution. If you look at it that way, it has a lasting impact on the organization as well, because people get excited about the value that it’s adding, and they want to make it better. And so, that is how the digital transformation of any organization would come through—is because the people are now passionate about changing and adding new features to a solution, or adding new revenue streams. And how do I do that? “Oh, maybe we could change this solution, or we can change that solution.” Because they feel a part of that solution itself.

Kenton Williston: Yeah, absolutely. And I would go beyond that, and even say it really requires a different way of thinking about what an IoT project even looks like, right? It’s not the sort of thing where you ever really get to a finished date. Because what you’re really doing is you’re creating a platform for ongoing improvements, ongoing innovations—both from the angle you were just describing, of people contributing new ideas, and even just the devices themselves becoming more intelligent. Using, for example, as you talked about earlier in our conversation, the machine learning models to just improve the basic algorithmic performance of what you’re able to accomplish over time and have that continuous feedback loop.

Amol Ajgaonkar: Absolutely. I agree.

Kenton Williston: So, two big questions that come immediately to mind for me out of this. First of all, this doesn’t necessarily have to be a big, scary list of things to contend with. But it is a very different way of thinking about IoT projects—certainly, like I said, compared to earlier in my career about how people thought about embedded designs that were kind of very isolated. There’s a very clear “You’re done with it” at a time in the calendar. This is a very different way of approaching projects. I’m easily imagining our listener saying, “Oh, I’m really excited about everything I’m hearing. I’m going to go take this to my boss and explain how we should do things different.” And the boss just kind of like passing out in their chair because they’re so overwhelmed with everything they’ve got to do now.

So, number one, how can our listeners communicate to their colleagues “Hey, this is why we want to do things a different way”? And, number two, how can Insight Enterprises help folks execute along these lines?

Amol Ajgaonkar: So, to think about a solution, right? How do you communicate the approach? The approach has to be—first, be okay with ambiguity, because nobody has all the answers. But as long as you define why you’re doing this, everything else will fall into place in due time. So, be okay with ambiguity. It’s fine, because the problems that businesses face never come with a manual on how to solve them, right? It’s always a new or newer problem. There’s a disruption from a new startup or a newer idea, and now you have to compete with that, right? That is why a business is exciting, is all of these challenges that come up.

And so, defining what we need to do, and then going step by step—take smaller approaches, carve out a smaller piece. But even for that smaller piece of the puzzle, think holistically.

Kenton Williston: Wow.

Amol Ajgaonkar: Keep your eye on the goal. It’s a journey, right? There’s not an end state.

Kenton Williston: Mm-hmm [affirmative].

Amol Ajgaonkar: But define your milestones in that journey. I want to get to this milestone and move on from that. I just—I do that even before, when I’m working out. I was like, “Ugh, I’m getting bored.” Like, “Nope, let’s just try four more days and then we’ll set a new milestone, and then the next milestone, and so on and so forth.” So have a plan to find your milestones, and then go tackle that milestone by milestone. That is what we do as well. With Insight, we have our approach on how we go and talk to our customers, which is we try and ask them, “Why are you doing this?”

Literally, there have been cases where, when we haven’t had a clear understanding of why a customer is really doing something, we’ve actually told them, “Maybe we can help you discover why you’re doing this. What is the ROI?” And sometimes by the end of it we figured out, “Well, the ROI is not really that great.” And the customer’s like, “Oh, this is great. Otherwise, I would have spent so much money going through with this, and I wouldn’t have any value.” Right? So our approach is going through envisioning sessions and understanding exactly why you are doing this—defining that, documenting it—and then communicating that message across all stakeholders so that everybody understands why we are doing this. This is part of the buy-in process.

So we get the buy-in as well. We can help in different aspects of that journey. It doesn’t have to be that Insight will do everything or nothing. Nope. We will come in and help with whatever challenges you have. If you want to define a strategy or a design for the solution, we can help you with that. And you say, “Hey, we just need that. We’ve got everything else covered.”

Great. We are happy to help in any which way possible to make our customer successful. We’ve done end-to-end as well, where we’ve gone and we’ve defined the vision for them. We’ve executed the vision for them. And when we say execute—right from integration into their existing systems, to procurement of hardware, to imaging, to installation, to monitoring of those end points, to building data and AI models that deploy at the Edge, to building mobile applications, web applications, desktop applications, applications that run on AR, VR headsets as well.

So, depending on what the customer needs, Insight can come in and help and deliver the outcome that they’re looking for. In our approach, envisioning is step one. We need to be convinced that you really are going to benefit from this solution as well. And then the customer has to be convinced that, “Yes, this is the right solution for us.” And after that, you plan for all of the things that we talked about, right? Plan for security, plan for people being affected positively by that, getting them onboard, interviewing those teammates as well, and then the technical architecture, and then planning the technical implementation or development as well. And then once it’s done, we can actually go ahead and look at support and how we can help with the level-one, level-two, level-three support.

If you need device recycling, you come in and swap the devices out. If something goes wrong—or even monitor these Edge end points and be like, “Okay, we can see that something went down. This device went down. Hey, Mr. Customer, your device went down. We already know that. We are working on fixing it.” Right? All of those kinds of support services, we can help our customers with as well.

Kenton Williston: That’s really cool. I will say though, I love this big picture of coming in with the envisioning at the start, carrying all the way through the device maintenance at the end. Awesome. Totally love it. But there’s kind of a hard reality here, which is, at the end of the day, folks are going to have to pick some point solutions, even though that’s very much, as we’ve discussed, not really the entirety of what makes for a successful IoT deployment. That is a critical step. I’m betting that you’ve seen plenty of cases where people really got hung up by choosing the wrong solutions—whether it was like the wrong hardware, the wrong cloud platform, the wrong OS—whatever it was. So, just from that pragmatic point of view, how does Insight approach that question of figuring out what are those actual solutions that go into this mix?

Amol Ajgaonkar: Right. It all comes down to two things in my mind. One is cost, right? Nobody wants to spend money building a solution from scratch. That’s why the point solutions or off-the-shelf solutions make sense, because I can just go and buy and test, and I don’t have to spend so much time and money in building a solution. Makes complete sense. When it’s a brownfield situation like that, where they might already have certain solutions deployed, we also work with those solutions to integrate, right? At the end of the day, what do we need? We need the data that’s coming out of that system. And hopefully, ideally, if we can programmatically manage the hardware or that solution that is deployed, then that’s the golden state. But at least the minimum—if we can integrate and get the data out, we can provide some more value back to the customer. So we look at integrating with the solutions that they might have already deployed.

Sometimes it actually helps as well, because if it’s a point solution and it’s a really stable, robust solution that has open APIs and you can integrate and you can manage those devices, that’s actually a fantastic place to be as well. Because what that does is allows you to separate the responsibilities of the solution. One is data collection, right? And if there are sensors with their own gateways and they have taken care of security, they have taken care of ingestion and reliability, and they are doing that with higher accuracy. We can rely on that device and that solution and pull the data in, right?

So, we do test out devices and sensors from our partners to see how they perform. Even from a battery-life point of view, what would it take to really—we push it, essentially. We say, “Okay, if I were to ask the device to give me data points every half a second, how long will it last?” And we work with the partner as well, like, “Hey, I’m going to do this. You tell me how long will it last. Would I start to see gaps in data?” And stuff like that. We in fact do work with our partners as well to bring in the right sensors, to bring in the right gateways, and then deploy those as part of our solution. We don’t always have to build everything from scratch. We rely on our partners a lot, and bring their solutions in to provide that big-picture, holistic solution back to our customer.

Kenton Williston: Yeah. I think one critical thing that I want to just dive a little bit deeper on there was a point you made about using open, scalable kinds of solutions.

Amol Ajgaonkar: Mm-hmm [affirmative].

Kenton Williston: In the interest of full disclosure, the insight.tech program is Intel owned and operated, so this is a little bit of a self-serving question. But I would imagine that the fact that you’ve got all these partners who have a bunch of different Intel-based hardware—which is very really understandable by IT departments—you’re not doing something particularly novel with the hardware. And it’s very scalable, in the sense that you’ve got things that are made for those extended-battery-life sort of scenarios you were talking about, all the way up through things that are huge horsepower, giant iron machine–sort of applications. The fact that you’ve got that scale is pretty helpful.

Amol Ajgaonkar: Right. And so, from an Intel point of view—we work with Intel a lot. And not just on the hardware side, but also on the software side, right?

Kenton Williston: Mm-hmm [affirmative].

Amol Ajgaonkar: Using the frameworks that Intel already has, like OpenVINO or Open AMT, and looking at how they’re designed—it really helps leverage whatever infrastructure the customer already has. Which is great, because cost is a big factor in building the solutions, right? If I can go back to the customer and the customer says, “You know what? I’ve got these Intel-based servers, or these smaller devices that I already have in my facility. Can you reuse those?” And if the answer is “yes,” it’s amazing, because I’ve just saved my customer a ton of money. They don’t have to spend money buying new hardware at that point.

So, using OpenVINO, I can run AI models and leverage CPUs and integrated GPUs, or even add just a card to their server and say, “Okay, let’s use this FPGA.” Right? But you can start with the CPU. You don’t have to spend money at the beginning to prove out your concept. And that’s why I like the open systems, where I can integrate and leverage these frameworks, which adds value back to my customer. Same goes with Open AMT in being able to manage these devices. Doing out-of-band management for the devices is so critical. And part of that is—monitoring comes into play as well, right? If I want to monitor these devices that are deployed out in the field, I need to be able to do that.

I’ll give you an interesting example here. Five years or so ago, we deployed a solution out in the field. When I say a field, it’s an orchard. This was the very beginning of Edge processing and stuff like that. So, we deployed that solution. I flew back. And after, I think, seven days, somebody turned that device off. The stakeholder called me. He was like, “Hey, this is not working.” And I looked online, and I was like, “The device is not online. Can someone go in? I think it’s turned off. Can someone go in and turn it on?” And they’re like, “Well, it’s in the middle of nowhere. There’s nobody there. Can you go and turn it on?” So I had to fly back out just so that I could press a button, and then fly home again, right?

If I had out-of-band capabilities on a smaller device like that, with vPro on it, I could have used Open AMT. I could have just been like, “Oh yeah, somebody turned it off. Hold on.” Click a button. And now I’ve turned it on remotely, sitting a thousand miles away from that location. So having those integration points, having the frameworks in place, I think really benefits not just integrators like us, but it really benefits the customers.

Kenton Williston: Perfect. Let me see if I can sum up. I think I’ve got seven good, dirty little secrets here. Let’s see if I’ve caught them. First of all, have a clear vision for the what and why. Second of all, make sure you’re thinking beyond the point solution, beyond the Edge, and think holistically about what the whole system needs to do. Third, have a clear strategy with milestones, and most importantly, get all the stakeholders to buy in before you start. Fourth, once you’ve proven out the basics of your concept, really be sure to think through how you’re going to scale it up and secure it and deploy it. And again, a lot of that comes down to who is doing this. Fifth, make sure you’ve got a plan in place for managing and maintaining all these systems over time—which, once again, like we were just talking about with your example in the airplane, a lot of this comes down to who is actually going to do this.

Sixth, keep in mind that this whole thing is a journey. You don’t really have an end goal and that’s fine, because really this is more of an exercise of opening up opportunities for ongoing improvement rather than just solving a small problem. It’s really about opening up the possibilities. And then, finally, seventh, be sure to incorporate in your thinking across all of these stages what kinds of existing infrastructure you’ve got, how to best make use of that, and how you’re going to integrate into that infrastructure to be as efficient as possible and satisfy all the various stakeholders that we’ve been talking about. Sound like I’ve got it?

Amol Ajgaonkar: Absolutely. Just to reiterate—I mean, security is a big component. And so, always think about security across all the efforts, whether it’s the hardware or the software stack. And look for frameworks that are already established and tested rather than trying to build security frameworks from scratch. It is not an easy undertaking. You’ll spend millions of dollars and still not be able to get to the level of stability that some of these frameworks already have, because they have spent millions of dollars and years thinking about security and making it secure.

Kenton Williston: Fabulous. Any other key takeaways that you would like to leave for our listeners?

Amol Ajgaonkar: Just one key takeaway would be that it seems complicated, and it might feel like it’s a lot of effort, but truly, with the right partners in place, it makes that solution easy to build, deploy, and see the value. Maybe it’s just that I’m passionate about the Edge and solutions at the Edge, but I feel there is a huge value for our customers in building solutions at the Edge, and then managing these solutions or these workloads through the cloud for scale. So, definitely, take a look at that. It’s not all hype. There is some real value in the solutions. It’s just a matter of realizing where that value is.

Kenton Williston: Well with that I would just like to say, thank you so much for joining us. Really appreciate your insights.

Amol Ajgaonkar: No, thank you so much for having me. This was a wonderful conversation. Really appreciate it.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from Insight, follow them on LinkedIn at linkedin.com/company/insight, and on Facebook at Insight Enterprises, Inc. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Safety and Security Trends: How SIs Succeed

“We aspire to be a beacon in the industry, leading innovation in a responsible and sustainable direction. One of the things we look at is how we can transform video technology in a way that it’s used in the security industry, but also in other applications.”

Such is the heartfelt vision of Thomas Jensen, CEO of Milestone Systems—a global leader in open-platform video management software. This transformation goes beyond products to creating real value for customers and for society.

Today’s video technology is capable of so much more than creating evidence to catch bad guys.

One example is citywide traffic infrastructure, which can go further than issuing red-light and speeding citations to monitoring rush-hour density on city streets. As roads grow more congested during more hours of the day, authorities can use video to redirect traffic through less crowded roadways.

“We’re looking at the ways we can transform how our technology is being used,” says Jensen. “How can we reduce pollution? How can we reduce time spent on roads during rush hour? We want to contribute to the well-being of citizens by doing something meaningful for society and the environment.”

Ecosystem for Video Solutions

Because no single vendor can deliver on this promise, Milestone embraces and promotes a partner ecosystem through open platforms, matchmaking, and enablement. These partnerships span the entire value chain, from technology integration to sales, services, and support.

It starts with the company’s Milestone XProtect video management software (VMS). The open platform enables camera manufacturers, application providers, and software developers to create integrations and extensions to XProtect, which technology vendors and systems integrators can then build solutions around (Video 1).

Video 1. Open platforms and partnerships provide flexible, expandable video solutions. (Source: Milestone)

The second element is the Milestone Marketplace, an ecosystem of software, hardware, and service providers. Technology partners can find one another to integrate solutions, and customers can find the solutions, applications, and market expertise they need.

“We work with partners across the ecosystem… anywhere from IoT and sensor providers to big data analytics companies,” says Jensen. “Together, they can process all the data coming from the cameras to our software into actionable knowledge that customers can use to drive business outcomes and value.”

For example, companies that have specific use cases in traffic management, traffic monitoring, or emergency response can connect to build solutions. The Milestone ecosystem includes more than 600 product and service offerings using popular technologies such as analytics, access control systems, management and operator software, and IT infrastructure.

The third leg of the stool is the Milestone Channel Partner Program, which supports the technology providers and integrators that are bringing these integrated solutions to their customers.

Jensen sees the combination of these three elements as the path forward to achieving the company vision to drive video innovations in a responsible direction—for both safe and sustainable communities—and the best possible solutions and business outcomes.

“Our view of video technology and the accelerated innovation we see has to be built around responsible technology and a universal approach.” —@tjensen1973, @milestonesys via @insightdottech

Systems Integrators Extend Their Reach

Expanding video offerings beyond traditional safety and security opens new business opportunities for systems integrators.

“As technology advances, you see more sophisticated systems, sensors, IoT devices, and so forth, expanding video use cases across verticals,” says Jensen. “Video specialists will find these opportunities by selling business outcomes versus products, and by looking to new applications to gain more traction in their key verticals.”

SIs understand the specific needs of their particular industry. Milestone provides the platform for technology partners and integrators that have the right competencies to deliver the solutions for customers.

The healthcare segment is a good example of how this is happening. Adding new tech like AI-enabled software can provide real-time fall detection—preventing or minimizing injuries to the elderly or other at-risk patients. Expanding existing video infrastructure provides increased patient safety, measurable benefits to hospitals and clinics, plus new opportunities for healthcare SIs.

Shared Outlook for the Future

Jensen describes Milestone’s partnership with Intel as a match made in heaven. The company’s vision for the future is well aligned with Intel’s RISE strategy—to create a more responsible, inclusive, and sustainable world—enabled through technology. And the underlying technology ensures the system performance and reliability that video solutions need.

“It’s important to recognize the partnership we have with Intel, but also with all of our integration partners in the industry,” says Jensen. “And I want to stress that our view of video technology and the accelerated innovation we see has to be built around responsible technology and a universal approach. This is how we run Milestone.”

HVAC SIs: Leverage Stimulus for K-12 School Upgrades

Many U.S. public schools have aging HVAC systems that waste energy. Inefficient systems not only cost money but can also create less-than-ideal environments for occupant comfort and performance. According to a U.S. Government Accountability Office report, 36,000 public schools need to update or replace their faulty HVAC systems.

Because school funding is primarily directed to educational programs, repairs and maintenance often get deferred. Many school districts have also missed out on upgrades to more efficient digital HVAC systems.

But school districts that couldn’t afford to make improvements now have a new window of opportunity. In March 2021, the U.S. government allocated $122 billion to schools as part of a COVID relief program. These federal stimulus dollars are available for upgrades that increase energy efficiency and improve air quality—a serious concern in the wake of COVID.

Improving Efficiency with AI Controls

Though most schools have some light and energy sensors, they exist in a vacuum, unconnected to equipment that is older or made by different manufacturers. The data they collect is fragmented, and building managers don’t have access to ongoing analytics that reveal important trends.

But by connecting HVAC and lighting components to a unified, cloud-based IoT platform, schools can see energy use in real time, set schedules and alerts, and spot and resolve potential problems before they happen.

“A smart-building system helps schools achieve energy efficiency in many ways. Even something as simple as setting schedules so that heating and cooling don’t run at the same time can make a big difference,” says Tim Vogel, director of IoT for KMC Controls, a provider of building automation and control solutions.
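A check like the one Vogel describes can be sketched as a simple rule over zone states. The field names and schema below are illustrative only, not KMC Commander’s actual API:

```python
def conflicting_zones(zones):
    """Flag zones where heating and cooling commands overlap.

    zones -- list of dicts with 'name', 'heating_on', and 'cooling_on' keys
             (an invented schema, not a real KMC Commander payload)
    """
    return [z["name"] for z in zones if z["heating_on"] and z["cooling_on"]]

readings = [
    {"name": "gym",       "heating_on": True,  "cooling_on": True},
    {"name": "library",   "heating_on": True,  "cooling_on": False},
    {"name": "cafeteria", "heating_on": False, "cooling_on": False},
]
alerts = conflicting_zones(readings)  # only the gym needs attention
```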

The company’s automation solution, KMC Commander, can attach sensors to most equipment, no matter how old it is or who made it. Its Intel® processor-based gateway sends sensor data to the cloud, where managers can view everything on a single, secure platform. Users can set alarms and make changes from any device, anywhere.

A cloud-based platform relieves burdens from school IT administrators, who would otherwise need to use multiple VPNs or set up and manage a local computer network to access building sensors.

IT departments also avoid the nuisance and expense of technology upgrades. For example, many schools use Adobe Flash Player for remote monitoring communications, but Adobe recently ended support for the product. “Upgrading from Flash to newer software can cost $9,000 to $12,000,” says Jesse Shoemaker, director of OEM sales at KMC. With KMC Commander, all software upgrades are done automatically in the cloud without extra charges.

Support and Scale

KMC’s teaming with Arrow Electronics, an Intel® Solutions Aggregator, plays a key role in supporting SIs as they bring the solution to market. “As a manufacturer, we’ve had a longtime relationship with Arrow,” says Vogel. “From a deployment perspective, the Arrow team brings a huge amount of talent in understanding what’s going on in the IoT space, the convergence of IT and OT, and sales.”

And while SIs have the domain expertise, they often need support from a technological and business standpoint as technology advances. “I think it boils down to two things: support and scale,” says Roland Ducote, Arrow IoT solutions director. “We support KMC with go-to-market. Part of our job is to recruit SIs into the ecosystem and introduce them to technology like the KMC Commander.”

Energy Control in Action

The experience of one of the nation’s largest school districts shows how centralized IoT-based controls bring benefits to schools.

“HVAC systems in this district spanned the gamut from well-functioning to old and poorly functioning,” says Vogel. In the winter, dampers sometimes became stuck open, drawing in cold air that caused heating coils to freeze. Replacing coils—a frequent occurrence in the district—cost between $4,500 and $35,000 per maintenance visit, depending on how many coils were affected.
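The stuck-damper failure mode lends itself to a simple rule-based check. This Python sketch uses invented thresholds and field names, not KMC’s actual fault-detection logic:

```python
FREEZE_POINT_F = 32.0

def freeze_risk(damper_pct, outside_f, coil_f, margin_f=6.0):
    """Return True when a stuck-open damper is pulling in air cold enough
    to freeze a heating coil. All thresholds are illustrative."""
    return (
        damper_pct > 90.0                        # damper effectively stuck open
        and outside_f < FREEZE_POINT_F           # drawing in sub-freezing air
        and coil_f < FREEZE_POINT_F + margin_f   # coil approaching freeze point
    )

# A winter reading resembling the district's stuck-damper incidents:
assert freeze_risk(damper_pct=100.0, outside_f=18.0, coil_f=35.0)
assert not freeze_risk(damper_pct=20.0, outside_f=18.0, coil_f=35.0)
```

Raising an alert while the coil is still above freezing is the point: the repair becomes a damper adjustment rather than a coil replacement.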

Since connecting HVAC equipment to the KMC Commander solution, the district has gained the visibility to prevent such problems. “There was a huge amount of fault detection and diagnostics on root-cause issues,” Vogel says.

In addition, analytics revealed trends enabling the district to reduce its energy load and earn rebates from utility providers. “The monitoring-based commissioning agent used this data to identify hundreds of thousands of kilowatt hours to be saved, equating to nearly $700,000 in savings in two schools alone,” Vogel says.

The district is now implementing air-quality monitoring. Sensors will measure temperature, humidity, and concentration of carbon dioxide and particulate matter, including bacteria and viruses. Particulate matter can be reduced by increasing fresh-air intake and using filters—steps recommended by the EPA.

Better air quality may even improve student performance. “Studies have shown that optimizing indoor air quality makes employees more attentive and productive. It stands to reason that it would help students concentrate better and do their best work,” says Jason Mills, director of marketing and communications for KMC Controls.

Some innovative schools are incorporating IoT sensor data into the curriculum. Using KMC Commander, Stuart Country Day School of the Sacred Heart, a private school in Princeton, New Jersey, shares some sensor data with students, giving them the unique opportunity to use it in the classroom, a practice normally limited to colleges. KMC Commander, along with sensors and building automation controllers, helped the school reduce its energy consumption by 45%.

Further Automation on the Horizon

Even after the stimulus program, schools are likely to continue improving their building management systems—especially when they see automated controls start to pay for themselves over time. With centralized infrastructure and reporting, schools can easily demonstrate their utility spending and savings. Larger school systems can document efficient energy use to monitoring-based commissioning agents, who may help them qualify for utility rebates.

It all adds up to a greater push for automation. “There’s an anticipation of a huge amount of work coming from schools in the next few years,” Vogel says.

Some districts may incorporate new features, such as adding solar systems with battery storage. Others may adopt security cameras and people sensors to monitor social distancing or fire safety capacity, or they may work with transit systems to optimize traffic flow to and from schools. SIs can help them plan for future needs with KMC Commander, which connects with any system using standard building protocols.

“There are a million ways you can go,” says Mills. “The technology is full of potential.”

Industry 4.0: From Physical Connectivity to the Cloud

Forward-thinking manufacturers look to transformational technologies like edge AI and computer vision to increase agility, boost productivity, and lower costs. But who would have thought that simply connecting industrial endpoints to the cloud would be such a substantial obstacle?

This is a hardware challenge stemming from the sheer number of industrial communications protocols that exist today. There are hundreds of standard and custom industrial protocols. And unfortunately, many of them use different physical interfaces.

Consider just the most popular industrial protocols. Serial, display, industrial Ethernet, digital, and general-purpose communications all leverage different physical interfaces. This diversity can be problematic for off-the-shelf gateway solutions with a standard set of interfaces because many industrial applications require a custom mix of I/O. Space and resources are also at a premium in industry, meaning you just can’t afford extraneous connectivity that is rarely, if ever, used.

The lack of hardware interoperability can force prospective industrial IoT users down one of two paths: Either daisy-chain different protocol translation gateways to get all the I/O your application needs, or build a custom solution.

Either way, the outcome can be costly.
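To make the translation problem concrete, here is a toy Python model of the core job a protocol-translation gateway performs: mapping protocol-specific addresses to a common representation. The register addresses, scaling factors, and topic names are invented for illustration:

```python
# Map Modbus-style holding registers to MQTT-style topics -- a toy model of
# the translation an industrial gateway performs. All addresses and topic
# names below are invented.
REGISTER_MAP = {
    40001: ("plant/line1/temperature_c", 0.1),  # raw value scaled by 0.1
    40002: ("plant/line1/pressure_kpa", 1.0),
    40003: ("plant/line1/motor_rpm", 1.0),
}

def translate(raw_registers):
    """Convert {register: raw_int} readings into {topic: scaled_value}."""
    messages = {}
    for reg, raw in raw_registers.items():
        topic, scale = REGISTER_MAP[reg]
        messages[topic] = raw * scale
    return messages

payload = translate({40001: 235, 40003: 1750})
# payload now maps topic names to engineering units, ready to publish
```

Every extra protocol in a plant adds another map like this, plus its physical interface, which is why daisy-chained gateways multiply cost and complexity.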

Interoperability and standards are the building blocks to accommodate all of the equipment in a factory—modular hardware that supports open software is a good place to start.

Flexible Interfaces for Application-Specific IoT

American Portwell Technology, a provider of industrial PC and embedded computing solutions, offers an alternative. Its Modular KUBER-2000 Series takes a hybrid approach to the I/O challenge that offers the cost benefits of off-the-shelf systems while still providing flexibility (Figure 1).

Figure 1. The Portwell KUBER-2000 Series of industrial gateway solutions meets a range of diverse IIoT I/O requirements. (Source: American Portwell Technology, Inc.)

Each of the six KUBER-2000 IPCs leverages a common technical foundation based on dual- or quad-core Intel® Celeron® N3350 or Intel Atom® E39XX processors and a standard I/O suite. These interfaces include:

  • Dual Gigabit Ethernet ports
  • At least two USB 3.0 ports
  • Two-pin terminal block
  • DisplayPort connection

And beyond these, Portwell expands the connectivity in different ways across its SKUs to address application-specific needs. For example, the KUBER-212A offers noise-isolated LAN and COM ports to improve system fault tolerance, while GPIO and CANbus support available on the KUBER-212B allow this system to be used for automation control.

Collectively, the KUBER platforms are capable of wireless communications that include Wi-Fi, Bluetooth, GPS/GNSS, LoRa, and 4G/LTE. They also support MQTT, OPC UA, Profinet/Profibus, Modbus, and an IEC 61131-3 PLC software stack. These are just a few of the automation-centric protocols natively supported by the platforms and their varied interfaces.

By building on a common, modular hardware architecture, the portfolio of off-the-shelf Industry 4.0 gateways can function as anything from an IoT edge manager or PoE switch to a SoftPLC or automated guided vehicle (AGV) controller.

A Universal Connection to the Cloud

It’s essential that these systems can talk at the fieldbus level to communicate important data to one another, as well as to the cloud. While full platform interface and protocol stack support are great for edge connectivity, the systems still need assistance transmitting information to the enterprise.

Portwell engineers accomplished this by adopting the Standardization Group for Embedded Technologies (SGeT) Universal IoT Connector (UIC) standard as part of the KUBER-2000 software stack.

UIC is a connectivity framework that allows embedded devices to exchange data with cloud infrastructure using either the MQTT or XRCE message protocols. UIC connects disparate endpoints that speak different protocols into a single, heterogeneous network of hundreds of devices.

The open-source, hardware-agnostic industry standard is composed of three parts:

  • Embedded Driver Module – The drivers that connect the embedded system to peripherals or other systems via interfaces like those described above.
  • Communication Agent – A software component that initializes and communicates with a cloud-based application server.
  • Project Agent – A configuration tool that instructs the embedded system on which peripherals to use, how to process data, and how frequently that data should be transmitted to the communication agent.
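In miniature, the division of labor among those three parts might look like the following Python sketch. The configuration schema is invented for illustration and is not the actual UIC format:

```python
# A toy sketch of the UIC split: a project-agent-style config tells the
# device which peripherals to read, how to process readings, and how often
# to hand results to the communication agent. The schema is invented.
CONFIG = {
    "peripherals": ["temp_sensor", "vibration_sensor"],
    "process": "average",              # reduce a window of samples to one value
    "transmit_every_n_samples": 3,
}

def run_cycle(samples_by_peripheral, config, publish):
    """Apply the configured processing, then hand each result to the
    communication agent via the supplied publish() callable."""
    for name in config["peripherals"]:
        window = samples_by_peripheral[name][-config["transmit_every_n_samples"]:]
        if config["process"] == "average":
            value = sum(window) / len(window)
        else:
            value = window[-1]         # fall back to the latest raw sample
        publish(name, value)

sent = {}
run_cycle(
    {"temp_sensor": [21.0, 21.5, 22.0], "vibration_sensor": [0.1, 0.2, 0.3]},
    CONFIG,
    lambda name, value: sent.update({name: value}),
)
```

Keeping the configuration separate from the drivers and the transport is what lets the same embedded code serve different cloud back ends.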

These UIC modules enable remote management and connectivity to more than 500 cloud solutions. One of these, Microsoft Azure IoT, recently certified the KUBER-2000 Series as part of its ecosystem.

Industrial Standards Require Industrial Reliability

The industrial IoT offers the potential to revolutionize manufacturing operations. But its benefits cannot be realized if the devices running edge-to-cloud applications—from predictive maintenance to employee safety—are unable to communicate efficiently and cost-effectively.

Interoperability and standards are the building blocks to accommodate all of the equipment in a factory, whether it’s new or legacy. And flexible, modular hardware that supports open software is a good place to start.