Powering the Future: Grid Modernization Efforts in Action

It’s not breaking news to say that there’s an extraordinary demand for electricity in today’s world, as well as the need for that electricity to be reliable, affordable, and—increasingly—sustainable. What many of us don’t spend much time thinking about, though, is how much work goes on at the back end to make this possible. And, beyond that, how energy requirements can even be a driving force for innovation in the power grid.

Our panel of experts from Intel, Dell Technologies, ABB, and VMware gets into the nuts and bolts of grid modernization. The panel includes: Prithpal Khajuria, Director of Energy and Sustainability at Intel; Anthony Sivesind, Edge Solution Architect at cloud computing company VMware; Jani Valtari, Technology Center Manager at the industrial digitization leader ABB; and Russell Boyer, Global Energy Field Director at technology company Dell Technologies. And they’re all deeply committed to serving not only the power requirements of today but also the requirements of tomorrow.

What is the current state of the power grid and its recent evolutions?

Prithpal Khajuria: The grid architecture has not changed in a hundred years, but in the last decade we have started shifting towards renewables, where the most important thing is the penetration of renewables at the edge of the grid—in homes, businesses, parking lots. There we have started deploying large-scale renewable energy, mostly solar, and that has started pushing energy back to the grid.

The grid was designed as a one-way highway of electrons moving from utilities to homes and businesses. But this addition of renewables at the edge of the grid has started a two-way flow of electrons. A system was designed to operate one way, but now we have to adapt it to the new scenario. That requires us to rethink the architecture of the grid—how can we add more intelligence technology into it to get better visibility and faster decision-making capabilities going forward?

Anthony Sivesind: I’ll add onto what Prithpal just said. I agree—what once was a one-way power flow is now seeing a great shift; what once wasn’t a problem for utilities now is. And along with the power flow we’re seeing an increase in the penetration of point loads, an increase in the density of loads. Two examples of where we’re seeing that are data centers and electric vehicles.

That is a challenge for utilities, along with an increase in extreme weather events and physical and cyber attacks, all while maintaining this aging grid infrastructure. What VMware wants to do is to help implement a flexible platform so those utilities can improve their capabilities.

Russell Boyer: What the utilities have to do is figure out how to take these various challenges—like weather events and cyberattacks—and add more intelligence, add more operational capabilities to turn that data into insight, and ultimately to improve the reliability and the resiliency of the grid.

Jani Valtari: It’s a tricky challenge. We need to increase the amount of renewable energy; we need to decarbonize the energy sector. At the same time, a bigger part of society is going to require electrical energy. So we need to be at the same time very flexible, very adaptable to renewable generation, but also more secure than before. And the way to do that is to add more digital technologies. And to do that in an affordable way, we need standardized platforms—scalable solutions that can be widely deployed to many different locations across the globe.

Where are the biggest opportunities for grid modernization?

Prithpal Khajuria: What we have been doing historically is building a model-driven grid, and building it from the top down. But now we need to go bottom up, by building intelligent, data-driven systems at the edge of the grid—which in this case is the substation. So how do we build the intelligent edge and then use it to collect more data, normalize that data, and then extract more intelligence for greater visibility and faster decision-making?

We can address these challenges, and those of meeting ESG goals, by maximizing the use of renewable energy. And the only way we can maximize the utilization of renewable energy is by having greater visibility and insights. That’s what Intel sees—building a data-driven grid going forward.

“Now we need to go bottom up, by building intelligent, #data-driven systems at the #edge of the grid” – Prithpal Khajuria, @intel via @insightdottech 

How do you see emerging technologies being used to meet the needs of today?

Russell Boyer: Dell Technologies has been investing in edge and IoT for several years now, in order to harden our overall compute infrastructure and be able to offer more capabilities out at the edge. So in order to support all of this automation and real-time operational decision-making, we need more capabilities, more compute, out at the edge in the substation. And that’s just to be able to meet the requirements of today.

If you look at sustainability targets, we’re going to have to have a landing place for the AI models of the future. Today we’ve got aging infrastructure in the substation, and we really need to modernize that, and modernize it at scale, so that we can not only meet the current requirements but also those of the future.

In one example, as we start having more virtual power plants, where there’s a significant amount of generation on the distribution side, we’re going to need to improve those operational technologies to better manage that, and to achieve those ESG targets that Prithpal mentioned to make sure we favor those sustainable sources of energy.

Jani Valtari: The traditional way of handling protection control in a substation has been to use devices that you install once, and then you let them run for 10, 15 years and don’t need to touch them. Now we actually need to change the environment very frequently.

We also need to make our designs more data driven, not just so that we can collect data and get some insights but so we can react fast based on data, even in the millisecond scale. You can run things on a virtual platform and really quickly adapt whenever there’s a need to make a change in the network.

How is Intel tackling grid modernization?

Prithpal Khajuria: Intel is looking at grid modernization from multiple angles. One angle is talking to the end customers—in this case, the utilities—first. What are the challenges they are facing? How can technology help them? One of the biggest challenges we see, which Jani touched on, is the penetration of these fixed-feature function devices; they were designed to do one thing and only one thing. So Intel put together a team to build the next-generation infrastructure to standardize the hardware, and to disconnect the software from the hardware.

Intel provides the core technology, the ingredients, which is our silicon and the associated technologies around it. Then Dell comes in with its technologies, its capabilities layered on top. Then VMware comes in with its software-defined infrastructure on top of that, and ABB with its power-centric technologies on top of that. That is what the Intel vision is—bringing the whole ecosystem together to build this scalable infrastructure that can accelerate the adoption of technologies in the utility sector to drive the goals that each utility or each country in the world has for maximizing renewables and minimizing fossil fuels.

What is the value of partnerships and coalitions for grid modernization?

Jani Valtari: We’ve been looking towards a software-oriented approach already for two decades—trying to really shift things from hardware-centric to software-centric, and going from model-based towards data-based, from fixed systems to very volatile and fast-changing but still super-reliable systems.

Recently we released the world’s first virtualized protection and control system. But we cannot do this whole thing alone, so it’s been very good to have solid collaboration. For example, we need super-reliable hardware to run the algorithms, so there’s hardware development with Intel and Dell. And we are not experts on virtualization environments, so the collaboration with Anthony and VMware has also been important for us.

Russell Boyer: We’ve got to create a coalition of the willing in order to innovate. Intel has done a great job of bringing together a coalition of various software and hardware vendors, together with clients, to really put together a standard—we’ve got to influence the standards.

The other thing is we’ve got to have the collaboration with all different types of partners. As we move forward, we want to make sure that we have a whole portfolio of options to be able to support these modern platforms at the edge.

Anthony Sivesind: And not just with the partners here either but also with the utilities—I want to tip my hat to Intel for engaging all the utilities. Intel has spurred the industry with a couple of coalitions in that realm: E4S in Europe, the vPAC Alliance in America. And that’s a great chance to build those standard specifications that Russell mentioned. 

Tell us more about the importance of those industry standards.

Jani Valtari: In order to go in the direction where a solution is scalable and can be widely used in different places, we need to do everything based on global standards. In the power sector the key standard is IEC 61850. It has standardized items related to hardware; it has standardized items related to software, related to communication, related to many different protocols and aspects. When we put that as our center point, we are in a good position to create solutions that can be very widely used.

Can you expand on the grid modernization ecosystem?

Prithpal Khajuria: The Intel strategy is to make the customer—the utility—part of the journey from day one. Because at the end of the day, the customer has the problems, and they want to buy the solutions for those problems. So we get them engaged, and then we bring in a best-of-breed ecosystem with their capabilities in each area. ABB—more than a hundred years of experience in the power industry. Look at VMware—invented virtualization technology. Dell—the leading provider of hardware solutions and software components.

And Anthony touched on the fact that we have created two industry alliances focused purely on the power industry: the E4S Alliance, focused on digitalization of secondary substations, where the customers and utilities engage with each other. And the vPAC Alliance, which is focused on virtualization of automation and control in the substations.

So that has been the vision of Intel: Bring everybody together, accelerate the adoption of the technology, and deliver the benefits to the utilities and their customers.

Any final thoughts or key takeaways when it comes to grid modernization?

Jani Valtari: One key message is that technology is ready for very rapid grid modernization, and at ABB we’ll be really happy to engage with our customers on the best way to take them there.

Anthony Sivesind: I’ll echo that: We’re ready now. We have the technology, and VMware is also ready to help utilities in any way that it can to train them and bring their teams together.

Russell Boyer: If we’re going to achieve these ESG targets, we really have to accelerate the deployment of new technology. And Dell is committed to developing the latest technology to make that happen.

Prithpal Khajuria: My message is to the utilities: Let’s put a migration plan together. We can walk you through the journey of a pilot or proof of concept, to a field pilot, to a deployment. That migration plan needs to be stitched together, and Intel and its ecosystem partners are here to help.

Related Content

To learn more about efforts to modernize the grid, read Grid Modernization Powers the Way to a Decarbonized Economy and listen to The Driving Forces Behind Grid Modernization. For the latest innovations from these companies, follow them on LinkedIn at: ABB, Dell Technologies, Intel Internet of Things, and VMware.

 

This article was edited by Erin Noble, copy editor.

Machine Vision Automates Workplace Safety in Manufacturing

Prioritizing workplace safety in manufacturing is top of mind for every manufacturer. But achieving it can be a difficult and costly process. Reducing risk in a factory means constant monitoring to identify environmental safety issues and proper workplace precautions. And this kind of vigilance can be labor-intensive.

“On average, a 10,000-square-meter factory has to have a minimum of 10 safety personnel, with two HSE (Health, Safety, and Environment) officers required for video supervision and monitoring,” says Stephen Li, CEO at Aotu, an AI company providing machine vision-based health and safety solutions.

That represents a significant investment for manufacturers, especially as industrial facilities need round-the-clock supervision. In addition, manual monitoring comes with its own limitations.

“Plant safety personnel work hard—but they’re stretched thin. They can’t detect most health and safety problems immediately, and they’re never going to cover 100% of the scenarios,” says Li. “Plus, there are delays in responding to issues in a timely fashion, since a human being has to actually make a phone call or go to the site of a safety violation in person to observe or correct it.”

It’s a challenging situation for manufacturers, who are committed to worker safety but are also under pressure to tighten budgets and optimize processes. But new machine vision health and safety solutions may provide an answer that keeps factory workers safe and satisfies the demand for greater efficiency.

Machine Vision Automates Safety Monitoring in Bottling Plant

A deployment of Aotu’s machine vision solution at a bottling plant in China is a case in point.

The plant is operated by a major beverage company. The sheer size of the facility means many different areas to monitor, including rooftops, ceilings, boilers, waste areas, warehousing facilities, and more. Plus, the bustling site is filled with factory workers performing a wide variety of tasks, making supervision of employee behavior a complex undertaking.

In collaboration with Intel, Aotu developed a machine vision-based health and safety solution designed to analyze video feeds from the bottling plant and automatically alert safety personnel when an issue is detected.

“AI can monitor workplace environments in real time, identifying potential hazards and ensuring compliance with safety protocols. This proactive approach to safety can reduce accidents and improve factory workers’ well-being,” says Zhuo Wu, Software Architect at Intel.

The system’s AI algorithms are configured to monitor for environmental safety issues. The deployments cover nearly 1,000 key supervision points within the factories. At the same time, AI also analyzes video feeds for behavior-based safety violations: failure to wear proper protective gear, unsafe climbing and walking, unauthorized access to high-risk areas, violations of maximum occupancy limits, and so on.

If the system detects a problem, it captures a 30-second recording of the safety issue, classifies it as either a major or minor emergency, and sends an alert to a human supervisor for verification and response. If the problem is serious enough, a safety official can remotely trigger an on-site alarm and warning message to alert workers to imminent danger. For less severe issues, safety personnel have the option to follow up later for resolution and worker training.
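The detect-classify-alert workflow described above can be sketched in plain Python. This is an illustrative outline, not Aotu’s actual code: the violation labels, severity rules, and action names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Violation types treated as major emergencies (hypothetical labels,
# not Aotu's actual taxonomy).
MAJOR_VIOLATIONS = {"unauthorized_access", "unsafe_climbing"}

@dataclass
class SafetyEvent:
    violation: str            # e.g. "missing_ppe", "unauthorized_access"
    camera_id: str
    detected_at: datetime
    clip: tuple = field(init=False)

    def __post_init__(self):
        # Capture a 30-second recording window, as described above.
        self.clip = (self.detected_at, self.detected_at + timedelta(seconds=30))

    @property
    def severity(self) -> str:
        return "major" if self.violation in MAJOR_VIOLATIONS else "minor"

def route_event(event: SafetyEvent) -> str:
    """Decide what the supervisor-facing workflow does with an event."""
    if event.severity == "major":
        return "trigger_onsite_alarm"   # imminent danger: remote alarm + warning
    return "queue_for_followup"         # minor: later resolution and training

event = SafetyEvent("unauthorized_access", "cam-042", datetime(2023, 3, 1, 9, 30))
print(event.severity, route_event(event))  # major trigger_onsite_alarm
```

In a real deployment the human supervisor verifies the alert before any alarm fires; the sketch only shows how severity might gate the two response paths.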

After implementing the solution, the bottling plant saw an increase in both the efficiency and the efficacy of its safety program. “The use of AI reduced the workload of the plant HSE staff, and it ensured that safety issues were no longer going to be ignored,” says Li. “In addition, safety awareness among front-line workers improved significantly.”

#MachineVision health and safety solutions are gaining traction among large manufacturers—and their adaptability, cost-effectiveness, and ease of deployment should make them attractive to SIs and smaller industrial businesses. Aotu via @insightdottech

Flexible Platform for Video Analytics

For a machine vision solution to be broadly useful to the manufacturing sector, it must be adaptable. A bottling plant, after all, is quite different from an auto parts factory, a high-tech fabrication site, or a chemical facility.

To create a robust yet flexible machine vision platform for industrial health and safety, Aotu decided to partner with Intel. Together, the companies were able to leverage the capabilities of a number of Intel® hardware and software tools:

  • 11th Generation Intel® Core processors offer optimization and acceleration for deep learning, AI, and machine vision scenarios.
  • Intel® Iris® Xe GPUs are particularly well-suited to computer vision tasks such as smart video processing.
  • Intel® Xeon® Scalable processors enable configurations that require heavier workloads and are also suitable for use in harsher industrial settings due to their ruggedized design and wide operating temperature range.
  • The Intel® OpenVINO toolkit provides pre-trained AI inferencing models and reference models for common industrial safety scenarios—as well as a foundation for the rapid development of custom AI algorithms.

The use of OpenVINO was particularly important when working with AI models for workplace safety in manufacturing scenarios. Acquiring a diverse data set that covers a variety of safety situations often requires extensive effort—especially when it involves real-world scenarios—and training these models can be computationally intensive and time-consuming. Aotu has a set of tools designed to streamline the process of data collection and labeling, and with OpenVINO integration it can run optimized pre-trained models, greatly speeding up the data set generation process.

“OpenVINO provides a set of tools and optimizations to enhance the performance of AI models. We use it to reduce the model size and improve inference speed without significant loss in accuracy,” says Li.
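One common way toolkits like OpenVINO shrink models is post-training quantization: mapping 32-bit float weights to 8-bit integers, cutting model size roughly 4x at the cost of a small accuracy loss. The following is a simplified plain-Python illustration of the idea, not OpenVINO’s actual implementation:

```python
# Affine-quantize float weights to int8 with a single scale factor.
def quantize_int8(weights):
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    return [max(-128, min(127, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The values come back close to the originals; the residual error is the
# accuracy trade-off Li refers to.
print(max(abs(a - b) for a, b in zip(weights, restored)) < 0.01)  # True
```

Production toolchains add per-channel scales, calibration data, and operator fusion on top of this basic scheme, which is where most of the inference speedup comes from.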

Thanks to Intel’s hardware and software capabilities, the company can offer no-code and low-code AI customization and deployment. This enables end users to execute inference tasks across different devices efficiently, maximizing computing power while achieving low latency and high throughput for their solutions.

Towards Safer and More Efficient Industry

Machine vision health and safety solutions are gaining traction among large manufacturers—and their adaptability, cost-effectiveness, and ease of deployment should make them attractive to systems integrators and smaller industrial businesses as well. Especially as more industrial environments start to implement automated solutions such as collaborative robots, industrial AI can be used to ensure AI-driven robots can work alongside humans, reducing sickness and injuries.

But beyond the health and safety benefits, the inherent flexibility of these solutions combined with the power of OpenVINO will open other use cases as well. For instance, the platform can be extended to include defect detection, production line automation, predictive maintenance, and supply chain management.

OpenVINO’s “ability to quickly process and analyze visual data makes it an invaluable tool for enhancing quality control, reducing downtime, and increasing efficiency,” Wu says.

In the future, look for computer vision to further the digital transformation of manufacturing in new and innovative ways, making Industry 4.0 safer, more efficient, and more profitable for all.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

The article was originally published on March 24, 2023.

Edge AI Enables Retail Digital Transformation

There is a paradox at the heart of the digital transformation of retail. On the one hand, AI offers retail businesses some undeniably attractive capabilities. Computer vision product recognition enables self-service checkout, automated restocking, and loss prevention solutions. Behavior recognition allows companies to create personalized shopping experiences for their customers. And behind the scenes, automated data analysis means streamlined operations and better supply chain management.

But on the other hand, many businesses are still wary of AI solutions—even though they recognize the potential benefits.

“There are several reasons why retailers are hesitant to adopt AI solutions, but the biggest factors by far are the lack of in-house technical skills needed to implement them—as well as plain old fear of the unknown,” says Liangyan Li, Head of Global Sales at Hanshow, a solution provider of digital store solutions for the retail sector.

There is justification for such concerns, because implementing AI in a retail setting entails significant technological hurdles. To begin with, it means building a high-performance system that can process vast amounts of data in real time. In addition, there is an innate complexity to retail automation, which usually involves multiple technologies and computing workloads. And finally, there is an element of IT overhead as well: the ongoing need to monitor and maintain a solution after deployment to ensure stability.

The good news for retailers—and for retail systems integrators—is that a new era of ready-to-deploy edge AI solutions has already begun. Built atop next-generation processors, and using software tools designed for edge computing, these solutions offer simple, effective implementations to would-be adopters.

What is the key to building solutions that meet the needs of #retail businesses? The combination of industry-specific #AI know-how and enterprise-tier #technologies designed for ease of deployment and performance at the #edge. @hanshowofficial via @insightdottech

Edge AI Solutions Engineered for Retail

What is the key to building solutions that meet the needs of retail businesses? The combination of industry-specific AI know-how and enterprise-tier technologies designed for ease of deployment and performance at the edge.

Hanshow’s hardware and software technology stack, combined with its experience in developing AI applications for retail, enable a flexible, user-friendly solution—and one that addresses the traditional concerns of business decision-makers in the sector. Here, Li credits Intel with helping to bring Hanshow’s solution to market.

“Intel is unmatched as a platform for stable, reliable edge computing—particularly when attempting to develop a comprehensive, seamless solution for the end user,” says Li.

Hanshow’s solution incorporates a number of different Intel technologies:

  • Intel® Core processors handle heavy edge workloads and image processing tasks.
  • The Intel® Media SDK gives developers access to media workflows and video processing technologies—shortening time to market.
  • The Intel® OpenVINO toolkit speeds AI application development and helps optimize visual processing algorithms.
  • Microsoft Azure Cognitive Services allows developers to build sophisticated AI algorithms even if they don’t have machine learning experience.

On a practical level, Hanshow’s Intel technology-based solutions have the added benefit of being relatively easy to implement in a working environment—and can thus bring about dramatic improvements to operational efficiency in a very short time.

Smarter Shelves from Europe to Japan

Hanshow’s smart shelf management deployments in Europe and Japan are a case in point.

Despite the geographical distance, both of Hanshow’s retail customers faced similar challenges: a need to gain greater insight into what was going on in their stores to improve efficiency and boost sales.

The European business, a large supermarket chain with a global footprint, was facing frequent shortages of fresh food in its stores. The main cause of this problem was the inability of employees to identify out-of-stock (OOS) products and take steps to replenish them in a timely fashion.

The Japanese company, a large chain of department stores, was having difficulty identifying the habits and preferences of its shoppers, hampering the business’s marketing efforts.

Hanshow implemented a comprehensive AI solution at both companies. In the supermarkets, it used computer vision cameras to take images of fresh food stacks to provide near real-time data on stock. In the department stores, the company implemented a digital shelf solution that encompassed marketing, OOS management, human-product interaction, customer demand analysis, and smart advertising.

The results were dramatic. The supermarkets saw their average OOS duration drop from 2.5 hours to 1.5 hours—a 40% improvement—while also eliminating the need for employees to perform daily manual inspections. The department store chain, for its part, saw an immediate effect on sales: an increase of nearly 20% in sales of active products when single-product recommendations were implemented in digital shelf areas.
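The 40% figure follows directly from the drop in average out-of-stock duration:

```python
# Average out-of-stock duration before and after the deployment (hours).
before, after = 2.5, 1.5
improvement = (before - after) / before
print(f"{improvement:.0%}")  # 40%
```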

The Transformation of Global Retail

The promise of AI in the retail sector is not new. But the emergence of comprehensive, easy-to-deploy solutions will turn that promise into a reality.

It’s hard to overstate the effect this will have—especially as adoption increases, and systems integrators and technology companies begin to develop the retail AI ecosystem in earnest. Expect to see more complex computing workloads, multi-architecture applications, and new benchmarks for operational efficiency and consumer experience.

This is why Li talks in terms of the “transformation of the global retail market.”

“AI helps retailers provide consumers with more personalized services, accelerates business operations and commodity circulation, and delivers more valuable data insights,” he says. “It will allow retailers to reshape the relationship between people, products, and markets.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI-Driven Platforms Take the Data Center to the Edge

Data has driven the industry for a long time, but it used to be that heavy-duty analytics happened only in the cloud. Take the example of a manufacturing company. Information from machines was aggregated and processed in the cloud, and next steps were planned in response. The edge was a mere data aggregator, routing data to the cloud to do all the heavy lifting.

But today, moving the data center out to the network edge has come into its own. It’s flexing its muscle, thanks to increased computing power and the ubiquity of IoT implementations. Edge solutions enable near real-time data processing and greater control over essential information, a boon for enterprises. In our manufacturing example, split-second decision-making can enable predictive maintenance of assets in real time.

Edge Computing Orchestration

Even better, moving to the edge does not mean having to say goodbye to the orchestration and management capabilities that the cloud offers. Managing and scaling compute in the cloud is a known entity. Sure, you have tens of thousands of servers running applications, but they’re all in one central location, managed by one team of IT professionals.

“When you’re talking about the edge, you’re still talking about thousands of servers, but unlike the traditional data center, now they’re distributed across hundreds or thousands of physical locations,” points out Jeff Ready, CEO of Scale Computing, a provider of edge computing, virtualization, and hyperconverged solutions.

Maintaining IT staff at each edge location is impractical and expensive, a problem solved by orchestration management software. Scale Computing delivers a hassle-free, cloud-like experience at the edge. The Intel® NUC Enterprise Edge Compute Built with Scale Computing replaces the need for distributed, on-premises IT personnel—a small, centralized team can be just as agile.

Edge Computing for Every IT Scenario

A small, centralized team is exactly what Jerry’s Foods, a Minneapolis-based grocer with 40 locations across the country, works with. The retailer has layered many applications, including point-of-sale software, video analytics, and others on its operating system.

Jerry’s edge AI-enabled solutions facilitate impressive personalization and revenue-boosting strategies, adjusting in-store ad delivery depending on the contents of a shopper’s cart. This kind of real-time analytics needs compute to be reliable and always available, which is what Scale Computing ensures.

When one of Jerry’s locations was damaged, the store was no longer accessible to the community. Its IT team was able to gain access to the SC//Platform appliances, and restore all applications and basic store functionality to stand up a temporary store in a tent in the parking lot. This allowed the local community to have continued access to life-sustaining food and beverage. “This is a small team of IT folks managing locations around the country, and they were able to pull it off with the SC//Platform products they had in place,” Ready explains.

Scale Computing works with system integrators and reseller partners as avenues to reach enterprises looking for edge orchestration solutions. These partners can also work with Scale Computing to deliver additional services like migration services and disaster-recovery planning.

“The beauty here is that the selection and #configuration of all #applications can be done centrally and via one portal” – Jeff Ready, @ScaleComputing via @insightdottech

Self-Healing Technology for Turnkey Applications

The need for a self-healing platform became apparent when Ready and his co-founders saw the problems IT routinely faces: the infrastructure works fine on day one, but gets progressively more difficult to troubleshoot as additional components get bolted on over time.

Ready and his team understood that error detection and mitigation needed to be baked into the foundational architecture. And for IT teams, especially those that are remote from physical locations, it helps to have self-healing technology—problems fix themselves automatically.

The HyperCore operating system is installed on the Intel® NUC for a small-footprint, edge computing orchestration and management solution. The OS provides active error detection and mitigation of problems using a technology called Autonomous Infrastructure Management Engine (AIME), an AIOps system based on pattern recognition, looking for patterns that indicate something is broken.

When it locates a problem, SC//HyperCore looks through its Rolodex of problems and associated solutions, and, if it finds a match, executes the corresponding fix automatically. When the system detects a problem that is not in its vocabulary, it alerts IT so the problem can be resolved manually. If the same problem recurs a few times, its fix gets baked into the Scale Computing platform, which becomes smarter over time.
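The detect, look up, fix-or-escalate loop described above can be sketched in a few lines of Python. The pattern names and fixes here are hypothetical placeholders, not AIME’s actual catalog:

```python
# A playbook mapping known problem patterns to automated fixes.
KNOWN_FIXES = {
    "disk_latency_spike": "migrate_vm_to_healthy_node",
    "nic_flapping": "failover_to_secondary_link",
}

def handle(pattern: str) -> str:
    """Apply a known fix automatically, or escalate to humans."""
    if pattern in KNOWN_FIXES:
        return f"auto-fix: {KNOWN_FIXES[pattern]}"
    return "escalate: alert IT"

def learn(pattern: str, fix: str):
    """Once a recurring problem's fix is validated, bake it into the platform."""
    KNOWN_FIXES[pattern] = fix

print(handle("nic_flapping"))   # auto-fix: failover_to_secondary_link
print(handle("fan_failure"))    # escalate: alert IT
learn("fan_failure", "throttle_node_and_ticket")
print(handle("fan_failure"))    # auto-fix: throttle_node_and_ticket
```

The real system matches on telemetry patterns rather than string labels, but the escalate-then-learn loop is the same: unknown problems go to IT once, and their validated fixes join the playbook.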

SC//HyperCore at each site connects to Scale Computing Fleet Manager, which monitors the health of entire deployments. SC//Fleet Manager also facilitates zero-touch deployments on-site. This means that everything a location needs for edge computing, including vertical-specific apps, gets dispatched automatically from the central portal when the NUC is first plugged in.

“The beauty here is that the selection and configuration of all applications can be done centrally and via one portal,” Ready says. “The solution is scalable, so when enterprises want to expand from 10 to 100 or 1,000 locations, it’s just copy and paste, it’s turnkey, and there’s no change in how you manage it. Automating the fixes and deployments is like having an extra IT person on-site to cover all locations.”

The Future of Distributed Computing

The need for that extra edge will only increase in the future.

Ready reminds us that computing goes through its cycles of centralized and distributed computing periodically. “This isn’t the first time we’ve been here in IT. We’ve gone from centralized computing originally in the mainframe era to distributed computing, client-server-type architectures. That evolved back to centralized data centers and the cloud; now we’re going back to distributed.”

“Edge computing effectively completes the cloud vision,” Ready says. “The cloud was never meant to imply a large data center in Seattle; it meant computing resources, ubiquitously available.” And ensuring that those resources are available when needed and to the extent needed without a heavy lift on IT? That’s where Scale Computing comes in. “It makes the edge behave like the cloud,” he says, allowing for always-available compute power at scale, managed seamlessly.

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

Network Security Gets Next-Gen Performance Without the Cost

We know that 5G networks offer significantly higher throughput, more capacity, and lower latencies than legacy networks. But these benefits come at a price, which for enterprises and cloud-service providers (CSPs) means upgrading existing unified threat management (UTM), firewall, IPSec, and other security infrastructure to systems capable of monitoring and securing 5G data traffic.

Everyone wants more bandwidth, but higher bandwidth means more data traffic, and all that traffic must be secured. To stay competitive, enterprises and CSPs want to offer their own customers improved performance at the same cost, meaning they expect network security specialists to offer higher-performance security at the same price points as current solutions. Security providers have no choice but to pass these requirements on to security appliance vendors, who are expected to deliver next-generation performance at previous-generation value.

To address this price-performance pinch, security appliance vendors like CASwell are developing solutions based on 3rd Gen Intel® Xeon® Scalable processors that deliver scalable 100 Gbps Ethernet performance and meet the line-rate security demands of next-generation networks.

Toeing the Line Rate with Reconfigurable Xeon® Appliances

Line-rate security implies that the security infrastructure can inspect streaming data traffic for threats in real time, without adding latency or buffering. At 5G speeds, which can reach 20 Gbps, achieving line-rate packet inspection is far more complex than at lower bandwidths. Challenges include supporting all the different types of data and packets so no information is lost mid-transmission, and optimizing the underlying hardware platform to maximize throughput regardless of the software or application it runs.
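To get a feel for those numbers, a rough back-of-the-envelope calculation shows the worst-case per-packet budget at 20 Gbps, assuming minimum-size Ethernet frames (64 bytes, plus 20 bytes of preamble and inter-frame gap on the wire):

```python
# Worst-case per-packet inspection budget at a 20 Gbps line rate.
LINK_BPS = 20e9          # 5G peak throughput cited above
WIRE_BYTES = 64 + 20     # minimum Ethernet frame + preamble/inter-frame gap

bits_per_packet = WIRE_BYTES * 8                  # 672 bits on the wire
packets_per_sec = LINK_BPS / bits_per_packet      # ~29.8 million packets/s
budget_ns = 1e9 / packets_per_sec                 # ~33.6 ns to inspect each one

print(f"{packets_per_sec / 1e6:.1f} Mpps, {budget_ns:.1f} ns per packet")
```

Roughly 34 nanoseconds per packet, worst case, is why line-rate inspection at these speeds demands many cores, hardware offload, or both.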

Because of these requirements, most security appliance designs are “semi-custom,” meaning that there is some level of fine-tuning for every customer. Of course, ODM services required to tweak a hardware platform to specific customer or application requirements aren’t cheap, and the goal is to deliver next-generation performance at previous-generation costs.

One way to accomplish that is by designing modular, reconfigurable systems from the ground up. For example, CASwell has developed the CAR-5060 rackmount appliance based on two 3rd Gen Intel Xeon processors and up to 512 GB of DDR4-3200 ECC memory spread across as many as 16 RDIMMs (Figure 1). The 3rd Gen Xeons onboard the CAR-5060 can each contain up to 36 cores and 72 threads for packet processing and data filtering, while Intel® QuickAssist Technology (Intel® QAT) built into the companion Intel® C627 Chipset offloads cryptographic workloads to improve processor performance as much as 1.5x over previous-generation Xeons.

Image of the CASwell CAR-5060 rackmount appliance, which features two 3rd Gen Intel® Xeon® Scalable processors
Figure 1. The CASwell CAR-5060’s modular architecture lets network security providers configure the platform with various expansion cards to meet specific use case requirements. (Source: CASwell, Inc.)

But in addition to the Xeon processors, the CAR-5060 architecture provides eight PCIe Gen 4 x8 slots and one PCIe Gen 4 x16 slot to support different combinations of storage modules, GPU/FPGA acceleration cards, and/or up to eight CASwell network interface cards (NICs) with as many as eight high-speed Ethernet ports each.

In other words, network security providers can configure the scalable 2U system with as many as 64x 10 GbE channels for a total platform bandwidth of 640 Gbps while still taking advantage of commercial off-the-shelf (COTS) pricing.

“A key difference between the CAR-5060 and previous generations is that this model is scalable in terms of the hardware and provides a higher throughput. Network service providers can choose the bandwidth that suits their application,” says Yannic Chou, AVP of Product Management at CASwell. “And they can select other options, such as AI compute capability and storage, as these systems are sometimes used for cloud storage. In addition, they may choose redundant power modules, a common feature.”

“This model is scalable in terms of the #hardware and provides a higher throughput. #Network service providers can choose the bandwidth that suits their #application” – Yannic Chou, CASwell, Inc via @insightdottech

Above the Line Rate with DPDK

Despite the flexibility, scalability, and cost efficiency of platforms like the CAR-5060, application tuning is still required to get the most out of any security appliance. This makes the Intel® Data Plane Development Kit (Intel® DPDK) the next step for network security providers looking to build and implement a next-generation firewall, UTM, IPSec, or other similar security function.

The Intel DPDK is a suite of network and data plane libraries that offloads packet-processing tasks from the operating system. Running on Intel Xeon processors, DPDK can accelerate packet processing by up to 10x and has become a de facto part of the development suite for those looking to maximize performance and improve time to market.

This is joined by Intel® Boot Guard, a hardware mechanism in Xeon processors that protects the basic input/output system (BIOS) from unauthorized modification at boot time to ensure the ground-up integrity of network appliances. In an industry where deployment speed is another top priority, the ability to streamline optimization and security engineering with tools like DPDK and Boot Guard helps OEMs configure platforms like the CAR-5060, port applications to them, and get up and running relatively seamlessly.

Scalable Network Security Solutions: Next-Gen Performance, Previous-Gen Cost

In practice, network service providers need to upgrade their security platforms about every three to five years, at which point many will try to optimize software stacks even further to squeeze every bit of headroom from their hardware appliances. Since there’s usually no way to know exactly what type of performance or functionality will be needed down the road, this has been the best defense CSPs and enterprise IT organizations have had against price-performance obsolescence. When that fails, new appliances are required.

Thanks to its expansion slots and compatibility with a range of network interface modules and adapter cards, upgrading the CAR-5060 is much simpler and more straightforward than in the past. In three to five years, customers can simply slot a new, higher-bandwidth NIC or acceleration card right into the front panel without even opening the chassis.

And that’s how network security providers can beat the price-performance pinch.

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

Private 5G Lowers Mining Costs, Increases Safety

We all know the frustration of getting stuck somewhere with poor cellular coverage. For most people, life without network connectivity is a temporary annoyance. But for the mining industry—typically located across wide geographies in remote areas—it’s a constant and costly limitation.

“In a time of economic uncertainty and volatile commodities markets, capital expenditure is very risky for mining companies,” says Julian Ye, Director of ADLINK Network and Communications Business at ADLINK Technology Inc., a global leader in edge computing. “But at the same time, demand for raw materials is skyrocketing. It’s a classic case of having to do more with less—or risk missing out on a huge opportunity.”

Now private 5G networks, edge AI, and computer vision, which can be deployed in the most remote of locations, offer exciting possibilities.

Private 5G at the Remote Edge

ADLINK’s implementation at a limestone mine in the Emei Mountain region of China’s Sichuan Province is a case in point. The mining operation covers a wide geography and each year extracts 4 million tons of the important raw material used in the construction industry, agriculture, metallurgy, and water purification.

The mine’s owners had identified several operational shortcomings.

First, there were outdated systems and equipment that resulted in miscalculations and waste. Of particular concern was the mine’s manually operated weighbridge, an outsized set of scales used by miners to weigh departing vehicles and measure each load of materials exiting the work area. There was also a lack of adequate centralized supervision at the site, making it hard to remedy issues in a timely fashion and ensure worker safety.

Like so many mining businesses, the company was attempting to ramp up production, setting an output target of 8 million tons of limestone annually. The mine’s operators were hoping to do this without expanding beyond existing land or significantly increasing headcount.

However, there were some major obstacles to accomplishing this ambitious goal. As Ye explains, “The mine suffered from poor wireless signal throughout the entire open pit area, and because of the size of the site, laying out a wired network to expand that coverage was a nonstarter.”

Working with ADLINK, the mine operator was able to implement a private 5G network together with edge AI-powered solutions to address its business needs.

5G, Edge AI, and Computer Vision: A Winning Combination

First, ADLINK helped the mining company establish a private 5G network. This was done using hardware from ADLINK’s technology partner Innogence Technology, a 5G radio access network (RAN) equipment and services company. Innogence provided a 5G picocell—essentially a miniaturized cellular tower—as well as an industrial gateway and 5G system core to control network functions.

With sitewide 5G coverage in place, ADLINK leveraged its own 5G Small Cell Solution, ruggedized edge computing hardware, as well as its expertise in developing edge AI-based solutions to meet mining business challenges.

A fleet of smart security cameras was installed in order to improve supervision and monitoring. This lets site management keep an eye on operations from a centralized location—allowing them to correct safety violations in real time and reduce the need for in-person supervision throughout the facility. The camera network also supports security automation, monitoring entry and exit points, and perimeter fencing to prevent trespassing.

The combination of 5G, edge AI, and high-performance computing also makes it possible to automate the mine’s weighbridge, which now operates autonomously. Computer vision camera systems identify a specific load by reading a truck’s license plate. Drivers are able to input additional data through a simple on-screen interface. There is minimal risk of error or mismeasurement, and the extracted limestone weighing workflow has been greatly streamlined.

Ye says that ADLINK’s technology partnership with Intel was especially helpful in developing its solution. “Intel provides a lot of help when it comes to developing 5G-enabled solutions, from hardware accelerators to the Intel® FlexRAN SDK. Intel is also, of course, an excellent platform for building stable, high-performance edge computing applications.”

The end result of all this technological collaboration was striking. Overall, the mining company estimates that it was able to increase its weighbridge efficiency by 200%. Modern, centralized, real-time monitoring is now available to site managers and safety officers. And the dedicated 5G network is opening up other opportunities for digital transformation at the site, including industrial as well as back-office AI-based process optimization.

Private #5G networks and #EdgeComputing will help solve many problems for the mining industry in the years to come. @ADLINK_Tech via @insightdottech

The Case for Private 5G Everywhere

Private 5G networks and edge computing will help solve many problems for the mining industry in the years to come. But interestingly, the same qualities that make private 5G so useful in developing solutions for the far-flung edge will also make it an attractive option in manufacturing.

“5G isn’t only about speed. For enterprises, the true benefit is that 5G networks offer large bandwidth, wide connection, and ultra-low latency,” says Ye.

Private 5G networks and edge computing combine the benefits of high-speed wireless and AI with the advantages of on-premises technology, resulting in a powerful, secure, and independent platform for digital transformation. ADLINK envisions private 5G use cases that range from smart manufacturing and security to energy production and logistics—essentially anywhere that requires intelligent edge solutions running on a powerful, secure network.

As Ye sees it, this will both transform as well as empower businesses: “In the future, private 5G and edge computing will help all kinds of companies make the move from simple automation to real autonomy.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI Video Analytics Improves Operational Efficiencies

City governments and private enterprises might not realize it, but they could be sitting on a treasure trove: large volumes of video imagery from closed-circuit television (CCTV) cameras. But it takes intelligence to understand what is happening in all that video, according to Saurabh Pachnanda, Product Manager at Vehant Technologies, a company specializing in security-based products.

For example, you can find cameras all over public spaces such as parking lots, apartment building entrances, retail shops, hospitals, and smart cities. And as a result, government and private enterprises are drowning in a glut of raw video footage and data that is useless unless they can derive valuable insights from it.

The Growth of Video Analytics

Case in point: a camera might merely spot dozens of cars turning around on the highway, but it takes intelligence to recognize that this could indicate an obstruction, Pachnanda explains. Sure, human operators can sift through footage, but finding a few seconds’ worth of useful information in hours and hours of video is like looking for a needle in a haystack. It’s grossly inefficient and expensive to deploy human labor to the task.

Luckily, advances in machine learning computer vision algorithms deliver a more sharply honed ability to derive intelligence from imagery (Figure 1). Pachnanda has noticed that these, combined with increased computing power at the edge, accelerate demand for actionable video analytics. “There’s an increased need to secure some of these technologies and bring productivity even higher,” he says. “Computer vision machine learning algorithms help deliver intelligence so users can understand what’s happening. Instead of having someone focus on the raw video data all the time, we can get their attention on specific events or specific insights.”

Vehant uses machine learning and AI algorithms to provide valuable insights to businesses and operators.
Figure 1. Vehant uses machine learning and AI algorithms to provide valuable insights to businesses and operators. (Source: Vehant)

The Many Use Cases for Video Analytics

AI and machine learning can also accept inputs from other sensors and find larger trends and correlations in data, which humans might otherwise miss. When a hospital in India struggled with inefficient use of its parking lots by medical workers, it merged an existing decal-based system with automatic number plate recognition that used ML algorithms. Layering such intelligence on top of existing methods helped hospital management better match staff with their vehicles to ensure employees followed parking limits and did not overstay their allotted times.

And this is just one example of the importance of video analytics today. Vehant’s AI video analytics help in three broad verticals: smart cities, enterprise analytics, and video incident detection.

“#ComputerVision #MachineLearning algorithms help deliver intelligence so users can understand what’s happening” – Saurabh Pachnanda, @VehantTechnolo1 via @insightdottech

Vehant has a bank of pre-trained models for specific use cases to help customers leverage existing work, instead of having to reinvent the wheel. This allows Vehant to fine-tune the models for various needs so customers can minimize deployment time. The company uses its off-the-shelf packages as a baseline for further on-site configuration and customization.

Depending on the use case, the company can alert customers of any findings through a mobile app, text, or email. Vehant also provides a web interface to access all insights through a single pane of glass. Notifications include rich metadata that gives details about location, time, what sort of incident was captured, along with a few relevant seconds of video stream, Pachnanda explains.
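As a rough illustration, such a notification's metadata might look like the following JSON payload. The field names and values here are hypothetical, not Vehant's actual schema:

```python
import json

# Hypothetical incident notification; field names and values are illustrative only.
alert = {
    "incident_type": "vehicle_obstruction",
    "location": "Camera 12, parking lot B",
    "timestamp": "2023-04-02T14:31:07Z",
    "clip_seconds": 6,                           # the few relevant seconds of video
    "channels": ["mobile_app", "sms", "email"],  # delivery channels mentioned above
}

print(json.dumps(alert, indent=2))
```

Packaging location, time, incident type, and a short clip in one structured payload is what lets a single web dashboard and multiple notification channels consume the same event.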

Infrastructure Needed for Video Analytics

Knowing that customers are wary of rip-and-replace solutions, Vehant takes existing infrastructure, cameras, and related systems into account when designing custom solutions.

Besides cameras, customers need computing power that varies with the volume of data processed. “When we start to go beyond a certain point, this is where GPUs (graphics processing units) come in handy because they can process that video volume very efficiently and that helps with infrastructure load,” Pachnanda explains. Leveraging different hardware technologies like CPUs, GPUs, and AI accelerators, Vehant can cater to customers’ needs no matter the amount of data. These technologies help fast-track inference at the edge and speed up delivery of insights. Vehant’s software scales up and down to accommodate the amount of data flowing in.

Vehant is particular about making the least possible impact on existing hardware infrastructures. “These are very generic computes that can be used off the shelf; there is nothing very specific that can’t be used for something else,” Pachnanda says. “That’s a cost-saving measure we keep in mind.” Customers can choose to start small and scale gradually, he adds.

The Future of Video Analytics

Pachnanda says Vehant pays particular attention to data privacy. “There are a fair number of checks and balances within the system to ensure data is not unnecessarily exposed, captured, or used,” Pachnanda says. Vehant does not retain access rights to any data captured at customer sites, whether at the edge or at a remote site.

Vehant uses Intel® processors and chipsets, and the OpenVINO toolkit for AI edge inference. “Intel works with us on our solutions very closely. They help us on the solution architectures, the final deployment, and they sometimes even help us offer an end-to-end cohesive solution for our users,” Pachnanda says.

He adds that this is just the cusp of an explosion in use cases for video analytics. “We see a lot of customers who want to understand insights from video data and are doing limited trials and experimenting with the possibilities,” Pachnanda says. “Because of the advancements on the algorithm side, we’ll see more and more industries adopt AI video analytics with open arms.”

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

ISC West 2023: Empowering Real-Time Intelligence at the Edge

Computer vision and intelligent video solutions now have the potential to revolutionize industries, improve operations, and enhance the quality of life for citizens. And with recent advancements in artificial intelligence and machine learning, it’s easier and more accessible for businesses to unleash these solutions and compete in today’s fast-paced, data-driven world.

But developing and deploying computer vision and intelligent video applications at the edge requires strategic partnerships within the ecosystem. Intel systems integrators and IoT solution aggregators, for example, provide critical expertise and technologies necessary for success. You can see this for yourself at the International Security Conference & Exposition, ISC West, taking place March 28 to 31 at the Venetian Expo in Las Vegas. Dozens of Intel® Partners Alliance members will showcase the value of intelligent video at the edge and the partnerships making it happen.

For example:

  • In retail, computer vision is used to track inventory, analyze customer behavior and preferences, and improve the overall shopping experience.
  • In manufacturing, video analytics can be used to stop defects from making it into production and to improve worker safety.
  • And in smart cities, technology and partnerships power autonomous vehicles, traffic analysis, and vehicle predictive maintenance.

Systems Integrators Make Intelligent Video at the Edge Possible

What all these use cases have in common is that they generate massive amounts of data, and if your solution is not set up properly to handle this, it can result in poor quality or loss of data. This is extremely troublesome for mission-critical applications where you need accurate and real-time intelligence and insight into operations on the fly.

At ISC West, Seneca, an Arrow company focused on computer technology for video applications, will showcase its video compression solution designed to handle massive amounts of data while minimizing storage and bandwidth requirements. The Seneca xCompress Video Stream Optimizer can provide up to 90% compression per stream, enabling higher-resolution video without compromising quality or storage. It features Intel® Core i3, i5, or i7 processors with support for up to 16 cameras.

“At #ISCWest, we’re discussing this transformation and how Intel is creating an ecosystem to provide customized #edge and #AI solutions” – Kasia Hanson, @intel via @insightdottech

Beyond mission-critical applications, Seneca also will showcase a new solution for small retail and convenience stores looking for an easy-to-install, cost-effective hardware solution for video monitoring. The new PoE NVR offers a simple out-of-the-box setup designed to reduce labor costs and provides a PoE management interface. With 8, 16, or 24 ports, no configuration, and optimal power resources, users can easily connect the cameras they need to get deeper insights into their business.

Elsewhere on the show floor, Wesco, a security distributor that works with Intel to deliver digital technology solutions from edge to cloud, will demonstrate how it can solve businesses’ pressing security challenges with physical security solutions and consultancy services. The company offers intelligent and integrated solution services to provide access control, intrusion detection, storefront protection, and operation improvements.

For example, Wesco recently used its perimeter protection, energy solutions, and video intelligence to safeguard the infrastructure of a parking facility while also acquiring insights into parking allocation, occupancy, and management. The energy solutions served to minimize the parking facility’s environmental impact by optimizing lighting in parking spaces to enhance safety and security.

TD SYNNEX, a leading distributor and IoT solution aggregator for the IT ecosystem, will demonstrate how it assists businesses in safeguarding their physical assets. The company’s VisualSolv offering encompasses audiovisual, information technology, and consumer electronics technologies to help build the cross-functional solutions partners need. The solution includes access control features for improved security, IP cameras for data capture and analysis, and networking capabilities that ensure efficient and secure data transmission and storage.

Visit these partners at ISC West to see how they provide the expertise, customization, integration, support, and compliance necessary for a comprehensive and reliable, intelligent video solution. While you’re at it, don’t forget to explore the vast array of other Intel partners at the event, including Genetec, which enhances the realms of physical security and public safety with its software, hardware, and cloud-based services. Additionally, Axis Communications will showcase its extensive range of network, video, audio, access control, and analytics solutions.

And that’s not all. Many other Intel partners will be transforming industries with their video intelligence solutions, including Megh Computing, Epic IO, WaitTime, NTT, and Paravision.

“The security market is currently experiencing an exciting phase as the shift from analog to digital has opened up new possibilities with AI and edge technologies,” says Kasia Hanson, Global Director of Security Sales at Intel. “At ISC West, we’re discussing this transformation and how Intel is creating an ecosystem to provide customized edge and AI solutions, complete with built-in security features for enhanced physical security deployments. Our vision at Intel is to empower the industry with our range of AI and edge-based technologies, enabling our partners to deliver cutting-edge and innovative solutions.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

AI Neural Networks Boost Industrial Safety

Workplace safety is an operational, financial, and moral imperative. But industrial accidents are distressingly common. And unfortunately, many plant managers shy away from safety technology due to concerns over productivity.

Thankfully, emerging technologies like AI neural networks and computer vision will enable industrial safety solutions that protect workers without negatively impacting production.

“It’s a game changer, because management no longer has to choose between safety and profitability,” says Jose Nogueira, Chief Executive Officer of Xesol Innovation, a company that specializes in the application of AI neural networks to industrial safety. “In fact, safety solutions based on AI offer important IIoT benefits when they are adopted.”

Protecting People, Machines, and Productivity

A good example is the application of AI neural networks and computer vision to improve industrial safety systems for forklifts.

Forklifts are indispensable industrial vehicles—but they are also a major safety concern. Collisions with both people and stationary objects are commonplace, causing damage to equipment, injuries, and even deaths. In the US alone, forklifts were involved in more than 7,000 serious accidents in 2020.

Collision detection systems based on radio-frequency identification (RFID), radar, and standard cameras have proven inadequate to the task of delivering workplace safety. They also produce frequent false warnings that harm productivity.

“The main drawback of the older-style systems is the inaccuracy, and also the constant alarms,” says Nogueira. “And many newer solutions still can’t identify a person in anything other than an upright posture—for example, if they’re lying down because they’re injured, or if they’re crouching while they work.”

Using its experience in AI neural networks, Xesol developed Drivox, an intelligent collision warning system that tackles the problem of industrial safety in a very different way.

The solution uses front- and rear-facing computer vision cameras to acquire detailed imagery of the vehicle’s environment. It scans for dangers in real time, and is trained to detect the human form in any position as well as environmental hazards. If a risk is detected, the driver receives an audio and visual alert on their display. But otherwise, they’re free to proceed with their work—without being interrupted by false alarms caused by mere proximity to a person or another machine.
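That distinction, alerting on genuine risk rather than mere proximity, can be sketched as a simple filter over detections. This is a simplified illustration under stated assumptions; the labels, fields, and threshold below are hypothetical and not Drivox's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person" (in any posture) or "obstacle"
    distance_m: float  # estimated distance from the vehicle
    in_path: bool      # inside the vehicle's projected travel corridor

DANGER_DISTANCE_M = 5.0  # hypothetical alert threshold

def should_alert(detections):
    """Alert only when a person or hazard is both close and in the vehicle's path."""
    return any(
        d.in_path and d.distance_m < DANGER_DISTANCE_M
        for d in detections
        if d.label in ("person", "obstacle")
    )

# A nearby person off to the side does not trigger a false alarm...
print(should_alert([Detection("person", 3.0, in_path=False)]))  # False
# ...but one in the travel corridor does, regardless of posture.
print(should_alert([Detection("person", 3.0, in_path=True)]))   # True
```

The key is that the neural network's job is the hard part, detecting a person in any posture; the alert logic then suppresses warnings for detections that pose no actual collision risk.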

#AI-based safety systems also lend themselves to #IIoT applications organically—offering significant advantages to industrial end users, AI experts, and the #SystemsIntegrators (SIs) who serve them. @xesolinnovation via @insightdottech

Case Study: From Safety to IIoT

The obvious benefit of an accurate collision detection system is improved workplace safety. But as Nogueira’s company discovered, AI-based safety systems also lend themselves to IIoT applications organically—offering significant advantages to industrial end users, AI experts, and the systems integrators (SIs) who serve them.

When implementing Drivox for a large manufacturer, Xesol was asked by their customer if they could add a number of additional features to the solution:

  • A digital safety checklist to determine the operational state of each vehicle, and the ability to automatically lock vehicles that fail the checklist.
  • A GDPR-compliant biometric ID system to handle machine startup instead of old-style magnetic cards.
  • Insights into the performance and safety practices of each driver.
  • A unified reporting and data visualization dashboard to manage a fleet of vehicles distributed across multiple partner companies.

Xesol integrated these features into their technology roadmap, fulfilling each of the end user’s requests. The result was a true win-win scenario: an extremely satisfied customer, and a reimagined solution for Xesol.

Takeaways for AI Specialists and SIs

In essence, Drivox has evolved from a safety device to a comprehensive IIoT service platform. The range of capabilities is extensive, says Nogueira: “What began as a next-generation collision detection system can now be used for fleet management, equipment inspection and reporting, to ensure that only authorized personnel operate vehicles, and for fleet maintenance planning.”

For SIs and other AI specialists, the lesson in Xesol’s growth is how a comprehensive solution can be developed by leveraging the resources of today’s AI-powered product ecosystem. In this respect, the company’s partnership with Intel was essential, says Nogueira:

“The neural network optimization tools in the Intel® OpenVINO Toolkit made Intel a natural fit for our company. But more importantly, Intel represents a huge technology ecosystem, with thousands of suppliers, assemblers, and market-ready solutions. That offers tremendous advantages in terms of speeding up product launch time.”

The Future of Industrial Safety

The significance of solutions that offer both industrial safety and other IIoT benefits goes well beyond scalability. Such solutions are also inherently versatile, meaning that they find natural use cases outside of industrial settings as well.

Drivox, for example, has already had orders from the shipping industry for use on container ships, and also from civil engineering companies that want to bring safety and IIoT benefits to excavators and road rollers. And Nogueira says he sees even more diverse applications for the solution on the horizon: “I think airport and agricultural machinery are coming soon. We’re also looking at entering the autonomous guided vehicle (AGV) and security sector.”

Longer-term, AI-enabled IIoT platforms will help unify industrial safety, fleet management, logistics, and production functions in a single system.

“IIoT and AI are helping industry move toward a truly integrated factory environment management solution,” says Nogueira. “If you’ll allow me the expression, this technology is finally bringing the dream of ‘smart factories’ closer to reality.”

 

Edited by Georganne Benesch, Associate Editorial Director for insight.tech.

The Driving Forces Behind Grid Modernization

Electric utility companies face multiple challenges in maintaining a reliable power supply. With extreme weather events, the impact of climate change, and an increasing global demand for electricity, they need to keep the lights on while also focusing on sustainability, energy efficiency, and decarbonization. These obstacles require electric utilities to rethink how they design, manage, and maintain the power grid to ensure its resilience, reliability, and affordability for the future.

In this IoT Chat episode, we hear from industry experts and thought leaders at the forefront of this transformation. They discuss the latest innovations in grid modernization, including use of artificial intelligence, machine learning, and blockchain technology, and how these solutions help make the grid more resilient and secure.

Listen Here

[Podcast Player]


Our Guests: ABB, Dell, Intel, and VMware

Joining us for this conversation:

  • Prithpal Khajuria, Director of Energy and Sustainability at Intel
  • Anthony Sivesind, Edge Solution Architect at VMware
  • Jani Valtari, Technology Center Manager at ABB
  • Russell Boyer, Global Energy Field Director at Dell Technologies

Podcast Topics

Jani, Russell, Prithpal, and Anthony answer our questions about:

  • (3:17) Power grids’ recent evolutions and changes
  • (4:58) State of the power grid today
  • (7:26) Ongoing efforts to modernize the power grid
  • (8:39) How to scale and measure the success of grid modernization
  • (9:55) The biggest opportunities for changes within the grid
  • (11:35) Emerging technologies to improve the power grid
  • (15:07) Real-world examples of grid modernization across the globe
  • (22:36) Importance of industry partnerships and standards
  • (29:38) The future of grid modernization

Related Content

To learn more about efforts to modernize the grid, read Grid Modernization Powers the Way to a Decarbonized Economy and listen to ABB Talks Smart Grids, Substations, and Security. For the latest innovations from these companies, follow them on LinkedIn: ABB, Dell Technologies, Intel Internet of Things, and VMware.

Transcript

Christina Cardoza: Hello, and welcome to the IoT Chat, where we explore the latest developments in the Internet of Things. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about the driving forces behind grid modernization with a panel of expert guests from Intel, VMware, ABB, and Dell. But before we jump into our conversation, let’s get to know our guests. Prithpal from Intel. I’ll start with you. Please tell us more about yourself and your role at Intel.

Prithpal Khajuria: Oh, thank you, Christina. I lead the Energy vertical at Intel. And Intel is at the forefront of driving grid modernization worldwide to meet the energy needs of global customers.

Christina Cardoza: Great. Looking forward to hearing more about what Intel is doing in this space. But, Russell, welcome to the show. Can you tell us more about yourself and Dell?

Russell Boyer: Yeah, my name is Russell Boyer. I work for Dell Technologies. I’m part of the Global Energy team. My role is to really develop and drive the solutions and strategies for helping energy transition, advancing decarbonization, and ensuring energy security. So, thank you for having me.

Christina Cardoza: Yeah, thanks for being here. And, Jani, also thank you for being here. Please tell us more about yourself and ABB.

Jani Valtari: Thank you for the opportunity to join this very nice webinar. My name is Jani Valtari. I come from ABB Distribution Solutions. I’m acting as a Technology Center Manager, which means that I’m in charge of the research and development activities we do around electricity-distribution systems. So, our aim at ABB is to make electricity distribution as reliable and as smooth as possible, and really boost the electrification of our society while reducing the carbon footprint.

Christina Cardoza: And, last but not least, we have Anthony from VMware. Please tell us more about yourself and the company.

Anthony Sivesind: Thanks, Christina. So, yeah, my name is Anthony Sivesind, and I am a Solutions Architect at VMware, leading our Edge Utility vertical. I came here after working as a utility engineer for 16 years. I spent the majority of that time in protection-automation control, working on standards and strategy. And, you know, now working for VMware I have a great opportunity not only to advance my own learning in networking, virtualization, and modern applications, but to begin to pay that experience forward by bringing together OT and IT technologies. So, our goal is to ensure proper implementation of new solutions being introduced to the power industry, and to help support them from the conceptual phase all the way to in service.

Christina Cardoza: Great. Well, can’t wait to hear from all of you about what’s happening in the grid space, and how it is being modernized and evolving throughout the years. I think recently there has just been an increased demand for electricity, ensuring that power is reliable, stable, affordable; but what many users don’t realize is all of the work that has to go on the back end to make this all possible.

So I would love to start off the conversation just looking at the state of the grid and how it has had to evolve and modernize over the last couple of years. So, Prithpal, I’ll start with you on this one. If you could talk about the recent evolutions and changes you’ve seen as it relates to the power grid.

Prithpal Khajuria: Oh, thank you, Christina. If we see the lay of the land, the grid architecture is more than a hundred years old. It has not changed in a hundred years, but in the last decade we started shifting towards renewables, and the most important thing is the penetration of renewables at the edge of the grid—in other words, homes, businesses, parking lots—where we started deploying large-scale renewable energy, mostly solar. And what it did was start pushing energy back to the grid.

So, the grid was designed as a one-way highway of electrons moving from utilities to homes and businesses. But with the addition of renewables at the edge of the grid it started a two-way flow of electrons. Now we are facing the challenge: a system was designed to operate one way, but we have to adapt it to the new scenario, where renewable energy is coming from homes and businesses back to the grid. That led to the biggest challenge in the power grid. And I think that requires us to rethink the architecture of the grid—how we can add more intelligence technology into it to get better visibility and faster decision-making capabilities going forward.

Christina Cardoza: Absolutely. And, Anthony, I’m wondering, from a VMware perspective, the evolutions or changes that you’re seeing, and where we are today with that evolution.

Anthony Sivesind: I’ll add on to what Prithpal said there, and I agree that the power flow has changed. What once was a one-way power flow has seen a great shift, with a lot of additional disaggregated sources on the grid. So, what wasn’t a problem for utilities now is. And along with the power flow we’re seeing an increase in the penetration of, basically, point loads—an increase in the density of loads—and that really is due to data centers and electric vehicles, to give two examples.

And so balancing those changes, along with an increase in extreme weather events and physical and cyberattacks—and doing it all while maintaining their aging grid infrastructure—is a challenge for utilities. So what VMware wants to do is help implement a flexible platform for those utilities to use to improve their capabilities.

Christina Cardoza: I can definitely see how the aging infrastructure, like Prithpal was mentioning, and the new demand from data centers and the rise of electric vehicles are putting pressure on the grid and driving these changes. But I think there’s so much more that is—not forcing, but driving—these changes, and causing businesses and utilities to really think about how they are approaching the grid. And so, Russell, I would love to hear what you think some of these additional evolution drivers are.

Russell Boyer: Sure. So, we’ve all experienced that power is critical for our modern civilization. You know, living here in Texas, we’ve lived through a few recent disasters, and what you realize is that all of our technology relies on power. And so what the utility has to do is basically figure out how do we take these various challenges, like weather events and cyberattacks and all of those, and basically add more intelligence and add more operational capabilities to turn that data into insight, and ultimately to improve the reliability and the resiliency of the grid.

Christina Cardoza: So I think it’s clear that changes definitely need to happen, and changes are already underway. So, Jani, I would love to hear from you: what are the current efforts you see out there to modernize the grid, and how else will these efforts need to build on and scale?

Jani Valtari: I think utilities today are facing a tricky challenge. On one side we need to increase the amount of renewable energy; we need to decarbonize the energy sector. But at the same time, bigger parts of society are moving to electrical energy, so we actually have even more stringent reliability requirements.

So we need to be at the same time very flexible, very adaptable to renewable generation, but we also need to be more secure than before. And the way to do that is to add more digital technologies. And to do that in an affordable way we specifically need these kinds of common, standardized platforms—what Russell was talking about—to really make this transition with scalable solutions that can be widely deployed to many different locations across the globe, regardless of country or industry.

Christina Cardoza: Yeah, absolutely, Jani. And you mentioned these ideas—reaching sustainability goals—that’s something that Russell mentioned as well. And so I’m wondering, as we try to reach these goals, as we try to modernize the grid and keep it—the power grid—reliable and sustainable, like Anthony mentioned, how do we measure success? So, Anthony, if you’d like to take that one.

Anthony Sivesind: Yeah, thanks. So, we’re seeing grid modernization happen—most commonly we see it at the grid center, right? We’ve got advanced management systems, often for transmission and distribution. They offer significant improvements in handling the power flows that we talked about at the beginning. And they offer other benefits too: of course reduced average outage time, business continuity, and improved overall power quality. They’re needed along with those edge platforms that both Jani and Russell talked about to improve your visibility and your intelligence and the data flow. How are we going to measure that?

So, I think energy companies will go back to their roots. How do they measure success today? It’s by quantifying safety and reliability with metrics, and then they can also look hard at the value they’re providing. So that’s not only the levelized cost of energy, but also the information and services they’re providing to their customers at a higher quality than ever before.

Christina Cardoza: And I’m wondering, also from an Intel perspective, Prithpal, how—where you guys see the biggest opportunities for these changes. Where can we start making grid-modernization efforts? But then how do you take those starting efforts and scale and build off of them?

Prithpal Khajuria: Yeah. I think, Christina, one thing to look at is how we build a data-driven grid. Historically we have been building a model-driven grid, from the top down. But now we need to go bottom up, using a data-driven approach, by building intelligent systems at the edge of the grid—in this case, the substation. So, how do we build the intelligent edge and use that intelligent edge to collect more data, normalize the data, extract more intelligence, and feed it onward automatically, for greater visibility and faster decision-making?

I think that is how we can address the challenges. In order to meet the ESG goals that Anthony and Russ mentioned, we need to maximize the utilization of renewable energy. The only way we can do that is if we have greater visibility and insights, and that is why Intel sees building a data-driven grid as the way forward.

Christina Cardoza: I’d like to take a minute to step back and look at some of these emerging technologies and trends that are happening within the grid that we’ve mentioned—the intelligent edge, renewable energy, AI is a big component in this. So, Russell, I’m interested in hearing from you how these technologies are being used to improve the grid, and the importance of them in this whole grid-modernization initiative.

Russell Boyer: Well, Dell Technologies has been investing in edge and IoT for several years now, to harden our overall compute infrastructure and to ultimately offer more capabilities out at the edge. In order to support all of this automation and real-time operational decision-making, we really need more capabilities, more compute, out at the edge in the substation—and that’s just to meet the requirements of today. If you look at these sustainability targets, we’re going to have to have a landing place where the new AI models of the future can land. And today we’ve got aging infrastructure in the substation, and we really need to modernize that, and modernize it at scale, so that we can meet not only the requirements of today but the requirements of the future.

I think one example that was given earlier is that as we start having more and more virtual power plants, where there’s a significant amount of generation on the distribution side, we’re going to need to improve those operational technologies to better manage that and to achieve those ESG targets that Prithpal mentioned—to make sure that we favor those sustainable sources of energy.

Christina Cardoza: I love how you talked about the requirements of today, but also meeting the requirements of tomorrow. Because I think a lot of the goals or the efforts in place are going to take years to reach, and some of the sustainability goals are decades out there. So, Jani, I’m wondering from you, how else do you see these emerging technologies being used to meet the needs of today, but also be able to meet the requirements of tomorrow and the future?

Jani Valtari: If we look a few years back, the traditional way of handling protection and control in the substation has been to use devices that you install once, and then you let them run for maybe 10 or 15 years and you don’t need to touch them much. And now we see changes happening where we actually need to adapt to a changing environment on a very frequent basis.

So it means that we are no longer designing based on models, like Prithpal was saying; we need to make it more data driven—not just so that we can collect data and get some insights, but so that we can actually react fast based on the data, even on the millisecond scale, and really keep the reliability of the network as high as possible.

And for emerging technologies, one very interesting thing we have now noticed is, for example, the virtualization of real-time functionality. No longer going to dedicated devices that you engineer for a certain purpose, but really taking a software-oriented approach, even for very time-critical protection and control functionality. You can run things on a virtual platform and really quickly adapt and change whenever there’s a need to change in the network.

Christina Cardoza: Absolutely. And so we’ve been talking about these grid-modernization efforts at a high level, but I would love to hear from each of you—because obviously you all are significant players in this space to actually making this happen, helping utilities and businesses and organizations meet their goals and really modernize their efforts in the power grid. So, Anthony, I would love to start with you. Do you have any case studies or customer examples you can share of what VMware has been doing in this space?

Anthony Sivesind: I’m going to steal the UK Power Networks example, which I think we’ve all worked on. They have a very public project they call Constellation: they’re in the process of virtualizing all their substation applications. From that, they expect not only to increase and enable the capacity of renewables on their system—which is going to offset carbon emissions—but they also plan to save their customers money in the process.

So, as they install and commission those systems, they realize they have a flexible platform, and they have an open call for innovation and competition. So, that’s impressive, what they’ve done there so far. They’re really opening the floodgates on what can be done with the data they’re going to be able to leverage now, and they’re just scratching the surface of what’s possible. So, exciting.

Christina Cardoza: Absolutely. And, Prithpal, what is Intel doing in this space? Or what can we expect from Intel in the future as we continue these grid-modernization efforts?

Prithpal Khajuria: Intel is looking at grid modernization from multiple angles. The first angle is to talk to the end customers—in this case the utilities. What are the challenges they are facing, and how can technology help them? One of the biggest challenges we saw, which Jani touched on a little bit, is the penetration of these thousands of fixed-function devices, which have been sitting in their substations for many years and were designed to do one thing and only one thing. So that was the biggest challenge for the utilities. So Intel put together a team to build the next-generation infrastructure—just like what data centers did, what telcos did—to standardize the hardware and disconnect the software from the hardware, because Intel guarantees backward compatibility with our silicon. In that way we can accelerate the adoption of technology.

So, how does this whole thing happen? I think my colleagues—Russ and Anthony and Jani—will add more to it. Intel provides the core technology, the ingredients: our silicon and the associated technologies around it. Then Dell comes with its technologies and capabilities layered on top of it. Then VMware comes with its software-defined infrastructure on top of it, and then ABB comes with the power-centric technologies on top of it. That is what the Intel vision was: to bring the whole ecosystem together and build this scalable infrastructure, which can accelerate the adoption of technologies in the utility sector to drive the goals that each utility and each country in the world has for maximizing the utilization of renewables and minimizing fossil fuels.

Christina Cardoza: Great. And Jani, last time you joined us on the IoT Chat was a little bit over a year ago, where you talked about how ABB was approaching this idea of smart grid, and doing things like modernizing substations. So I would love to hear an update of what you guys are doing today, and how you’ve helped customers in this space.

Jani Valtari: Yes, thank you. I believe one year ago we were talking about certain visions, and today we can say that it’s now reality, no longer vision. In general, we’ve been taking, let’s say, a software-oriented approach to grid management for two decades already. So, really trying to shift things from hardware-centric to software-centric, going from model-based towards data-based, and really going from fixed systems to very volatile and fast-changing, but still super-reliable, systems.

And the latest addition to this long chain of many innovations: about one month ago we released the world’s first virtualized protection and control system. The key ABB know-how here is of course the multidecade-long experience in protection and control—in power-system algorithms and power flows and different kinds of fault phenomena. But even we cannot do this whole thing alone. So it’s been very, very good to have solid collaboration.

For example, at the level of hardware development with Intel and Dell—we need really super-reliable hardware to run the algorithms. And then also, we are not experts on virtualization environments, so the good, solid collaboration with Anthony and VMware has also been very, very important for us. And in addition to this product release one month ago, Anthony already stole a nice example of good collaboration with UK Power Networks in the Constellation project. That is where we are now: really bringing this solution to wide deployment.

Christina Cardoza: I love hearing about all of these collaborations that you guys are working with together. But before we get into that, Russell, you mentioned a little bit of what Dell is doing in this space and I would love to hear you expand a little bit.

Russell Boyer: Sure. So, I like to say that we’ve got to create a coalition of the willing in order to innovate. And Intel’s done a great job of bringing together a coalition of various software, hardware, and client companies to really go about putting together a standard. You know, we’ve got to influence the standards. For example, virtualization was mentioned, and virtualization has had tremendous value and benefit on the data center side, and it’s just now coming to the edge. And so we’ve got to influence that particular standard.

The other is that we’ve got to have the collaboration. So, we’ve had some close collaboration with ABB, with their SSC600 software running on the Dell XR12 at UKPN, which was mentioned. The key here is that Dell has continued to make investments in our platforms, making sure they can meet standards like IEC 61850. I think the other key is, as we move forward, we want to make sure that we have a whole portfolio of options to support these modern platforms at the edge.

And one other item I just wanted to add: we have to have close collaboration with all different types of partners. So we are open, too—if there are additional folks that want to innovate with us and work together on these kinds of strategic objectives, let’s talk, because I think it’s really the collaboration that’s going to make this particular project successful in the future.

Christina Cardoza: Absolutely. And, you know, I think the old way of thinking is sort of, you have to do everything on your own and build everything from the ground up, but when you have partners like the ones that we have on this webinar today, you don’t have to reinvent the wheel. And since Jani mentioned how he was working with VMware, Anthony, I would love to hear how else you guys work with others in the industry like ABB and VMware and Dell, and what really is the value of those partnerships and those coalitions, like Russell mentioned?

Anthony Sivesind: Yeah, it’s been invaluable, really—the partnerships we’ve established and the collaboration, not just with the partners you see here, but also with utilities. I want to tip my hat to Intel for engaging all the utilities. You know, a lot of them don’t operate as competitors—at least where it’s still deregulated—so they can come together and work together, and Intel has spurred the industry with a pair of coalitions or alliances: E4S in EMEA, as well as the vPAC Alliance here in America. And it’s really a great chance to build those standard specifications that Russell talked about, and to collaborate with utilities and partners like you see here. So it’s been a driving force, I think, in the industry, and will continue to be, and will help us accelerate what we’ve been talking about here today.

Christina Cardoza: And I think one of the great things of working and partnering with a technology giant like Intel is that Intel brings their own coalitions or ecosystem of partners to really get this done. And it’s not just working with partners, it’s working with systems integrators, system architects to make sure that every piece of this is covered. So, Prithpal, would you like to talk a little bit more about the ecosystem that goes on at Intel?

Prithpal Khajuria: I think everybody has touched on it, but the Intel strategy right from the beginning has been to put the customer first. The utility—make them first, and make them part of this journey from day one—because at the end of the day they have the problems, and they want to buy the solutions for those problems. So we need to have them at the forefront, fully engaged, and then bring the ecosystem together—the best-of-breed ecosystem out there, with their capabilities in each area.

If we talk about ABB: best of breed, with more than a hundred years of experience in the power industry. Look at VMware, which invented virtualization technology. Dell, the leading provider of hardware solutions and the software components around them. So we bring all these best-of-breed players together to create best-of-breed solutions. And that’s what the objective of Intel has been.

And I think Anthony touched on two things. We created two industry alliances focused purely on the power industry. One was the E4S Alliance, which we started in Europe—everybody here is a member of it—which is focused on the digitalization of secondary substations, because they are also at the edge of the grid; that’s where the customers and utilities engage with each other. And then we came to North America, where we saw a bigger challenge in primary substations, and we created the vPAC Alliance, which is focused on the virtualization of protection, automation, and control in substations.

Then it goes back to what Russell mentioned—building that scalable, standardized infrastructure—and once we do that, we can land the applications on top of it. Today’s applications, and applications we have not even thought about yet! But the infrastructure is there now, and things can be added as we go. So that has been the vision of Intel: to bring everybody together, accelerate the adoption of the technology, and deliver the benefits to the utilities and their customers.

Christina Cardoza: So, one thing I’m curious about in hearing all of this is we’ve talked a lot about new and emerging technologies in this conversation, as well as just new partnerships happening, and I’m curious—to really make these coalitions or collaborations work, it seems that everybody needs to be speaking the same language. And I know that can be difficult at times, when you have a number of different standards or technologies that everybody is working on.

So, Jani, I’m wondering if you can talk a little bit about the importance of industry standards, the work and the standards out there, and really making sure that everybody is on the same page to make these big efforts successful.

Jani Valtari: I would say, first of all, we need to have a common vision—which we now have—of what we want to see happening in terms of grid modernization. We need to have a lot of customers on board; the customer is actually the first partner to say what they want to achieve. But we also need to go in a direction where the solution is scalable and can be widely used in different places.

Then we need to do everything based on global standards. In the power sector, the key standard is IEC 61850. And when we all think and agree that we want to follow that standard, that already helps a lot. It has standardized items related to hardware, to software, to communication, and to many different protocols and aspects. So I would say that with this one particular standard, with its many different subsections, as our key center point, we are in a good position to create solutions that can be very widely used.

Christina Cardoza: Great. I think one thing is clear from this conversation is that we’ve only just scratched the surface of what’s possible and what’s still to come. So, Russell, I’m wondering what do you envision for the future of grid modernization, or what is next on this timeline effort?

Russell Boyer: From a grid-modernization perspective, one of the key things is that we’ve got to put the customers first, and we need to educate them—educate them on the new technology. We need to invest in making sure that we can test the technology and prove out how it works, to get the substation engineers and all of the technologists comfortable with the new platform.

We set a goal back in March of 2021 of getting 20 pilots in the first year, and we’ve achieved that. And I think that’s important because we’ve got to accelerate the deployment of this new technology in order to achieve these energy transitions. And so I think it’s critical that we take any opportunity we can to educate, test, and then ultimately pilot this technology, because that will help meet these particular energy-transition goals.

Christina Cardoza: Great. Well, unfortunately we are running out of time, but since this has just been such a big conversation, we’ve touched upon a bunch of different things and it’s such an important topic. I would love to just throw it back to each of you one more time for any final thoughts or key takeaways you want to leave our listeners with today. So, Jani, I’ll start with you on this one.

Jani Valtari: Well, maybe one key message is that the technology is ready for very rapid grid modernization, and we’ll be really happy to engage with our customers and to look together at the best way to take these technologies widely into use in a fast manner.

Christina Cardoza: Great. And, Anthony, any final thoughts or key takeaways you want to leave our listeners with today?

Anthony Sivesind: I’ll echo what Russell and Jani are saying here. We’re ready now. So, we have the technology, we are ready to help utilities in any way that we can to train and learn and bring their teams together. So I would say, please take us up on that opportunity. Let’s work through this together.

Christina Cardoza: Absolutely. And, Russell, what would you like listeners to get out of this conversation and leave with today?

Russell Boyer: If we’re going to achieve these ESG targets, we really have to accelerate the deployment of new technology. That’s the key message from my perspective. And Dell is committed to developing the latest technology to be able to deploy it today.

Christina Cardoza: Great. And, Prithpal, please lead us out of the conversation.

Prithpal Khajuria: Yeah, I think, Christina, my message is to the utilities: the technology is ready. Let’s put a migration plan together—how we can walk you through the journey from a proof of concept to a field pilot to a deployment. That migration plan needs to be stitched together, and Intel and its ecosystem partners are here to help.

Christina Cardoza: Well, I can’t wait to see what else you all do in this space. I just want to thank you all for joining the IoT Chat today. And I urge and invite all of our listeners to keep up to date—visit the Dell, ABB, VMware, and Intel websites to follow how they’re making strides in this space, as well as the insight.tech website, as we cover these grid-modernization efforts today and in the future. Until next time, this has been the IoT Chat.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.