HCI Simplifies Hybrid Cloud Management

The hybrid cloud gives organizations the best of both worlds—leveraging the public cloud for its flexibility and scalability while keeping applications that require more security, reliability, and availability in private clouds or on-premises environments.

But managing environments that combine multiple clouds can get complicated. As the infrastructure expands, organizations can end up with the same problems as legacy environments—application silos, cost overruns, and lack of skill sets and tools to adapt to rapid technology changes.

Data availability, cloud latency, and security concerns can also get in the way of a successful hybrid implementation. Collectively, these problems can slow digital transformation and the adoption of new technology.

But they don’t have to. Implementation of a hyperconverged infrastructure (HCI) addresses these issues by optimizing hybrid environments. HCI delivers availability, simplicity, and operational efficiency by virtualizing all the elements of the data center—storage, compute, networking, and management—in a consolidated package that can be managed on-premises or in the cloud.

“Enterprises are of course looking to consolidate costs in multiple ways,” says Harshad Kolte, Senior Product Marketing Manager at Lenovo, a leading provider of data center infrastructure solutions. “Having everything in a single infrastructure means they can have data consolidation, skill set consolidation, and optimized costs.”

Infrastructure management becomes simpler, says Christopher Brown, Product Management Team Leader for Solutions within Lenovo. “We’re trying to get rid of islands—islands of automation, islands of storage, islands of skill sets. Otherwise, you really risk your infrastructure fragmenting.”

A New Approach to HCI

When first introduced, HCI was used in limited settings, such as edge deployments and test and development environments, but it is now spreading to a much wider range of use cases. “It had some specific niche use cases at the start,” says Brown. “It’s now maturing. It’s becoming more broadly used.”

While some HCI software providers focus on simplifying on-premises infrastructure, Microsoft is taking an approach more suitable to hybrid cloud environments.

For example, Microsoft’s Azure Stack HCI offering has native Azure services support to connect on-premises infrastructure to the Azure public cloud. And Lenovo has partnered with Microsoft to offer a unique hyperconverged solution, ThinkAgile MX.

“In that scenario, we’re talking about customers who probably are reasonably happy with a public cloud, but know there are things they have to keep on-premises,” says Brown.

“The Azure Stack HCI solution,” says Kolte, “mirrors on-premises environments in the cloud. You run exactly the same thing on-prem as in the cloud. The two work together, and instead of doing it at the cloud infrastructure level, you’re working at a hybrid cloud, application-to-application level. So a lot of people will use this HCI implementation rather than some of the others, particularly because the applications they’ve picked happen to run on Azure.”

“It’s a true hybrid cloud approach,” Brown says. “If you’re in an Azure Stack HCI cluster, you can move VMs flexibly on and off to Azure.” In a different scenario, moving applications to the cloud would require extensive rewriting for cloud enablement.

Focus on Applications

Deploying Lenovo’s ThinkAgile MX HCI solution allows organizations to focus on developing business applications and consuming cloud services instead of having to manage infrastructure. The solution leverages Intel® infrastructure technology to provide a turnkey experience.

It allows companies to standardize and accelerate development and deployment of applications across their environment while accessing Azure services from the safety of their own data centers.

“The solution,” says Kolte, “is made to plug easily into the customer’s environment. Built into Azure Stack HCI is Windows Admin Center, which has a simple, wizard-based interface that allows the solution to be integrated into the customer’s on-premises setup.” The Lenovo offering is an integrated system with hardware, software, and support services to ensure compliance.

As such, the solution provides an ideal fit for managing branch office and remote edge environments, he says. Among other functions, organizations can use it for backup and recovery and for what Kolte calls “general virtualization.”

“What I call general virtualization is: I have my legacy applications and they have not been written to be cloud native. I know I want to go there, but I can’t yet, so I want to virtualize them but still also have them on-prem,” he says.

One area where the Azure Stack HCI approach is making a difference is manufacturing, says Brown. A traditional HCI hub deployed on a factory floor would have handled a specific function but lacked built-in analytics capabilities.

“People may want to have a small two-node cluster sitting in each manufacturing site,” which HCI would deliver, he says. “But we can coalesce the information and gain deeper analytics through the Azure ecosystem, along with Lenovo. For example, we can leverage Azure hybrid services with ThinkAgile MX and Azure Stack HCI.” The connection to Azure Stack, he says, enables analytics on data collected from the production site.

The solution also lets organizations leverage Microsoft’s new Azure Arc platform, which allows customers to manage multiple sites and clusters across multiple clouds in a consistent fashion, handling upgrades and enforcing systemwide policies. This makes it easier to get a complete picture of the manufacturing environment and make decisions to improve operations.

Simplified Management

Typically, administrators use Windows Admin Center for management, Kolte says, but he points out that Lenovo XClarity is fully integrated with the Microsoft management tool, giving IT managers the flexibility to manage Lenovo servers as single hosts or as Microsoft Windows clusters.

Customers don’t have to worry about operating system and software updates because Microsoft manages them remotely. Microsoft keeps security up to date, relying on a team of 3,500 security engineers to address cyber risks.

The solution also leverages Intel’s latest hardware-enabled security. Brown points out that Lenovo’s hardware manufacturing process itself is secure, saying that Lenovo has done rigorous security assessments and controls its entire manufacturing chain, eliminating the chance of security issues introduced by a third party.

Brand Value

The Azure Stack HCI solution pulls together some of the biggest technology names to deliver a turnkey solution for hybrid cloud environments. Besides delivering on-premises-to-cloud flexibility, the solution comes with preconfigured, pre-tested NVIDIA Mellanox networking switches and GPUs that accelerate graphics rendering.

As a certified Intel® Select Solution for Azure Stack HCI, Lenovo’s offering incorporates Intel memory and storage technology: Intel® Optane™ Persistent Memory and Intel Optane SSDs. With Intel Optane, the solution can handle more capacity than DRAM at a lower cost and deliver 30% more virtual machines. Intel Optane SSDs operate at the cache tier of the select solution, offering faster caching and storage of data and balancing TCO with performance improvements.

Having such well-recognized names involved in a single solution, says Brown, delivers customers the “confidence of working with the industry leaders.” Customers can count on reliability and top-notch engineering, he says.

Kolte agrees: “Lenovo has been number one in reliability for our servers for the longest time. Microsoft and Intel work together on bringing Azure Stack HCI solutions, and the networking comes from NVIDIA’s Mellanox. We have pre-tested configurations with all those components. That has value, and the brand names of course speak for themselves.”

ABB Talks Smart Grids, Substations, and Security

Why are smart grids suddenly so hot? In short, technology has hit a tipping point, and the digitalization of the grid is dramatically accelerating. Good thing, too, because the growth of renewable power, emerging security threats, and increasing government intervention have all complicated grid management.

In this podcast episode, we explore how modern substations can leverage digital innovation to provide more resilient, sustainable, and secure energy distribution—and make the transition smooth and reliable.

Our Guest: ABB

Our guest this episode is Jani Valtari, a technology center manager with industrial digitalization leader ABB. Jani’s focus is on making electricity distribution as reliable and secure as possible—helping to decarbonize society and improve sustainability. Currently he’s in charge of the R&D activities in this area in Finland.

Podcast Topics

Jani answers our questions about:

  • (01:37) The continuing evolution of the power grid
  • (11:25) The role of substations in advancing the grid
  • (13:48) Thwarting cyber threats with a layered approach
  • (16:33) The value of technology partnerships and customer collaboration
  • (29:54) How to make the transition smooth and reliable
  • (35:39) How the modern grid can reduce our carbon footprint

Related Content

To learn more about the digitalization of the smart grid, read Safeguarding the Grid with IIoT. For the latest innovations from ABB, follow them on Twitter at @ABBgroupnews. 

Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and end users. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode I talk to a leading expert about the latest developments in the Internet of Things. Today I’m talking about the power grid with Jani Valtari, a Technology Center Manager from ABB. I want to know what’s going on with renewables. How can we deal with cyberattacks? Why is digitalization speeding up? And how will all these changes impact the industry? But before I get to those questions, let me introduce our guest. So, Jani, welcome to the show.

Jani Valtari: Thank you, glad to be here.

Kenton Williston: So, what is ABB, and what is your role there?

Jani Valtari: ABB is a big global company doing many things on the energy side for electricity—production, consumption, and distribution. And I’m working for the division of electricity distribution, where we do a lot of work to make electricity distribution as reliable and as secure as possible, so that we are helping the decarbonization of society and improving sustainability. My personal role at this time: I’m in charge of the research and development activities in this area in Finland. So we are developing and creating new technical solutions, so that the security and reliability of the network can be improved even further, and so that we bring a new kind of flexibility and reliability for future needs.

Kenton Williston: What are the important ways that the power grid has evolved over the last couple of years, and what’s behind the latest and greatest of those changes?

Jani Valtari: Yeah, it’s indeed a very interesting time. I think during the past couple of years there have been more changes than in the previous two decades. So many things happening, and, the way I see it, the one big driver is the decarbonization of our society and fighting climate change. It is, of course, an issue that touches all parts of society, and electrification is one big tool to really support this, because electrical energy is one of the few flexible forms of energy that can be produced with a zero carbon footprint. So it’s really one of the great tools to address this challenge. And we see many things now happening because of this. On the production side, we see distributed renewable energy production. On the consumption side, we see demand response, controllable consumption, all kinds of new devices.

We also see new active components connected to the grid. It may be energy storage. We have started to see electrical vehicles and other similar components. So many quite interesting changes. And I think, on top of that, one key thing is digitalization, which is really giving a lot of speed for this change and making it possible for change to happen at quite a fast pace.

Kenton Williston: Yeah, for sure. I should say, actually, I have a degree in electrical engineering, and in school you’re basically learning the same stuff people had figured out in, like, the 1910s, 1920s—not a whole lot has changed in this side of the technology.

Jani Valtari: So, yeah, one of the fundamental things that I see now changing is that, earlier, the production had been very flexibly adjustable, and the consumption had been able to move quite a lot. But now, when we see a lot of changing, intermittent, renewable energy production, now we need to make the system work in a different way—in a way where consumption is controlled, and where we have other balancing devices, like energy storage.

Kenton Williston: For sure. And again, just going on and on about my own life—like, here in California right now there’s a big push to reduce consumption during certain hours of the day. I think it’s becoming really front and center to everyone’s thinking—whether you’re a government or a nonprofit or even just an individual consumer—it has really become a very prominent part of our lives.

Jani Valtari: Yeah, it is. I think it is happening on many fronts. It’s happening on the individual front—consumers. Customers are getting very active. It’s also happening on the government front. Government regulations are more and more pushing in the same direction. So there are many, let’s say, many forces pushing in the same direction, and I think that’s also increasing the pace of change. And at the same time, the available technologies to realize this change are getting more and more mature. I think all of these add up, and then we start to see quite large and dramatic changes.

Kenton Williston: Yeah. And I think that point you made about digitalization really is a key thing that’s starting to push things forward even faster than before. And this is something you see in every industry that digitizes—suddenly the pace of innovation becomes much faster. So what do you see coming next in the grid?

Jani Valtari: Yeah, it’s a very interesting question. First of all, I think the rate of change will further increase. The main reasons are the ones I was already discussing: there are many forces pushing in the same direction, whether individual consumer behavior or government regulations. On the other hand, we start to see mature technologies that can be used. So all of this together will increase the rate of change. And I think one thing it will add is, let’s say, a flavor of unpredictability.

When I look a few years back at what we estimated for the growth of solar PV or wind power or even electrical vehicles—the prognoses five years back were way too modest. Everything has happened much faster than we thought. And I expect these kinds of unexpected things to continue. So whatever we now prognose for the next five years, I expect the change to be even faster than that. And that’s another reason why we really need to take digital technologies very seriously, because they are, at the moment, the only technologies that are flexible enough to really accommodate this and make this transition happen in a smooth and reliable way.

Kenton Williston: Yeah, for sure, that flexibility is really the only way to prepare for the unknown. So, what does that actually look like? How are governments and other regulatory bodies, consumers, all the rest—what are they doing to actually modernize the power grid, digitalize it, and make it into something that’s much more flexible?

Jani Valtari: Well, I would say there are multiple things happening on different fronts. Of course, what we have seen just lately, during the past months, are these different kinds of governmental investment packages and financing schemes that are directly focused on clean transition and digitalization. So there’s also a lot of investment coming into this area.

On the other hand, we see that there are also government regulations putting even more stress on the reliability of electricity. We see that electrical energy, the share of electrical energy, is growing. It means that it’s becoming an even more critical part of society. So it’s even more important that it’s very reliable and robust. And what we see, for example, in Finland is that the penalty costs for not distributing are getting higher. The security-of-supply requirements are getting higher. So it’s even more important that our electricity network is very reliable.

So these kinds of regulations are really targeting both clean transition and reliable electricity distribution. I think these are the main things that are really pushing for this change. Also, what I see is the technology getting mature. There are certain technologies that have been used in different areas—virtualization is one example. Now that they have become mature enough, they are getting more and more use in the energy sector, because they are ready and because we have seen that they are reliable and good enough for meeting these targets. So technology availability is also one of the key drivers.

Kenton Williston: So, what does that look like in practice when you say “virtualization”? When I think of, say, like a data center, I know what that means. So you take what used to be a physical server and turn it into a logical instance that lives as just one piece of software that’s running on a physical server; so you can cram a bunch of what used to be separate servers on one physical piece of hardware. But I’m sure that’s not what you exactly are talking about on the grid! It’s not like you can take your switches and transformers and somehow squeeze them down. They have to be what they are. So what does virtualization mean in this context?

Jani Valtari: Yeah, when we talk about energy or the electricity grid, we often talk about the primary equipment and the secondary equipment. The primary equipment is these switches and power transformers—the devices that actually open or close the circuit, transforming the energy from one form to another. But the secondary system is the intelligence on top of it, which is monitoring all the time at the millisecond level, checking whether things are in good order or whether there’s a fault that you need to react to.

So the secondary system, this intelligence layer—that is an area where the software orientation is growing very rapidly. There’s a history of having a lot of separate, smaller devices with some level of intelligence. Now we see a trend toward more standard devices, where the customization is done more at the software level. We see the introduction of more PC-like technology, or industrial computing technologies. Then, on top of that, we see a trend of going from bare-metal, native software solutions toward virtual images that can be run in many places.

We often talk about topics like location transparency. Earlier, you had to have certain functionality running in one specific place—maybe close to the circuit breaker, close to the place where you needed to operate. And now, with communication and other technologies evolving, you can actually locate this functionality much more freely. You can put it a bit further. You can put it on some server; you can put it, in some cases, even in the cloud, depending on what kind of infrastructure you have available. So it’s really separating the physical and digital worlds even more, and giving much more freedom to quickly react to any kind of change that may happen.

Kenton Williston: Got it. So it’s really about, like you said, taking the measuring, analytics, and decision-making parts and decoupling them from the underlying physical infrastructure, so that you can update what that logic does more readily, or even just move it around.

Jani Valtari: Yeah, exactly. And it is really giving us a lot of flexibility, and a lot of tools to address this big change that we see coming.

Kenton Williston: So can you give me a practical example of what you’ve done with some of your customers to implement this kind of technology?

Jani Valtari: Well, one practical example is, let’s call it, the digital substation, where, in traditional formats, we often have multiple different protection relays, each performing specific protection functionality for a specific component. So we made a pilot where all this functionality was put into one device as a pure software product that we could then update in real time, at a very fast pace, bringing new functionality without making any changes on the physical level. And we have multiple pilots of that kind, where we really go from multiple separate devices into one—or two, if we want redundancy—where all the customization is done in software and the hardware is always the same. So we have done quite many of these pilots in different formats, and we see a lot of benefits there.

Kenton Williston: So it sounds like the substation is actually a pretty important player in how this will be rolled out. Why is that?

Jani Valtari: The substations have been, and I think will remain, very important hubs of the electricity grid—similar to how data centers are the hubs of the information network and base stations are the hubs of the communication network. The substation is and will be a very important part of the electricity grid. And as we see society getting more and more dependent on electrical energy, it’s even more important that the substation is in very good shape—performing well and optimally, with very good, selective protection.

Kenton Williston: Got it. So do you see the function or the role of substations changing significantly as we move forward? Or is this just a matter of making them super reliable and effective?

Jani Valtari: It’s a good question. Maybe I don’t see a dramatic change in the role. I see it more as an emphasis of the role, and then a growing of the role. It has been an important part already, and we see that it will remain very important. So not necessarily changing so much—more an emphasizing of it.

Kenton Williston: Got it. So, one thing that certainly comes up as I think about things being digitized is cybersecurity. And so this is something that has been more and more in the news. Of course, there was a pretty big event here in the States with the Colonial Pipeline. And this is certainly a concern for the grid, because, like you said, our society is becoming so dependent on electricity for everything. So how do you see these threats evolving, and what should the industry be doing to deal with them?

Jani Valtari: Maybe I start by saying that I think this is one aspect of a very interesting paradox, if I may call it that: how to combine robustness and flexibility. On the one hand, we want to have this electricity network and substation as flexible as possible, because we see the growing need for change. On the other hand, we want it to be super reliable, super robust. Often these are a bit of a trade-off—you either keep things very stable and robust, or you allow some flexibility and some deviations. But now we want to do both at the same time. So that is one of those things. And I think cybersecurity is one aspect of this.

So we want flexibility, we want digitalization, we want remote updatability and remote reconfigurability—but all of this needs to be very cybersecure. And then, how to achieve it? Again, there are many things the industry is doing, together with our customers. Of course, one thing is to participate very actively in the latest standardization activities.

We have increased our research and development efforts, increased testing a lot, and brought new technologies into use. The updatability and flexibility of the system also gives us tools, in the sense that whenever something happens, or we notice some need for change, we can deploy cybersecurity improvements all over the network at a very fast pace.

So this kind of technical evolution and standardization is important, and testing is important, but a lot of it is also, let’s call it, social or psychological. We need to increase awareness, share best practices, and really have a solid working model in different places. Often the technology only takes you so far; then you need certain, let’s say, personal guidelines and rules that people really respect and follow. That keeps cybersecurity at a high level, and keeps eyes open while the system is in use.

Kenton Williston: One thing that I’m definitely hearing here is that this is largely a matter of bringing best practices that already exist in the IT world to the grid. Actually, this entire conversation—the idea of virtualizing, of having functionality that can live anywhere from the edge to the cloud, of having an approach to all of your infrastructure that really thinks about manageability, especially remote manageability—these are the sorts of things that have been talked about for a long time in the IT space. So that seems a pretty critical part. And, for that matter, you talked about standards and interoperability. Again, these are all things that have been talked about in the IT space for a long time.

So it seems like a big part of what you’re saying here is that this is a matter of bringing existing best practices from the IT space, and merging those into the older way of doing electrical grids. And that’s really how you get to this new digitized, agile kind of grid. Is that right?

Jani Valtari: Well, to a large extent, yes. There are some aspects of bringing IT procedures and practices to the electricity side, the electricity distribution side. But I would also say that many items cannot simply be copy-pasted from one domain to another, because the requirements and the environment are different. For example, one item is the reliability requirement. Often, when we talk about IT systems or cloud services or other such areas, an availability of 99% or 99.99% sounds like a good number. But when it comes to protecting the electricity network, that’s like nothing—it’s super unreliable. We need quite a few more nines in the equation. Certain fundamental items in the criticality are such that you need to not just copy-paste, but really seriously consider which aspects to take, where we need even stronger requirements, and then new rules. But to a large extent, it is like you said—many areas now used in IT will also move to electricity.

Kenton Williston: That makes sense. So what is this sort of halfway, in-between state—sort of IT-ish, but also keeping in mind that very long string of nines of reliability? How is ABB making that a reality? What are you doing in your own products to deliver that conflicting set of goals?

Jani Valtari: It’s a very good question. Of course, one of the things for a company like ABB—where technological leadership is a very key thing—is to be at the forefront in taking new solutions into use. We have been doing this with, for example, software-oriented, centralized protection solutions. We have been introducing new, flexible, all-in-one devices, where one device can be customized in software for many different purposes. So this is one of the areas where we are active.

Another aspect is, of course, bringing the reliability requirements into the whole development process and the testing of the device—both as a single device and as part of the system. So this kind of very serious and thorough testing and evaluation, and also being an active part in the standardization activities that I mentioned earlier. That’s one of those things.

Maybe another item I would highlight is that it’s very important to be in close cooperation with our customers, and with the whole ecosystem, very early on in the innovation chain. Often, when we start developing new solutions, if something is just copied from somewhere—from the IT side, or entertainment, or the consumer-electronics side—without getting together very early on with the innovation partners and customers to evaluate the critical parts of the solution, we might not get a good enough solution; we might not get a reliable enough solution. Very close collaboration with the innovation partners early on—that’s a very important thing that we do, and will keep doing, so that when these solutions are moved to the energy side, they really take on that flavor of reliability and security at a high level.

Kenton Williston: That makes sense. And it really, I think, speaks to the earlier point you were making about standards and open, shared, collaborative approaches. Is that the key to all of this—that this stuff is really complicated and hard to do, so it doesn’t really make sense to go off and have your own bespoke approach? Everyone needs to work together across the industry, across customers. People need to be coming together with governments. Everybody really needs to be pulling together on this effort to make it fully successful.

Jani Valtari: Yeah, exactly. And, like I was talking about one paradox a while ago, about how to combine robustness and flexibility—I think another similar, interesting paradox is that, of course, the energy side and the electricity network have, let’s say, very high privacy and security requirements for the data. And it’s a closed system, where you need to fully control the whole environment, and be very sure of who can access which data and who cannot.

Then, on the other hand, you need to have open innovation collaboration, so that you can really develop new solutions in the best possible way. So, similarly: a very closed, super-controlled environment, but open collaboration and innovation networks. That’s one of those interesting, nice paradoxes that we are at the moment solving together with other technology providers and our customers.

Kenton Williston: One of the things you said struck me as pretty important to this: the idea of having a single box that can be used for many different purposes. What I’m thinking here is that there’s a foundational level of the technology where you need something that’s extraordinarily reliable, extremely secure, and very easy to manage remotely. And that’s all really tough to do. But if you can build such a box, then the software you put on top of it—that’s where you can add the flexibility. So you start with a really solid foundation—that robust hardware, with a basic software layer on top of it, the OS and so forth. And then you can innovate on top of that, but you need to have that extremely well-proven foundation first, before you can really do anything.

Jani Valtari: Yeah, that is totally right. Another item is the device itself—what the life cycle of the device is, and how robust the devices are physically. If we talk about the data center example, or another similar example, they are often in a place where, if a device breaks down, you have service personnel nearby—maybe even on the site—who can then replace it. But on the electricity grid, a device can be somewhere very far away, and some maintenance person needs to drive a long way—and you have devices all over the network, not just in one specific place. So it’s very important that the device, the electronics, and the physical reliability are at a high level.

But I would add one more point there: I think it’s not enough to say that you have very reliable hardware and then you can just put your software on it. We similarly need to think about software in layers. We need to have a very robust, let’s call it, digital backbone—the bottom level of the software—which keeps the software integrity at a high level. Because when we get to the point where we are updating remotely, reconfiguring, making changes in a live system that is securing and protecting the network all the time, we similarly need a good, layered approach to software, so that we know certain aspects are robust and reliable at all times, even if some other part is changed. So it brings quite interesting needs and new requirements when you look at software.

Kenton Williston: Yeah. This brings me back to the idea of virtualization. There are things you can do, like virtualizing the software that’s running on these boxes, so that you can isolate different functions from one another—so that if someone does gain access to a system they shouldn’t have, and wants to wreak havoc, they can only get so far into the software. And so, to me, it seems like the irony of all the ironies we’re talking about: one of the things you can do to make your system most secure is to not have these highly custom boxes. Yes, they are hard to understand and hard to get into, but once you do get into the system, it’s wide open from there. Whereas if you have a system that has been built to be flexible, you have thought about these security layers from the start, and you are making sure that if there’s a breach in one element, that’s as far as it gets.

Of course, that brings us to things like remote management. One of the things that’s really neat about Intel technology—and I should mention that insight.tech and the IoT Chat podcast are productions of Intel—is that they’ve got things like their vPro suite, where you can remotely discern if someone’s gained access to the system. So I’m going to isolate this—that’s as far as they’re going to be able to get. I can reset it. I can do whatever I need remotely. So you can address these sorts of breaches very quickly.

Jani Valtari: Yeah. We also at ABB have had very good and fruitful collaboration with Intel on these topics. And, like you mentioned, virtualization is one of those items that brings this kind of a layered approach, and improves the isolation of functionality between different functions. And I must add that, of course, cybersecurity—somebody breaching or attacking—is one of the attack vectors, one of the possible areas where things may go wrong. But there are also others that are, let’s say, less dramatic—there can be human errors: maybe there is a software bug, or some configuration is done in the wrong way.

Or we have seen in the past that the system is operating very well, but there is one unexpected event in one part of the network, and it can create a kind of chain reaction that causes unwanted effects somewhere else—if we don’t have a system that can be updated, or where different parts can be securely isolated from each other. So this kind of layered approach—isolation, really thinking logically about how to separate different functionalities—is very important. And virtualization is one good technology for achieving that.

Kenton Williston: Yeah. And I’ve been talking a lot about how this would look from an IT perspective. But you mentioned earlier—this is not the only industry where this sort of digital transformation is happening. And in one of the others—you mentioned the telecom sector—there’s really a lot in common with the grid, in the sense that it needs to be very reliable. There’s this big move from the old way of having very customized hardware, toward more standardized hardware that just kind of looks generically like an industrial PC. And I think a lot of these same principles we’re talking about—virtualization and security and remote manageability and the ability to innovate more quickly—are very present in the telecom sector. And I imagine, with the sort of innovation that’s happening now across different sectors—whether it’s the grid or telecom or industrial automation or whatever—there’s a mutual, shared benefit from the activities happening in all of these different sectors, and from the learning people can share.

Jani Valtari: Yeah, definitely. There are a lot of similarities and the same kinds of trends. And I think it’s another factor that can increase the pace of innovation. Because when many different industries are investigating the same approach, one good solution developed in one industry can quickly be taken up by another. So it really can create a lot of fast changes in society.

Also, about these different domains like telecom and energy—one other interesting item is that they are very dependent on each other. There is strong interdependency. For communication to work, you need electricity, and for the power grid of today and the future to work very well, you need very solid communication. And often the hubs I mentioned—the substation in the electricity network, the base station in the communication network—actually need each other quite a lot. There are often interesting discussions about whether they should actually, at some point, be the same entity.

Kenton Williston: I hadn’t thought about that before, but, yeah, that totally makes sense. So, with that, I want to get a little bit of your perspective on what you think the next steps are for the listeners of this podcast. What are some of the key things they should be thinking about as they consider how to modernize their infrastructure, update their substations, and so forth?

Jani Valtari: Yeah, it’s a good question. Maybe I’ll take one step back: I think one important part, on the national and political level, is governmental regulation. Regulations tend to guide quite a lot of what, for example, utility distribution system operators do or do not do. So it’s very important that these governmental regulations really support taking new technologies into use—support innovation—and do not promote, let’s say, old-fashioned ways of addressing challenges.

Apart from that, I think one important thing is—maybe I’ll call it a bit of a paradox again. I said that it’s important to be open to new technologies, digital technologies—to be, in a way, brave in taking them into use. But you should also have really hard requirements for security and reliability, and do this with partners you really trust—partners with zero tolerance for low quality or insufficient testing. So it’s important to be open to new technologies, but to adopt them in a very controlled manner, with zero tolerance for low quality, together with good, trusted partners who take security and reliability very seriously.

Kenton Williston: Yeah, I totally agree. And, again—especially thinking about the broader ecosystem—we’ve got some articles already on insight.tech about the work that ABB and Intel have been doing together to create these kinds of solutions, with the sort of features you’re talking about. And—you can speak for yourself—but I think the partnership between the two companies assuredly has helped move these solutions forward.

Jani Valtari: Definitely. The partnership with Intel has been very good, I believe—good for both. So we have been making very nice pilots in terms of taking the substation functionality to a totally new software-oriented level, going towards digitalization, going towards virtualization, going towards digital technologies. So, definitely, it has been a very good collaboration.

You touched on the ecosystem—it’s sometimes a bit of a hype word, but it has meaning. Of course, we have talked quite a lot about technologies, and, coming from research and development, that’s something I maybe do too easily—but it’s important to remember that it’s not just about technology. It’s about the networks. It’s about ecosystems—not just technical ecosystems, but business ecosystems. The way I see it, when we see the IT and energy sides converging, we see utilities and telecom operators getting closer to each other. It also means that the business landscape is changing, and we will see some changing of roles. We might see new roles that manage specific digital aspects of the electricity network, bringing new digital services to the energy network—once it has first been built with a really secure layer through which that access can be granted.

So I also see that there will be new roles and new kinds of business ecosystems that will further change the game or change the landscape, but also increase the speed, and bring new players that can bring new innovations to the area.

Kenton Williston: Yeah, for sure. And I think, even just on the basic level of the talent available to move these innovations forward: as these different domains become closer together—utilities and telecom and IT and industrial automation all converging, in a certain sense, on some of the same ideas—I think one of the great things is that people who understand different business models, people who are able to execute different kinds of software models, all these sorts of people, will be able to cross over more freely between these worlds. And there’s going to be cross-pollination, for sure, which will unleash some really exciting new talent and ideas.

Jani Valtari: Yeah, it will for sure do that, but I think it will also push many people to think a bit differently than earlier. So that is one thing—let’s say, the changing of roles, or new roles. Another thing I think is that the meaning of certain words will change. We talk a lot about, let’s say, protection, reliability, or resilience. I feel that the meaning of resilience and security will also change a bit and broaden, because earlier, when we talked about resilience or protection, it was just, let’s say, electrical protection—how to react to network faults.

In the past few years, we have been talking a lot about cybersecurity and how to protect IT systems. But what I see in the future, when these things come closer together and we see new ecosystems, is also a growing rate of change in how the system is set up. So resilience, or protection, will also mean how we resolve the paradox I mentioned earlier: how do we give robustness to the environment while at the same time allowing maximum flexibility and adaptability? So there are also different kinds of what I call resilience—things we haven’t earlier considered as resilience, but that will now come into the picture. I think the meaning of many words will broaden, and security and resilience will mean a bit more than what they mean today.

Kenton Williston: Yeah. It’s going to be a lot more than just having very good locks on your substations.

Jani Valtari: Yeah, exactly. Yeah.

Kenton Williston: All right. Well, with that, I just want to give you an opportunity if you’ve got any kind of final thoughts you’d like to leave with our audience—to give you a chance to share that.

Jani Valtari: Well, maybe I’ll share a quote that I stole from a colleague—a good quote. He once said that electricity distribution, the electricity side, is a good business to be in. And by that I mean that we are seeing quite big challenges in terms of climate change and decarbonizing our society, and the electricity side is on the good side of this. So it’s really a good tool that could help in dramatically reducing the carbon footprint of our activities at the moment.

So green electrification is one of the key items in bringing our environmental impact to a lower level. We have a lot of global challenges now in this area, but I also believe that we do have the necessary building blocks to make the change. And I see governmental changes, and attention moving in a way that makes it possible to really address this. So I feel positive in that sense as well.

Maybe the concluding remark is that, to really make this shift in a good, reliable way, we need, let’s say, systemic resilience—a systemic approach to keeping the system secure—but we also need good, open collaboration. We have it now between ABB and Intel, and with many other utilities and customers. And I think this is a good path, one that brings a lot of good building blocks for carbon-neutral energy systems.

Kenton Williston: Excellent. I’m very excited to see where things go next. But, for now, let me just thank you for joining us.

Jani Valtari: Thank you very much. It was really an honor and a pleasure to discuss this very interesting topic with you.

Kenton Williston: Thanks to our listeners for joining us. To keep up with the latest from ABB, follow them on Twitter @ABBgroupnews, and on LinkedIn at linkedin.com/company/abb. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

The Future of Work Is Hybrid, Digital, and More Personal

As businesses prepare to return to work post-COVID, they will have to consider their employees’ new work expectations. After working from home for more than a year, employees have gotten used to not having to commute, working independently, and having more time to spend with their families. While some people are excited to get back into the office, others want to continue to have remote work as an option.

Businesses are also seeing the benefits of having employees work from home: they spend less on overhead and see an increase in employee productivity.

“Over the past year, organizations of any size have seen the benefits of how a remote workforce can maintain business, but also have a better work-life balance. As such, most organizations are seeking ways to continue to embrace this new norm in an attempt to both retain and recruit talent,” says Bob Bavolacco, Director of Technology Partners Programs at Crestron, a global leader in workplace technologies.

It’s becoming increasingly clear that the future of work will be hybrid, with businesses offering employees the option to return to the office, work remotely, or a mix of the two (Figure 1).

Figure 1. Most company leaders intend to let employees work from home at least part time. (Source: Gartner)

Enabling Employees to Work from Anywhere

One of the biggest challenges for businesses and employees when the pandemic first hit was trying to adapt to a remote world. Businesses needed to figure out how to get employees the right tools and technology necessary to do their work, collaborate, and communicate—all while ensuring security. Employees needed to adapt to a new workspace, mindset, and set of responsibilities.

With a portion of employees now headed back to the office, the new challenge will be ensuring that everyone gets the same work experience no matter where they are working from.

“While almost all organizations immediately needed to go remote at the beginning of last year, we did what we needed to do to sustain how we conducted business in a fully remote workplace. I think the theme during that period was focused on remaining as productive as possible remotely, while doing that ‘transparently’ to all of our internal and external constituents,” says Bavolacco. “Now that the world is opening up again, we still don’t quite know what that means, but we realize that in this new hybrid workplace, wherever we choose to work on a given day needs to provide the same professional, connected experience regardless of location.”

One thing is certain, according to Bavolacco: Businesses will need to adjust budgets and resources to give employees access to tools and ensure those who are working remotely feel connected.

“If businesses don’t provide the tools or platforms, you’re not going to see productivity. It’s going to fall. That’s essentially what we’re up against,” he says. “How do you allow collaboration across platforms, across demographics, or geographically dispersed locations without seeming like that’s the case?”

Employees need to be able to access team channels, project directories, and email, and to communicate instantly with one another without having to schedule a meeting.

The cloud will be paramount to providing a reliable, scalable, and secure way to keep distributed team members together and on the same page when a secure VPN is not available. “Without a cloud-based infrastructure, the maximization of employee productivity and the total efficiency of the systems that support the various organizational needs could never be realized,” says Bavolacco.

A standardized, unified communications tool and platform also needs to be implemented across the organization to ensure team members stay connected and can retrieve and share information in this new hybrid work world. In addition, the platform should interoperate with other platforms so employees can connect and communicate with people outside their organization who might be using different collaboration tools.

For instance, the Crestron Flex platform is a unified communications solution that provides native support for Microsoft Teams, Zoom, and other popular conferencing platforms, as well as support for a BYOD conference solution. This provides employees consistent call, presentation, and video conference experiences on all these devices, no matter where they are, what platform they choose to use, or who they are talking to. The platform can connect with advanced native room systems such as digital displays and high-quality audio technologies so conference members can clearly see and hear one another.

Organizations can also implement a variety of Crestron Flex devices, or allow employees to use Flex with devices they are more comfortable and familiar with. And with Crestron’s XiO Cloud, organizations can deploy, manage, and monitor thousands of devices remotely through a single dashboard.

Redefining and Redesigning Workspaces

Beyond connecting and providing a consistent at-home and in-office work experience, businesses will have to rethink how employees work, engage, and interact when they do come into the building. There are now social distancing, occupancy limits, infrastructure, and health considerations to think about.

Organizations realize they have an opportunity to reduce their office footprint and are starting to investigate new ways to optimize the space. Crestron provides hardware and cloud services that handle remote operations as well as room monitoring. Crestron also provides a home version of its unified communications solutions, called HomeTime.

Through Crestron’s services, employees can see room availability, book a room, or schedule a cleaning service remotely. This will allow organizations to set up temporary workspaces for their employees and keep track of cleaning schedules. Bavolacco describes it as creating a temporary hotel environment in place of traditional offices.

Crestron’s suite of in-room conferencing and control solutions also includes occupancy sensors that integrate with Crestron building and environmental systems, as well as third-party partner room-scheduling applications, which combine to keep track of how many people are in a room to comply with social distancing and room occupancy guidelines. If a space is over capacity, the system will alert facility managers or IT leaders through XiO Cloud to rectify the situation. And on the theme of integration, XiO Cloud also integrates with ServiceNow, so trouble tickets can be created and managed to ensure rooms are always 100% online and securely managed.
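The article doesn't spell out XiO Cloud's integration mechanics, so here is a minimal hypothetical sketch of that alert path in Python: an occupancy reading is checked against a room limit, and an over-capacity condition opens a trouble ticket through ServiceNow's standard Table API. The event payload, room limits, instance name, and credentials are all illustrative placeholders, not Crestron or ServiceNow specifics beyond that one endpoint.

```python
# Hypothetical sketch of the over-capacity alert flow described above.
# Only the ServiceNow Table API endpoint reflects a real interface; the
# event shape, room limits, instance, and credentials are placeholders.
import requests

SN_INSTANCE = "example.service-now.com"  # placeholder instance
SN_AUTH = ("api_user", "api_password")   # placeholder credentials

ROOM_LIMITS = {"conference-2a": 6, "huddle-3c": 3}  # example policy

def open_incident(room: str, count: int, limit: int) -> None:
    """Create a ServiceNow incident via the standard Table API."""
    resp = requests.post(
        f"https://{SN_INSTANCE}/api/now/table/incident",
        auth=SN_AUTH,
        json={
            "short_description": f"Room {room} over capacity",
            "description": f"Occupancy {count} exceeds limit of {limit}.",
            "urgency": "2",
        },
        timeout=10,
    )
    resp.raise_for_status()

def handle_occupancy_event(event: dict) -> None:
    """Check one occupancy reading against its room's limit."""
    room, count = event["room_id"], event["occupancy"]
    limit = ROOM_LIMITS.get(room)
    if limit is not None and count > limit:
        open_incident(room, count, limit)

# Example event, as it might arrive from a sensor webhook:
handle_occupancy_event({"room_id": "conference-2a", "occupancy": 8})
```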

In closing, Bavolacco states: “The new work norm is most definitely one of a hybrid nature. Organizations that are adapting to the new norm need to provide the tools for collaboration to continue—both in-person and remotely—while maintaining a safe environment to work when in-person.”

Going forward, Bavolacco says this will be a learning experience, and organizations and employees will have to continuously learn and adapt to what works best for everyone.

Cold Storage Automation Assures Food Safety, Cuts Costs

Keeping food at the right temperature has always been a challenge, particularly for retailers with dozens of freezers and refrigerators in each of their stores. Most cold storage equipment simply isn’t up to the task, requiring managers to check equipment performance frequently.

As-a-service cold storage automation provides a timesaving solution. New technology gives retailers the shelf-level temperature readings they need to track equipment performance in real time, reduce labor costs, and simplify compliance with government regulations.

Three main drivers for automated temperature detection are cost, quality, and compliance:

  • Cost
    Manual checks drain resources, costing retailers as they pay their workers to perform the checks and deal with any maintenance issues they uncover.
  • Quality
    When a refrigeration unit malfunctions, there’s a good chance food will spoil by the time the problem is discovered. This spoilage results in waste—or worse, customers might purchase compromised products.
  • Compliance
    In many countries, governments have strict guidelines for how to maintain perishable food items, often with a narrow window of required temperatures. But with manual monitoring there’s ample room for error.

Cold storage as-a-service offers retailers the equipment, oversight, and support to tackle these issues. UST, a leader in digital transformation solutions, developed UST Cold Truth, an all-in-one solution that provides hands-off detection and triggers automatic alerts when temperatures stray outside prescribed limits.

Retail Cold Storage Automation at Work

Mydin, the largest grocery retailer in Malaysia, is just one example of the solution in action. The chain uses the solution to monitor refrigeration equipment in its flagship store in Penang.

Before deploying Cold Truth, store managers needed to monitor each refrigerator or freezer in person at least three times a day. These manual checks pulled them away from other duties and left temperature logs vulnerable to human error, putting the store at greater risk of overlooking a problem or failing an inspection.

To eliminate this time-consuming process, compact, low-cost sensors were placed on multiple shelves inside each unit. When a node registers a reading that’s out of range, the store manager receives an instant alert. Plus, Mydin can retrieve all past data from the cloud and send it to health inspectors and auditors as needed.

The retailer realized immediate benefits: Mydin expects the solution to reduce labor costs per store by about 19%. “From day one, you get that benefit because a manual task has become automated,” says Subho Bandyopadhyay, general manager of emerging digital technology solutions at UST.

And over time, the solution provides an accurate picture of asset health, helping companies like Mydin to assess the success of their equipment vendors.

“It helps the project management team, which is procuring billions of dollars of equipment from various manufacturers,” Bandyopadhyay continues. “Having these reports both factors into their buying decisions and helps determine the health of the machines.”

Because real-time monitoring allows store personnel to respond immediately if there’s a problem with equipment, Mydin also says the solution will reduce food spoilage by 36%.

.@USTGlobal Cold Truth makes cold-storage #automation more pervasive and accessible, bringing efficiency benefits to large #retail chains and smaller convenience stores alike. via @insightdottech

From Refrigeration Units to the Cloud

The solution includes battery-powered probes in each unit, ideally one at each shelf level. These sensors gather temperature readings and transmit them over Bluetooth Low Energy (BLE) or Wi-Fi at customizable intervals. The system gateway aggregates the data and sends it to a private or public cloud platform.

“The BLE gateway is important because Wi-Fi connectivity is often poor inside large retail stores,” says Bandyopadhyay. “Rather than trying to connect every node to the cloud, the gateway pulls the data from multiple pieces of equipment and pushes it out together.”
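
In rough terms, the gateway logic resembles the Python sketch below. The node interface, batch interval, and publish hook are hypothetical stand-ins for illustration, not part of the Cold Truth product itself:

    import json
    import time

    BATCH_INTERVAL_S = 60  # hypothetical reporting window

    def collect_readings(nodes):
        """Poll each BLE node once for its latest shelf temperature."""
        return [{"node_id": n.node_id, "shelf": n.shelf,
                 "temp_c": n.read_temperature(),  # hypothetical node API
                 "ts": time.time()} for n in nodes]

    def gateway_loop(nodes, publish):
        """Aggregate many nodes into one upstream message per interval,
        rather than connecting every sensor to the cloud individually."""
        while True:
            publish(json.dumps(collect_readings(nodes)))
            time.sleep(BATCH_INTERVAL_S)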

Once data reaches the cloud, managers can access it through a web dashboard or mobile application. Retailers can set the system to generate alerts when power fails or temperatures fluctuate outside a particular range—and those alerts can even be sent directly to a service desk, prompting technicians to address the issue.
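
As a minimal, hypothetical Python sketch of what such a rule might look like (the thresholds and the notify hook, which could post to a dashboard or a service-desk API, are illustrative rather than UST’s implementation):

    from dataclasses import dataclass

    @dataclass
    class AlertRule:
        unit_id: str
        min_c: float
        max_c: float

    def check_reading(rule, temp_c, power_ok, notify):
        """Alert on power failure or a temperature outside the prescribed band."""
        if not power_ok:
            notify(f"{rule.unit_id}: power failure")
        elif not (rule.min_c <= temp_c <= rule.max_c):
            notify(f"{rule.unit_id}: {temp_c:.1f} C outside "
                   f"[{rule.min_c}, {rule.max_c}]")

    # A freezer that must stay between -20 C and -15 C:
    check_reading(AlertRule("freezer-07", -20.0, -15.0),
                  temp_c=-12.4, power_ok=True, notify=print)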

The solution is offered as-a-service, bundling hardware and software services with installation, implementation, software integration, and ongoing support, lowering capital outlay.

Building for the Future

The UST Cold Truth solution incorporates Intel® technology, providing a platform for today and new capabilities for tomorrow. “The Intel® NUC-based gateway provides optimal system performance plus the ability to add new functionality in the future,” says Bandyopadhyay. “For example, predictive machine maintenance is a direction we’re headed, offering a significant future-proof advantage by reducing the total cost of ownership.”

UST Cold Truth makes cold-storage automation more common and accessible, bringing efficiency benefits to large retail chains and smaller convenience stores alike.

“Every size retailer is within the scope of deployment, whether it’s a gas station with a few dozen refrigeration units or a larger grocery store with many more,” Bandyopadhyay says. Applications will reach far beyond grocery retail. For example, a dairy farm is currently testing the solution, and there are potential use cases in medical settings, property management, and more. The possibilities are truly endless.

Retail Analytics Transforms Data to Insights

Today’s digital signage goes well beyond advertising, advancing business operations with data analytics. “Retailers tend to believe they know their business better than they actually do,” says Styrbjörn Torbacke, Head of iCity Services Europe with Advantech Co., Ltd., a global leader in IoT technology. And he’s right. Brick-and-mortar retailers can no longer afford the risk of not understanding the ins and outs of their business. Competition is cutthroat, and data-driven organizations have a big advantage.

Store analytics are now, more than ever, an indispensable part of a retailer’s operating strategy. The data from retail analytics helps provide insights to improve customer experience, develop smarter operating tactics, and increase revenue. And the right tech stack can mean the difference between surviving and thriving.

It has never been easier for customers to vote with their feet and take their business somewhere else. That is why it is important to remember that satisfying customer needs and wants plays a key role in overall operations. And it’s impossible to fulfill those needs and wants without gathering and analyzing the data that helps retailers better understand what makes shoppers tick.

Major retail businesses like Apple and Amazon can tap into advanced business intelligence. But most stores and the systems integrators that serve them fall behind when it comes to deploying and leveraging technology.

“What’s missing is what I call the democratization of data,” says Torbacke. “This is when data analysis doesn’t just reside at headquarters but is immediately deployed to the operational teams who can act on the information.” Because when stores have access to insights from the different types of retail analytics on-site and in hand, they can make better business decisions through a data-driven approach.

Advanced Analytics Help Break Down Retail Silos

Using technology to collect data isn’t enough; retailers need to combine, compare, and correlate the data to move from information to insights that can improve operations.

“With just one set of data, your understanding is at risk,” says Torbacke. “It is when you start gathering data from multiple sources that you’re able to compare data sets, combine insights into meaningful clusters, and correlate events to get a deeper understanding.”

Take, for example, a store near a bus stop that doesn’t have a shelter. The retailer may get an influx of traffic when it rains, but the people may not buy anything. In this case, foot traffic data doesn’t provide enough information. A complete picture requires multiple data points from edge-to-cloud systems such as the Advantech UShop+ Store Business Intelligence Solution (Video 1).

Video 1. Comprehensive retail analytics help stores optimize store operations. (Source: UShop+)
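
To see why a single data set can mislead, consider a toy Python example with hypothetical hourly numbers. Joining camera-based traffic counts with POS receipts exposes a conversion rate that neither source reveals alone:

    # Hypothetical hourly figures; a real deployment would pull these from
    # people-counting cameras and the POS system.
    foot_traffic = {"10:00": 120, "11:00": 340, "12:00": 310}  # visitors
    transactions = {"10:00": 38, "11:00": 52, "12:00": 49}     # receipts

    for hour, visitors in foot_traffic.items():
        conversion = transactions.get(hour, 0) / visitors
        print(f"{hour}: {visitors} visitors, {conversion:.0%} conversion")

    # A rainy-day traffic spike with flat receipts shows up as a falling
    # conversion rate, not as a sales opportunity.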

If you sell just #hardware, it’s: “What’s your price and how fast can you deliver?” But offering #data insights puts SIs at the helm of value-generating activities. @AdvantechEurope via @insightdottech

IoT Devices Unlock Retail Insights

UShop+ integrates 2D and 3D video technology with POS transaction data. The powerful solution was showcased at the 2024 Integrated Systems Europe event, one of the largest events for AV and systems integrators in the world. A variety of sensors gather information into a central repository that can be deployed on-premises or as an Azure cloud-based solution.

Running on Intel® processor-based devices throughout the store, the system connects and extracts information from these and other data sources to provide a more complete picture. And to comply with privacy regulations, UShop+ can detect relevant shopper characteristics anonymously.

The software stack includes traffic analytics and heatmap technology that can provide retailers with an in-depth knowledge of consumer movement, shopping, and purchasing trends during periods of high or low traffic. This information and POS sales data can be combined to evaluate the store’s KPI performance in real time.

For example, key insights include zoning or dwelling. “Zoning shows the parts of the store where customers spend time,” says Torbacke. “Dwelling reveals how long people stay in a particular area before moving on to the next, and where they just walk through without stopping. This information can be used to optimize the display of goods or adjust pricing strategies.” 
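
A dwell metric can be derived from simple zone entry and exit events. The sketch below illustrates the arithmetic with an invented event format; it is not UShop+ code:

    from collections import defaultdict

    # Hypothetical (zone, shopper, event, timestamp-in-seconds) stream,
    # as might be derived from camera tracking.
    events = [("bakery", "s1", "enter", 0), ("bakery", "s1", "exit", 95),
              ("bakery", "s2", "enter", 10), ("bakery", "s2", "exit", 14)]

    entered, dwell = {}, defaultdict(list)
    for zone, shopper, kind, ts in events:
        if kind == "enter":
            entered[(zone, shopper)] = ts
        else:
            dwell[zone].append(ts - entered.pop((zone, shopper)))

    for zone, times in dwell.items():
        print(zone, "average dwell:", sum(times) / len(times), "s")
    # Short dwell flags walk-through zones; long dwell flags engagement.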

Retail Data Insights in Action

A major European food retailer put UShop+ in place to help drive sales and improve operations. The solution uses a private cloud with edge servers in each of the stores and leverages the people-counting features of Advantech cameras.

The stores have also incorporated environmental monitoring sensors into the platform. “Air quality is a major concern in the area,” says Torbacke. “We’ve integrated UShop+ into the building management systems and can rev up ventilation and fans when needed.”

The solution is also connected to interactive digital signage. “Based on the data gathered, signs can give specific messaging to customers to enhance the shopping experience,” says Torbacke. “And it can market certain promotions or products.”

Retail Data Analytics Opens Opportunities for SIs

Solutions like UShop+ provide systems integrators with opportunities to better serve their clients. While an SI’s business model is typically project based, the solution has the potential to provide a new stream of revenue from data analysis.

As a result, the discussion with the client becomes a different one. If you sell just hardware, it’s: “What’s your price and how fast can you deliver?” But offering data insights puts SIs at the helm of value-generating activities.

“The ability to drive insights positions the SI higher up the value chain than if they were simply installing digital shelf edge labels, for example,” says Torbacke. “Integrators who understand the value that software drives have an advantage.”

While shoppers vote with their feet, retailers aim for operational results. SIs that can guide clients on a transformational journey not only help them respond to customers’ needs; they also earn a seat at the table where technology helps everyone thrive in the new retail landscape.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

This article was originally published on July 30, 2021.

AI-Powered Video Analytics Make Cities More Livable

Despite widespread use of video-based security, until recently its effectiveness has been hit or miss. For example, property owners usually count on receiving alerts when a person crosses a secured perimeter, but an animal could trip those sensors just as easily. As a result, operators often have to sort through false alarms—and manually search hours of recorded video—to find events that mattered.

But AI-based video analytics are changing everything. Using sophisticated deep-learning algorithms, these edge-to-cloud systems watch and analyze video, filter the noise, and inform operators in real time when action is needed.

As a result, security staff can be prepared and ready to respond to valid alerts as soon as they come in. And instead of spending hours “rounding” the premises, personnel can perform more valuable tasks—like interacting with customers.

But improved security is only one benefit of this new technology. By relying on edge processing and limiting the use of personal identifying information, its design also ensures responsible use of surveillance video.

Harnessing the Power of Video

The main advantage of AI-based video analytics over human viewing—and even traditional computer vision—is improved accuracy. A license plate recognition use case is illustrative: Numbers and characters might be vertical or horizontal, and colors, size, symbols, and more differ by state or country.

Traditional computer vision systems can’t surpass 40-50% accuracy in this situation, according to Prush Palanichamy, Vice President of Sales at Uncanny Vision—an AI video analytics solutions provider. And that kind of performance renders the system effectively useless.

Uncanny’s AI-based vision systems, on the other hand, are guaranteed to be 95% accurate. That’s when they really start to become useful, according to Navanee Sundaramoorthy, Co-founder of Uncanny Vision. But site-specific training can raise that figure even further—to 97 or 98%. “And then the system becomes so easy to use, and so effective, customers wouldn’t think of going back to earlier methods,” he says.

But this level of precision has nothing to do with collecting personal identifying information. On the contrary, the system doesn’t care about faces or license plate numbers. Images are rarely even stored, let alone sent to the cloud—which would be cost-prohibitive for both cities and businesses.

Instead, Uncanny converts video into metadata at the edge. Processing video locally means only a few kilobytes of data need to be sent to the cloud for analysis, which serves as the basis for automatically generated alarms. This way customers save money as well as the trouble of securing confidential information.
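
The payload that actually travels upstream can be tiny. A hypothetical Python sketch makes the point; the field names are illustrative, not Uncanny’s schema:

    import json

    # One detection event from an edge model: no image leaves the device,
    # only a compact description of what was seen.
    event = {"camera": "gate-3", "ts": 1626871234, "object": "vehicle",
             "direction": "entry", "confidence": 0.97}

    payload = json.dumps(event).encode("utf-8")
    print(len(payload), "bytes")  # tens of bytes, versus megabytes of video
    # send_to_cloud(payload)      # placeholder for the actual uplink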

With @UncannyVisionAI’s help, SPs can turn a city’s existing #CCTV cameras into cost-effective vision #sensors, improving the operation of entertainment venues, office buildings, and transport hubs. via @insightdottech

Applications for AI-based Vision Are Endless

A customizable framework built with the Intel® OpenVINO Toolkit on small embedded processors at the edge makes that kind of processing possible. In fact, according to Sundaramoorthy, OpenVINO optimization improves performance by four to eight times. This means the system can process video at 30 to 40 frames per second instead of 10, and it can be used for a whole host of applications—like highway traffic monitoring—that wouldn’t be possible without such acceleration.
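
For readers curious what that looks like in practice, a minimal inference loop using the OpenVINO Python runtime is sketched below. The model file, input shape, and frame data are placeholders, and this is an illustration rather than Uncanny’s code:

    import time
    import numpy as np
    from openvino.runtime import Core  # OpenVINO Python runtime

    core = Core()
    model = core.read_model("detector.xml")      # placeholder IR model
    compiled = core.compile_model(model, "CPU")  # optimized for the edge CPU
    output = compiled.output(0)

    frame = np.zeros((1, 3, 320, 544), dtype=np.float32)  # placeholder frame

    start, n = time.time(), 100
    for _ in range(n):
        detections = compiled([frame])[output]
    print(f"{n / (time.time() - start):.1f} frames per second")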

Those applications—big and small—are making public and private life better all the time. For example, some service providers (SP) are already using Uncanny’s solutions to provide additional value to cities on top of the high bandwidth connectivity they’re already delivering.

With Uncanny’s help, SPs can turn a city’s existing CCTV cameras into cost-effective vision sensors, whose insights can improve the operation—and beauty—of entertainment venues, office buildings, and transport hubs.

But even more critical, roadway safety can be increased—by using data on what’s happening here and now to plan traffic activities, instead of a 10-year-old traffic survey. Sundaramoorthy points out that real-time insights are especially important today, given how the pandemic has changed how we live, work, and travel. “Traffic patterns have changed so dramatically in so many cities,” he says, “historical data is practically useless.”

The only way to keep up is with a continuous, up-to-date view of all the relevant metrics: the number of vehicles, pedestrians, and cyclists on the road—as well as their location, direction, and speed. This was the motivation for Uncanny’s traffic analysis use case, which comes with a dashboard that tells cities everything they need to know.

And AI-based video analytics are keeping highways safe, too. For example, the U.S. Department of Transportation is using Uncanny’s solution to let truckers know where they can find available overnight parking spaces. So instead of stopping on the side of the road and jeopardizing their own and other drivers’ safety, they can pull over whenever they need to—without wasting time and gas looking for a spot.

New Opportunities for Systems Integrators

Uncanny makes it easy for cities and organizations to transform their security cameras into smart vision sensors. Typically, local systems integrators (SIs) install the system, and a successful deployment doesn’t require any special knowledge of AI.

“Anyone who knows how to install a CCTV camera can install our system,” says Sundaramoorthy. Uncanny works with various companies—including Lenovo, Asus, and Ingram Micro—which in turn load the Uncanny software onto their Intel® processor-based systems. All that remains is to connect the CCTV camera to one of these smart boxes.

That means this technology could be a boon for SIs, too. In addition to their domain expertise, they can now bring their customers a subscription-based, camera-plus-analytics service. This higher-value offering could serve as a new, steady revenue stream, raise SIs’ value to their customers, and help them transition to a more competitive business model.

The benefits of AI-based video analytics systems extend in every direction. Enhanced performance for the people who use them. Resource savings—time, money, and human capacity—for organizations. And better quality of life for the rest of us. All the hallmarks of a revolutionary technology we can’t afford to ignore.

IoT Systems Integrators Master New Markets

Best practices used to be a recipe for success, but clinging to them today could actually be a path to failure. To compete in today’s marketplace, businesses need to embrace change. The IoT has transformed the landscape by connecting operations and information technology, expanding the scope of what’s possible. Instead of looking for one-off systems, businesses need smart solutions.

Systems integrators could be missing out if they don’t take full advantage of the new opportunities IoT presents. To be relevant in the new marketplace, SIs must shed their traditional business models of selling, integrating, and servicing technical components, and address broader business challenges. They must become solution integrators.

Making this shift requires a “village” of vendors, products, solutions, and logistics. But most SIs don’t have the resources and infrastructure to deliver the transformational solutions their customers are looking for. Fortunately, they don’t have to go it alone. SIs can turn to solution aggregators to provide these capabilities at scale, helping them better serve their customers’ fast-changing business needs with innovative new technologies.

Aggregators that offer professional #logistics services can help SIs not only leverage new #technologies; they can take them into new geographic markets, expanding their businesses. @SYNNEX via @insightdottech

Aggregators are not simply distributors. These value-add partners offer SIs deep knowledge of the IoT space, assembling all of the necessary technology from a wide range of suppliers as well as a host of business services to meet an SI’s needs with proven, use-case-specific solutions.

“End users might not really know their pain points, or if they do, they might not know how to overcome them,” says Matthew Dyenson, Vice President, Product Management – Components & Storage, Synnex Technology International Corp., an Intel® Solutions Aggregator. “Integrators may have a lot of systems, but they may not fit the new market requirements. When it comes to IoT solutions that blur the boundaries between IT and operations, aggregators have strong global networks and can be the voice for the end user.”

Getting Help for Your IoT Projects

SIs can leverage aggregators’ vast knowledge of technology capabilities as well as niche expertise to expand their businesses, bidding on and winning new projects. For example, Synnex recently helped an SI replace and update outdated projector technology for a school district that wanted to better engage its students.

The previous technology required students and teachers to work on interactive whiteboards through touch or a stylus. But the solution was aging and limiting the district’s ability to prepare students for the modern world. The outdated technology was no longer covered under warranty, and projectors were either too dim or missing functional bulbs completely. Additionally, the technology’s touch and stylus capabilities were unreliable, causing teachers to struggle to retain students’ attention and interest.

Synnex collaborated with the SI to create a more modern and interactive solution. With new portable NEC projectors, teachers were no longer tethered to the interactive whiteboard and could better move and engage throughout the classroom. The brighter screens ensured students didn’t have trouble understanding the presentation and teachers could easily convey their messages.

The implementation of an administrator network-based control and asset management system allowed the district to view multiple projectors through a central console and detect early on if any of the projectors needed a new bulb or additional maintenance. In the end, the district replaced and updated 90 projectors and added two portable projectors across four schools, all within budget.

Pairing Solutions with Services

But the school solution is just one example. Synnex collaborates with SIs in a wide range of markets and industries (Video 1) such as healthcare, where it has helped automate the patient process, from registration to prescription fulfillment, and manufacturing, where it created scalable wireless architecture that helps managers make faster decisions. Synnex also offers deployment services to ensure solutions are delivered successfully and on time.

Video 1. Synnex works with partners and manufacturers to provide innovative technology solutions and services across a variety of markets and industries. (Source: Synnex)

Depending on an SI’s needs, providing market-ready solutions is only the beginning of what an aggregator can offer. Many also give access to business-growing tools, such as distribution, logistics, and financial support.

As a Preferred Global Service Provider for Intel, Synnex ServiceSolv:

  • Performs site assessments
  • Configures, installs, and supports IoT devices
  • Offers on-site service capabilities in the United States and more than 180 countries
  • Adheres to OSHA as well as international and regional safety standards
  • Assigns a project manager to every service event
  • Schedules and coordinates against defined standard service or custom SOW
  • Provides certified engineers
  • Integrates with facilities that align with field installation for low-cost deployment
  • Reports on each step of the service process

“Synnex is able to provide an ecosystem of services to partners that are upstream and downstream,” says Dyenson. “Besides business engagement and financial support, we can become partners in warehousing and logistics.”

Aggregators that offer professional logistics services can help SIs not only leverage new technologies; they can take them into new geographic markets, expanding their businesses. For example, Synnex combines its core strengths to help its customers improve time to market and supply chain linkages.

Aggregators Stay on Top of IoT Innovations

To be strong partners, aggregators must stay on the cutting edge of technology. As the number-one Intel volume distributor in North America, Synnex has unique access to the very latest solutions—including a portfolio of Intel® Market Ready Solutions (Intel® MRS) and Intel® RFP Ready Kits (Intel® RRKs).

These Intel-validated IoT solutions provide the recipes to serve local market partners. By collaborating with local SIs and ISVs, existing solutions can be deconstructed, enhanced, and reconstructed to fit local regulations and answer specific challenges. And Synnex offers a broad portfolio of complementary products and services to help SIs quickly ramp up to meet the needs of their customers.

“Synnex not only sells whole solutions; we can introduce new technologies and scenarios to the market,” says Dyenson. “We provide all of the necessary ingredients into a one-stop shopping experience for SIs who want to build the best solutions.”

Advanced Interconnects Yield Line Rate Speeds for 5G MEC

The holy grail for purveyors of live entertainment and sporting events is creating more immersive virtual experiences for audiences and fans. And 5G will play an important role in making it happen, but new network infrastructure will be required.

In many cases, this new infrastructure will be based on multiaccess edge computing (MEC), an ETSI standard that moves networking, compute, storage, security, and other resources closer to the edge. With 5G offering orders of magnitude more bandwidth, plus the ability to tap real-time radio access network information, MECs are poised to deliver lower-latency service than any equivalent degree of cloud processing.

The key advantage of an MEC architecture is that it removes edge applications’ reliance on distant clouds or data centers. In the process, it reduces latency, improves the performance of high-bandwidth applications, and minimizes data transmission costs. According to Benjamin Wang, Director of Marketing at AEWIN Technologies, a networking platforms provider, that’s not the full scope of the potential cost savings.

“MEC sits really close to the 5G base stations,” says Wang. “If you have a powerful-enough system, you can actually integrate the 5G DU (distributed unit) directly along with the MEC stack. That reduces the network infrastructure requirement, saving money for everybody.”

I/O Bound: Improving Interconnects for Edge Network Enablement

So what do these “powerful-enough” systems look like, and how will they differ from traditional networking equipment to meet the demands of 5G MEC?

With MEC equipment expected to deliver the capabilities of a small data center, multi-core multiprocessor systems like the AEWIN Technologies SCB-1932 network appliance are a must. The AEWIN appliance, which can pair with other platforms in a telco server rack, supports dual 3rd generation Intel® Xeon® Scalable processors (previously called Ice Lake-SP) with up to 40 CPU cores.

But with all of the data passing back and forth between 5G vRAN workloads or virtualized network functions like firewalls, aggregators, and routers on an MEC platform, the performance bottleneck usually isn’t processing performance. Often, it is the chip-to-chip and rack-to-rack interconnects used to shuttle information between processors that share virtualized workloads and/or execute interdependent processes and applications.

MECs are poised to deliver lower-latency service than any equivalent degree of #cloud processing. @IPC_aewin via @insightdottech

Until 2017, the Intel® QuickPath Interconnect (Intel® QPI) was used in multiprocessor systems to transfer data between sockets. But today’s 100 Gigabit per second Ethernet (GbE) and faster modules quickly overwhelm QPI’s available bandwidth. The result is systems that are “I/O bound,” where ingress data is squeezed by interface limitations before it can reach the processor, which adds latency.

In response, newer Intel processor microarchitectures include an upgraded socket-to-socket interface called the Intel® Ultra Path Interconnect (Intel® UPI). The low-latency, cache-coherent UPI raises the per-link transfer rate beyond QPI’s, to up to 11.2 GTps, or a full-duplex bandwidth of 41.6 GBps.

But Intel UPI solves only the chip-to-chip data transfer problem. It does not address shelf-to-shelf or rack-to-rack communications. That is the role of PCI Express (PCIe) 4.0, with its roughly 16 GTps throughput per lane. After putting the technology through its paces, Intel designed PCIe 4.0 into its chips starting in 2020, including the 3rd generation Xeon Scalable processors.

As a result, data in systems like the SCB-1932 can be channeled to any one of eight front-access expansion bays for plug-in NVMe storage, application-specific accelerators, or network expansion modules over PCIe Gen 4 (Figure 1). The platform’s eight PCIe 4.0 x8 links deliver data to these modules at rates up to 100 Gbps.

Figure 1. PCI Express Gen 4 support allows the AEWIN SCB-1932 to communicate with network expansion modules at speeds up to 100 Gbps. (Source: AEWIN Technologies)
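
A quick back-of-the-envelope check in Python shows where the 100 Gbps figure fits within a Gen 4 x8 link’s budget:

    # Effective throughput of one PCIe 4.0 x8 link.
    lanes = 8
    gt_per_s = 16            # PCIe 4.0 raw rate per lane (GT/s)
    encoding = 128 / 130     # 128b/130b line-encoding overhead

    gbps = lanes * gt_per_s * encoding
    print(f"{gbps:.0f} Gbps per x8 link")  # ~126 Gbps, headroom above 100 GbE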

These modern interfaces allow systems like the SCB-1932 to keep pace with the requirements of MEC deployments, delivering near-line-rate performance in demanding applications like security and video processing.

Keep the Data Flowing, From the Bottom Up

5G performance improvements aren’t just for your average consumer: Private businesses from stadiums to medical centers to manufacturing facilities are embracing these lower latencies to support new applications and services. Use cases include anything from high-speed video streaming to augmented and virtual reality to real-time biometric authentication. All of these make life more efficient and/or enjoyable, but all of them require new levels of edge network performance.

With MEC installations on the horizon, we can expect these and other new application classes to emerge so long as equipment like the SCB-1932 can keep data flowing at low latencies. That all starts with the most fundamental connectivity technologies—processor interfaces—and works its way up.

Machine Builders Unlock IIoT Sensor Data

Manufacturers of all kinds, from smartphone producers to airplane assemblers, have at least one thing in common: They make huge investments in the machines used to build their products. And there’s no question that uninterrupted production is priority number one.

But necessary predictive maintenance and real-time quality control are often out of reach due to lack of visibility into the inner workings of their complex machines.

Today’s IIoT technologies—AI, machine learning, and edge computing—enable factory operations to be proactive instead of reactive, solving problems before they lead to costly, unplanned shutdowns. That’s why machine builders design IIoT sensors and analytics software capabilities into their equipment.

CANCOM GmbH, a provider of digital transformation solutions and services, works with machine builders to make it possible.

“We help machine builders get data from sensors and analyze that data with their customers. We help them determine what kind of edge devices they need for real-time analytics and what can be processed in the cloud. This enables our customers to deliver a very good added-value service to their customers,” says Markus Fabritz, Digital Solution Sales Manager at CANCOM.

AI and ML Unlock a “Black Box”

A factory machine can contain dozens of sensors or more, which help it operate at the right temperatures, detect excessive vibrations, and move in concert with other machines or robots. But much of the information these sensors collect remains locked inside the machines.

“Manufacturers may perhaps get data for five of a machine’s performance indicators. But without information from all the sensors, they never really gain transparency into operations,” says Fabritz. “The data that manufacturers do receive is often segregated on separate platforms and doesn’t arrive in time to prevent problems.”

For technicians, the situation is even worse. Because factory floors often lack internet access, they must rely on their eyes, ears, and experience to detect brewing problems in a busy, noisy setting.

CANCOM helps manufacturers solve these problems with the CANCOM Smart Product Solution—a preconfigured Intel® processor-based edge computing device. The company offers a broad range of customized services as well. “We offer a blueprint, an IoT architecture to make an individual solution using standardized modules,” Fabritz says. “The machine builder gets the service from us, so they only need to install this device in their equipment. The end customer just has to plug it into the internet, and then everything is set up.”

The CANCOM platform collects sensor data and makes it viewable at the edge and in the cloud through a single interface. AI and ML algorithms can provide operational status in real time. Glancing at the screen, machine operators can fix incorrect settings or tweak them to optimize performance.
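
One simple way such algorithms can flag trouble is by comparing each new reading against a rolling baseline. The Python sketch below is a generic illustration of the idea, not CANCOM’s implementation:

    from collections import deque
    from statistics import mean, stdev

    class VibrationMonitor:
        """Flag readings that drift well outside the recent baseline."""
        def __init__(self, window=200, threshold=3.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def update(self, value):
            if len(self.history) >= 30:  # wait for a baseline first
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                    print(f"alert: vibration {value:.2f} outside normal band")
            self.history.append(value)

    monitor = VibrationMonitor()
    for reading in [0.9, 1.1, 1.0] * 20 + [4.2]:  # hypothetical sensor stream
        monitor.update(reading)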

Sensor Data Eases Predictive Maintenance

The experience of Austrian plastics recycler EREMA is one example of how the CANCOM Smart Product Solution helps the company better serve its manufacturing customers.

EREMA, a producer of recycling machines since the 1980s, wanted to improve its products but wasn’t sure where to start. The large, complex machines the company builds are outfitted with close to a hundred sensors that monitor the temperature of plastic heating, detect vibrations, and measure the speed, direction, and power needs of rotating parts. But because customers couldn’t easily collect and analyze this accumulation of information, they couldn’t use it to improve operations.

CANCOM worked with EREMA to connect sensor data from customer machines at the edge and in the cloud. Today, on-site customer technicians can view and analyze this data, monitoring the condition of machines at every step of the multistage recycling process.

Alerts about failing parts and other emerging problems are sent to service technicians at the customer’s site and to the recycling machine builder. “If there’s a problem, EREMA can send spare parts right away, which can be installed before a breakdown,” Fabritz says. “This saves them considerable time and money.”

And by integrating their solution with machine builders’ information architecture, CANCOM can help make sense of sensor data, transforming machine builders’ customer operations and creating new business opportunities for machine builders themselves.

By producing better, more reliable machines, #machine builders can offer customers reliable uptime, #PredictiveMaintenance, and other services. @CANCOM_SE via @insightdottech

Sensors Track Tool Alignment

Not all factory machines include built-in sensors. In its role as a solution provider, CANCOM can also help with those that do not. Industrial toolmaker Marbach is one example. When one of its customers started to experience issues in plastic cap production, it was challenging to track down the reason. Because the tool itself was contained in a larger machine, it was impossible to see that it was out of alignment.

Working with CANCOM, Marbach installed a handful of sensors on the customer’s tools, connecting them to edge devices that automatically detect when the system is out of alignment. The smart system has both improved product quality and extended the life of the equipment. And by connecting sensor data to the cloud, technicians can monitor the machines remotely, performing fast troubleshooting and reducing downtime without having to go on-site.

Providing a Wealth of New Services

Manufacturers aren’t the only ones who gain valuable insights from solutions integrators like CANCOM. Machine builders analyze information from customer machines in bulk to help them improve future designs.

“Having sensor data from just one machine is like getting an operator with 15 years’ experience,” Fabritz says. “Imagine what you can do with data from 100 machines. Maybe you can build a machine that doesn’t break down anymore.”

By producing better, more reliable machines, machine builders can offer customers reliable uptime, predictive maintenance, and other services, providing the builders with new sources of income. Eventually, some may transition from selling their machines to leasing them—a model that is more profitable for them and also appeals to customers, who don’t have to tie up their capital in machine purchases.

“It’s a win-win,” Fabritz says. “And this is the vision of all machine builders.”

 

This article was edited by Georganne Benesch, Associate Editorial Director for insight.tech.

As the IoT Grows, an Automated Management Platform Emerges

Network management in the age of the IoT has become increasingly complicated and splintered. Different people manage different systems, using tools that don’t speak the same language. The fragmentation has made it difficult for managers to monitor performance and spot problems.

But with the latest unified management platforms, it’s possible to connect and control disparate networks with a single tool. The result? Companies can simplify network management, save money, and roll out new IoT products faster.

Unified IoT Network Management: Finding a Problem’s Root Cause

Today’s networks have become so large and varied that visualizing their structure is all but impossible.

“In a network, you have islands of specialists, or element managers,” says Robert Bschorr, Senior Technical Account Manager at IT services provider Infosim. “No one has an overview because a service or a connection is not built by a single system; usually, it’s the interaction of different systems. If you have a problem, you have to look into all of them.”

Network managers need to find the troubled network segment before troubleshooting can even begin. Most cloud services, including each of the hyperscaler cloud platforms, provide their own management tools. Network managers could run each of these tools, but given the enormous variety of networked devices, this process can take days or even longer.

#StableNet eliminates production bottlenecks, allowing network managers to get products out faster and make money for the business sooner. @infosimdotcom via @insightdottech

“You’re running a ‘zoo’ of different systems,” Bschorr says. And each system has its own user interface and reporting system, making it hard for managers to see the big picture and trace problems to their origin.

With its StableNet solution, Infosim replaces the “zoo” of monitoring and management tools with a single, highly automated platform that incorporates hardware and software from many systems and vendors, enabling managers to find and solve problems much faster (Video 1).

Video 1. The Infosim StableNet platform allows network managers to spot and categorize problems across the network. (Source: Infosim)

“There’s a very simple navigation view, so you can see, ‘Oh, there’s something wrong in that path,’ and then drill down to the root problem,” Bschorr says.

A dashboard prioritizes problems and provides alerts. Managers can also monitor network performance, either as a whole or in segments determined by geography, function, or devices.
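
Conceptually, unification means mapping each element manager’s alarm format into one common model that can be sorted and displayed together. The Python sketch below illustrates the idea with invented formats; it is not StableNet’s API:

    from dataclasses import dataclass

    @dataclass
    class Alarm:
        source: str    # which element manager raised it
        segment: str   # affected network segment or path
        severity: int  # 1 = critical ... 5 = informational
        message: str

    def from_snmp(trap):    # hypothetical vendor trap format
        return Alarm("snmp", trap["ifDescr"], trap["severity"], trap["text"])

    def from_cloud(event):  # hypothetical cloud-monitor format
        return Alarm("cloud", event["resource"], event["level"],
                     event["summary"])

    alarms = [from_snmp({"ifDescr": "core-sw-1", "severity": 1,
                         "text": "link down"}),
              from_cloud({"resource": "vpn-gw", "level": 3,
                          "summary": "high latency"})]
    for alarm in sorted(alarms, key=lambda a: a.severity):
        print(alarm)  # one prioritized view instead of a 'zoo' of consoles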

Saving Time and Money

Using multiple network-managing tools means organizations must train their network managers on all of them. Companies also incur capital expenses from buying and maintaining the tools’ supporting hardware and software. By bringing tools under one umbrella, StableNet greatly reduces these expenses.

For example, a global bank that adopted StableNet lowered the number of network management consoles from 17 to just four. The bank’s initial savings on training, licenses, maintenance, and infrastructure exceeded 4.5 million euros.

Managing Network Scripts

To manage automated processes, networks typically use software scripts, which are written in different computer languages and scattered in various silos. StableNet provides a single repository for scripts, making it easier for network engineers to compare and develop tools, policies, and workflows.

For example, most networks lack a common backup process. “With StableNet, you can collect all the backup configurations in one place and compare them,” Bschorr says. Engineers can select the configuration that works best for their application or use it as a basis to develop a new backup tool, which they can then add to the repository for others to use. The same process can be used for other kinds of automated scripts. “It’s like having a single automation control center,” Bschorr says.
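
Comparing configurations held in one repository is straightforward; Python’s standard difflib shows the principle (the device names and settings here are invented):

    import difflib

    # Two hypothetical device backup configurations from the repository.
    config_a = "hostname edge-1\nntp server 10.0.0.1\nsnmp community public\n"
    config_b = "hostname edge-2\nntp server 10.0.0.1\nsnmp community secured\n"

    for line in difflib.unified_diff(config_a.splitlines(),
                                     config_b.splitlines(),
                                     fromfile="edge-1", tofile="edge-2",
                                     lineterm=""):
        print(line)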

Scaling and Customization of the IoT Network

A network management solution must be able to grow and scale with the business, and StableNet is designed to do that.

“StableNet is a system that scales horizontally,” says Infosim Director of Research David Hock. “It consists of a server, an agent component, and a database component. The server component can be one or more servers, so you can basically build a cluster to accommodate network growth.”

The system runs on Intel® IoT Gateway software, which provides the building blocks for circuit, cellular, and IP IoT infrastructure. Companies can scale their services by adding more gateways. One automobile manufacturer now has more than 25,000 network elements managed by StableNet.

Businesses often customize StableNet’s out-of-the-box solution to suit their individual requirements. If they want to make a relatively simple change, such as adding a new type of device that uses standard network protocols, they can do it themselves. For larger, more complex projects, such as implementing a new application-centric infrastructure, they can get help from Infosim or its partners.

Many companies work closely with Infosim regularly. The aforementioned auto manufacturer meets weekly with Infosim experts to discuss the current state of the network and receive guidance and tips. Interacting with customers also helps Infosim, which uses its knowledge of their needs to develop new StableNet capabilities.

A Corporate IoT Connector

The agility of IT environments has moved into the fast lane with the introduction of DevOps, which brings together software developers and IT operations teams to jointly achieve continuous improvement of IoT products through continuous deployment. For this system to work well, network operators must be able to keep up with fast-moving software developers.

“If the software guys roll out products at a very fast pace, and you cannot keep pace by configuring your network, there will be a production bottleneck,” Bschorr says.

By making configuration and connectivity easier, StableNet eliminates production bottlenecks, allowing network managers to get products out faster and make money for the business sooner. And by providing a common platform where all key business groups can collaborate, it helps companies build a brighter, more agile future.