What the New Intel® Xeon® Processors Mean for Your Industry

Intel® recently launched its new third-generation Intel® Xeon® Scalable processor—formerly known as Ice Lake. Its impact will be felt in fields from AI to security. What will that mean for you, your business, and your industry?

Kenton Williston, Editor-in-Chief of insight.tech, talks about the launch with tech expert Dr. Sally Eaves, and Yazz Krdzalic, Director of Marketing and Business Development at Trenton Systems. As well as taking a deep dive into Ice Lake, they’ll look at some of the surprising new ways Intel is changing its outlook on business collaboration, and what all this means for the IoT space.

Diving Into the Lake

Kenton Williston: Yazz, welcome to the show. So, what does Trenton Systems do?

Yazz Krdzalic: Trenton Systems makes trusted, cyber-secure, made-in-USA, high-performance computing solutions—and that’s across the industrial, military, and commercial sectors. Our systems support critical IT infrastructures around the globe, and we help reduce latency and provide real-time insights at the Edge, while also securing sensitive and confidential information.

Kenton Williston: Sally, welcome to you as well. Can you tell our listeners a little bit about yourself?

Dr. Sally Eaves: I’m a Chief Technology Officer by background, and now mostly do advisory around emergent technology subjects—so, across cloud, cybersecurity, IoT, 5G, AI, and blockchain. I’m also passionate about tech as a force for good, so I have a nonprofit called Aspirational Futures. It’s very much around inclusion and diversity in tech and really building that out, as well as tech change.

Kenton Williston: I want to start by talking about that Ice Lake launch event. What was your biggest takeaway?

Dr. Sally Eaves: I think it was one of the best event presentations I’ve seen in some time. I love the fact there were many real-world, tangible examples in there, as well as research that was underpinning things. I really came up with four areas that came to the fore for me—built-in security, embedded AI, network optimization, but also that “tech as a force for good.” If I have to pick one that brought these things together, I’d actually focus on that last one.

Kenton Williston: From the IoT perspective, AI was one of the things that really stood out to me—a huge, huge emphasis on how much additional AI performance is in these new Ice Lake chips compared to previous generations. And I’m curious what you think about why Intel has such an emphasis on AI.

Dr. Sally Eaves: I was really impressed with, for example, the improved AI-acceleration capabilities. It’s showing it’s a great alternative to GPUs, and other kinds of dedicated accelerators as well. I think something else that came to the fore for me was the fact that AI is bringing together inferencing capabilities across simulation and also data-analytic workloads. I don’t think that gets enough attention sometimes. I think we sometimes look at AI only with attention to analytics.

The other thing that came to the fore was the Intel DL Boost facility. Again, this AI acceleration is so important for areas around, for example, 3D—for gaming, for different use cases across different tech trends. We’ve already seen examples where data scientists are using the new processor to help build out and deploy increasingly smarter models, improving the rate at which they go from POC to production.

Industry 4.0

Kenton Williston: Yazz, Intel is reporting some very impressive gains in performance in these AI, deep learning, machine learning kinds of areas. Where do you see the big wins in terms of how this performance is going to help your customers, and things like industrial and other types of IoT applications?

Yazz Krdzalic: I’m honestly quite tempted to say—everywhere. Because when we think about deep learning and machine learning—efficiency is key. The reason really is that the better and faster you can make predictions, the better the business outcome.

As for the way it will have the biggest impact in industrial or other IoT applications, I would really like to mention the fourth Industrial Revolution—or Industry 4.0, as we call it—that will benefit greatly from this performance boost. You start thinking about smart factories, smart sensors, IoT devices collecting lots and lots of data. And, as Sally mentioned, with AI acceleration already baked into it. So it’s truly a beautiful symphony. You’re receiving a lot of this data and using these complex algorithms to make accurate, calculated decisions at record speeds.

Say you have a train going down the tracks carrying lots of precious, heavy cargo—that train eventually has to stop for a manual inspection, which takes lots of time, lots of resources—simply disrupts the flow. So now you put the 4K cameras, for example, on the side of those tracks with a ruggedized system, such as Trenton’s latest BAM Server using Intel’s third-gen Xeon CPUs.

Now you are able to snap images of the train as it passes by, never stopping, and this image recognition and analysis can quickly scan, calculate, analyze, predict, and make decisions without stopping the train if there is no need. So Intel’s latest CPUs can do this about one and a half times as fast, or almost 50% faster. I mean, how cool is that?

Hardening the System

Kenton Williston: The other thing that really stood out to me was the security features. I’d like to know more about what kind of security features your customers are asking for, and how the new features in Ice Lake might relate to that.

Yazz Krdzalic: How much time do we have to talk about security? Because everywhere you look, that seems to be a major topic now. Our customers are very concerned about it. And we’re not just talking software-level concern anymore—we have to look at the system and the environment holistically. I think a lot of people are referring to this now as confidential computing; we tend to refer to it as system hardening.

At the end of the day, our customers want to be protected from unwanted access to their data. And Intel security features such as TME (Total Memory Encryption), SGX (Software Guard Extensions), and PFR (Platform Firmware Resilience) are just some of those key features that are truly enhancing system-wide security.

Kenton Williston: Sally, I know you were also pretty impressed by the security features.

Dr. Sally Eaves: I couldn’t agree more with the fact that security is front of mind. It doesn’t matter if you’re enterprise or SME. So embedding this in by design—that breeds trust. To an extent, it’s about these confidential computing and privacy-preserving techniques.

I think I’d also mention acceleration as well—Intel crypto acceleration—it reduces the impact of full data encryption, and it increases performance. So that’s hugely important as well for encrypted sensitive workloads.

Think “Speed”

Kenton Williston: Yazz, some of the things I think your customers would care a lot about would be some of the IO upgrades, like the upgraded PCIe. Can you tell me what really stood out to you on that front?

Yazz Krdzalic: These different applications that we’re thinking about in the future—they’re just requiring more and more performance, so having an increased core count does just that. And you mentioned PCIe Gen 4 support. So, one of our newest servers has 11 of these x16 PCIe Gen 4 slots. That increases system-wide performance.

And Sally mentioned this—AI acceleration is already built in. The system supports Intel Optane persistent memory 200 series, which is the next-gen persistent memory. It increases agility and access to more data that’s closer to the CPU. Again, think speed here.

Kenton Williston: Over the last couple of years Intel has really been broadening its vision—not just thinking about itself as a chip company, but putting more and more investment into software frameworks. The frameworks that they built—called OpenVINO and oneAPI—are particularly important in actually enabling you to use these high-performance features. So I’m curious how the folks there at Trenton are actually using these frameworks, and how your customers are using these frameworks to take advantage of the next-generation features.

Yazz Krdzalic: The key takeaway here for me is that Intel is building the hardware and the tools for customers—OEMs such as Trenton, and the very engineers using it. So it just goes to show how meticulous Intel really is when it comes to building quality product, and that they’re dedicated to enabling engineers to get work done better, faster than yesterday.

It really spans across that entire ecosystem—from us as a manufacturer of these systems, to the customers that we sell them to. You have performance, security, and everything we’ve mentioned before, but you also now have a lot of these tools that enable you to do your job better, faster.
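For a sense of what working with one of these frameworks can look like in practice, here is a minimal OpenVINO inference sketch in Python. The model file name, input shape, and preprocessing below are placeholders for illustration, not a Trenton or Intel reference implementation.

```python
# Minimal OpenVINO inference sketch (illustrative only).
# "defect_model.xml" is a placeholder for a model already converted to OpenVINO IR.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_model.xml")       # load the network description
compiled = core.compile_model(model, "CPU")       # target the Xeon CPU directly

# Stand-in for a preprocessed camera frame (batch x channels x height x width).
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

result = compiled([frame])[compiled.output(0)]    # run one inference
print("Predicted class:", int(np.argmax(result)))
```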

Tech for Good

Kenton Williston: At the end of the day, it’s the real-world benefits that matter—can you more readily catch a defect, or increase the security and safety of your system, or whatever it might be. Sally, that was something that stood out to me about this launch—the emphasis on real-world benefits. And I loved how Intel brought in all of its customers and partners. So I’m curious what you think about this different philosophical direction Intel is taking.

Dr. Sally Eaves: I love that there was a focus on supporting customers—reducing complexity. There was integration—bringing together software, silicon platforms, packaging, process-at-scale manufacturing, and really showing how to tackle the problems that organizations are suffering from. I love the fact that we were talking about how to support when we go into the performance stage of things.

It felt like a real development across the portfolio. It feels, again, more of a systems-level solution approach. So I think that’s very, very important to support organizations, support customers—reducing complexity.

And going back to “tech for good,” listening to community—there’s a sustained commitment there. I believe wholeheartedly that leadership in technology goes beyond the tech—it’s the people behind it, it’s what you apply the tech for. We’re seeing that it’s not a one-off, it’s not an add-on—it’s truly embedded, and it’s supporting projects. And again, it’s supporting collaborations as well—it’s bringing people together, it’s co-creating solutions, bringing technology and talent.

Kenton Williston: Yazz, you’re an Intel customer—are you seeing any difference in the experience in working with Intel?

Yazz Krdzalic: So, Intel over the years has invested so much, and it seems to have just doubled down on the strategy to improve the products, services, processes, and procedures, and the Intel brand. It truly feels that the mission has become a betterment of all, rather than just the company.

At Trenton, we’re currently working on the sixth early-access program to collaborate with Intel on the next-gen technologies, and we’re truly excited to be part of that story and effort, and look forward to supporting our customers with Intel’s latest and greatest for generations to come. And so Intel is also determined to make their partner ecosystem a collaborator in tomorrow’s technologies, and Trenton Systems is truly honored to be part of this mission.

The Future Is Integrated

Kenton Williston: I’d like to wrap up, Sally, with a question to you: What are some of the key trends we might want to keep an eye on as we’re moving forward?

Dr. Sally Eaves: I think number one is the rise of social business. I thought the launch was a huge statement around that, and embedding impact, embedding community, embedding collaboration at the very heart of digital transformation. I think we’re seeing a continued confluence, basically, across four key areas—hybrid cloud; AI; network and cloud; and 5G deployments.

And, again, this flexibility around performance is going to be hugely important going forward as these trends continue. Other areas to look at—the data center of the future looking incredibly different; storage and memory increasingly disaggregated; security being architected at the chip level and continually enhanced.

And this increased flexibility and this bringing together across hardware, software—this integration—and now obviously applications and services as well. So I’m really excited about where we’re going here, but the future is definitely integrated.

Private 5G Networks Drive IoT Innovation

Private 5G networks are arriving at a critical time, as digital transformation drives an ever-growing need for visibility and intelligence across the enterprise. But the path to private 5G rollouts can be bumpy. The primary challenge is ensuring high system reliability while simultaneously minimizing total cost of ownership (TCO).

In this article, I’ll describe a solution from Kontron that addresses these challenges, but first, let’s consider some of the driving factors behind the emergence of private cellular networks.

As telecom companies roll out 5G networks worldwide, sectors from manufacturing to transportation, communications, entertainment, and public safety are eager to develop applications taking advantage of its speed, capacity, and low latency. With organizations racing to develop solutions ahead of competitors, the 5G market is expected to eclipse $4 billion in annual spending in 2021 (Figure 1).

Figure 1. The 5G market is expected to eclipse $4 billion in annual spending in 2021. (Source: The Fast Mode)

Growth of this solution segment will accelerate as 5G NR (New Radio) networks rapidly gain recognition as an all-inclusive critical communications platform. Industry standardization of essential features such as MCX (Mission-Critical PTT, Video & Data) services and URLLC (Ultra-Reliable Low-Latency Communications) will further enhance the value of these technologies for IoT applications.

Private 5G networks can do even more. With their high bandwidth, low latency, and excellent security, these networks are bringing the benefits of cellular infrastructure to a closed-circuit environment. The IoT applications for this new technology are virtually endless—and they include cutting-edge innovations like fully autonomous robots.

But the path ahead also contains stumbling blocks. Costs can be daunting. Network reliability can be a problem. Edge applications at industrial plants or in the field must be able to withstand harsh environments. And organizations must ensure that they’re keeping their own data—and that of their customers—private and secure at all times.

Fortunately, modern technology is creating solutions for these problems, with ruggedized hardware that operates anywhere and software platforms that support virtualization and containers, allowing companies to deliver distributed applications at reduced cost while improving security and data privacy.

An All-in-One Edge Solution

Organizations that lack the time or expertise to cobble together a high-performance edge solution that meets industry standards can turn to Kontron, which has developed a comprehensive package for IoT users.

“Incorporating the insights of our teams and partners, we have developed a holistic solution that includes computer hardware, the operating system, virtualization layers, and sometimes even the application software from third parties,” says Wolfgang Huether, European OEM sales director for Kontron Embedded Computers GmbH.

The Kontron CG2400 compact server meets Network Equipment-Building System (NEBS) Level 3 requirements for delivering server-class performance in uncontrolled environments and conditions including high earthquake propensity and high ambient temperatures.

The server uses Intel® Xeon® processors for easy scaling and a Kontron motherboard with a life expectancy of more than five years. “Longevity is important to our customers,” Huether says. “It provides them substantial cost savings over the frequent re-qualification requirements of ever-changing IT-class equipment.”

The CG2400 systems come with built-in redundancy for fans and power supply units, a significant advantage for equipment deployed in far-flung locations or factories loath to cease production for maintenance and upgrades.

Flexibility and Scaling

Beneath its tough exterior, the Kontron device contains a highly flexible software platform. Working with embedded systems provider Wind River, Kontron engineers incorporated the StarlingX platform, which is supported by the OpenStack Foundation and facilitates the integration of new applications.

With StarlingX, companies can develop applications with the software of their choice, and deliver and manage them through containers, which provide for easy, centralized control and scaling. Latency is reduced, and the finer controls do a superior job of providing privacy as data flows through the network.

“StarlingX includes all the tools to manage and orchestrate the edge cloud and control your applications, and it also provides the means to perform firmware and security updates,” Huether says.
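StarlingX manages containerized workloads through Kubernetes, so day-to-day control of an edge application can look like ordinary Kubernetes operations. The sketch below uses the standard Kubernetes Python client to scale a deployment; the deployment and namespace names are hypothetical, not part of Kontron’s or Wind River’s documented tooling.

```python
# Illustrative sketch: scaling a containerized edge application with the
# standard Kubernetes Python client. The names below are hypothetical.
from kubernetes import client, config

config.load_kube_config()                  # use the edge cluster's credentials
apps = client.AppsV1Api()

# Scale the (hypothetical) "vision-analytics" deployment to three replicas.
apps.patch_namespaced_deployment_scale(
    name="vision-analytics",
    namespace="edge-apps",
    body={"spec": {"replicas": 3}},
)
print("Requested 3 replicas for vision-analytics")
```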

Monitoring and Maintenance

Edge monitoring is vital for keeping applications and services running continuously. Kontron’s solution gives managers good visibility into hardware and software performance.

“The system monitors health status, and if something fails, it sends an alarm that propagates through the software layers to inform the right people,” Huether says. “If a service technician needs to come to the site, they know upfront which parts were damaged and what to bring to fix the problem on the first visit.”

Improving Emergency Response

Among the many users of edge solutions are first responders, who can better coordinate their operations with swift, reliable communications. In an emergency response, teamwork and efficiency are essential, whether the problem is local or widespread.

To promote cooperation, Airbus recently adopted a CG2400-based professional mobile radio application solution, which securely delivers information over large-scale networks to public safety officials around the world. The Airbus system serves 2 million mission-critical users in 80 countries, including first responders, healthcare workers, transportation officials, and military and police forces. Outside of emergencies, responders can use the solution to provide more reliable communications during regular operations.

Providing consistent communications isn’t easy for railroads, where trains constantly pass in and out of telecom network boundaries. But operators and conductors can obtain consistent mission-critical communications with Kontron General Mobile Radio Service. Similar to the Kontron emergency response communications system, it provides reliable voice and data transmission whether the train is in transit or chugging through a whistle-stop location off the beaten track.

Enabling an IoT World

So far, nascent 5G technology has provided only a modest bump in speed for consumer devices in the few areas where it has been deployed. But as telecom providers hand the keys of the IoT to factories, vehicle manufacturers, transit systems, hospitals, stadiums, and other large-scale users, revolutionary new applications—and the edge devices that support them—will soon start to prove their worth, Huether believes.

“5G infrastructure is a key enabler for the Internet of Things, and I see the edge server as a critical component of global IoT solutions,” Huether says.

SIs: Fast Track Your IoT Development Efforts

What does it mean to have an architectural approach to the Internet of Things? And how can systems integrators use it to help their customers accelerate digital transformation? Kenton Williston, Editor-in-Chief of insight.tech, talks to Savio Lee, Senior Business Leader for Digital Transformation at Ingram Micro Canada, to get the answers.

You’ll find out how to fill in the gaps in your technology stack, and dig into the ways you can rapidly create proofs of concept—without blowing your budget.

Getting the Architectural Perspective

Kenton Williston: Savio, welcome to the show. What is Ingram Micro doing to help systems integrators through the digital transformation journey?

Savio Lee: So, at Ingram Micro we play the critical role of an IoT aggregator and an orchestration platform. If you look at IoT as a whole, it creates new opportunities, but with new opportunities also come new complexities. From an aggregation point of view, this role just aligns very well with our core function as a distributor, where we function as a one-stop shop for all the IoT needs from an ecosystem perspective.

We also function as an orchestration platform to unify and pre-integrate the ecosystem. And by doing so we are doing a lot of the heavy lifting, so that by the time it gets to our partners they can focus more on solutions and selling business outcomes.

Kenton Williston: I think what’s interesting about what I’m seeing Ingram Micro doing is that you’re really thinking about the approach to the Internet of Things from an architectural perspective. Can you give me a sense of what it means to have an architectural approach to IoT?

Savio Lee: If you ask a network vendor what IoT is, they’re going to tell you it’s all about the PoE switches. You ask a cloud company what IoT is, they’re going to tell you it’s all about a graph, right? I am not saying they are wrong. They are right. But the problem is each of them is approaching IoT from their own lens, and defining IoT in their own terms. And because of that, it created a lot of confusion in the market in terms of knowledge and skill sets.

What we saw was that we really need to take an architectural approach to IoT. And what that means is, not looking at IoT from a particular technology stack. Architectural means holistic, end-to-end, flexible, right? And also comprehensive. By taking an architectural approach, we start with trying to define what the challenges and objectives of the end customers are like.

And then from there we try to understand—okay, what are the current processes? What is the current system that’s in place? Because IoT is not always about rip and replace. A big part of IoT to achieve ROI is about working with the existing system. Where are the gaps? And what can we do to fill those gaps—but, most importantly, connect all the dots together.

Kenton Williston: So it sounds to me like a big part of this is that, if I’m thinking about this from the perspective of a systems integrator, I might have expertise in certain applications. I might have expertise in certain technologies. And what I can get out of working with Ingram Micro is sort of a partner in the space who can help me fill out all the rest of it, so that I don’t have to learn it all myself.

Savio Lee: Absolutely. We have all types of partners. We have partners that want end-to-end solutions from Ingram Micro that may be out-of-the-box, or it could be an integrated solution that is built by Ingram Micro that involves multiple vendors. We also have partners that are specialized in a certain level of the IoT technology stack, but they don’t have the complete picture. So our approach in terms of partnership—we are very flexible, and we recognize that there’s not really one-size-fits-all.

POC on Demand

Kenton Williston: One of the things that I think is really interesting about what digital transformation really means is this idea of very quickly trying ideas out and moving on if they don’t work—doing a lot of proofs of concept to see if a certain idea is even going to be feasible. But if I’m a systems integrator, my customers are going to want my assistance putting together these POCs. How am I going to be able to very rapidly respond to those demands?

Savio Lee: Some customers are looking for IoT solutions, but they don’t want a permanently installed IoT solution. So what we have come up with is a program called IoT on Demand, which is essentially a pre-built, custom-selected set of hardware for different use cases and verticals that customers, or our partners, can leverage to allow them to conduct POCs rapidly. And at the same time, if they choose to buy it, because it’s designed to be flexible it allows the partners to easily repurpose and redeploy the solution and the infrastructure for different use cases.

Kenton Williston: And what about the financial elements of this? Are you doing anything particular to address that end of the challenge?

Savio Lee: We also have what is called a POC on Demand program, where you can turn any budget that you have for conducting POC into an OpEx model. We have a large pool of IoT devices—pre-built solutions that allow our systems integrators to conduct POCs rapidly without making significant investment. And when the POC is done, they simply have to return the equipment and everything to Ingram Micro.

Kenton Williston: This is kind of a win-win-win scenario: the end customer gets to explore all of these ideas very quickly; the systems integrator gets to assist them and help them arrive at a final architecture that they want to deploy without making a huge investment; and then the systems integrator and Ingram Micro itself get the benefit of retaining that business over time. And once that proof of concept becomes a real, scalable, deployable sort of idea, then there’s that revenue stream available.

Savio Lee: Exactly. By taking away the need for owning hardware on a permanent basis, what that essentially does is it frees up the budget to conduct multiple POCs simultaneously, as opposed to just one POC simply because they have to buy the equipment.

Going the Last Mile

Kenton Williston: There’s a lot of glue that needs to be put in between to make all these parts work together, and to surface actionable information to the end customer. So is there anything that IoT systems integrators can do to make the process of bringing everything together simpler?

Savio Lee: One of the key requirements to building an IoT practice as a solutions integrator is that you must have software development resources or skill sets in-house as well, right? Which the traditional IT systems integrator does not possess. So as part of our architectural approach to IoT, we focus heavily on data, because we understand the value of IoT is in the data. And in order to mitigate the need to have software development in-house, we launched what is called Project Last Mile.

Project Last Mile essentially allows end customers, or our systems integrators, to easily and quickly build custom IoT applications on our IoT platform with little-to-no software coding required. And because our IoT software platform is microservices-driven and it’s modular in nature, we have gone a step further to pre-build templates for different use cases and verticals—different types of IoT solutions that our partners or systems integrators can simply download and customize the last 10%. The reason we pre-build some of these templates is that, as important and as cool as IoT is, the other missing part of the equation is domain expertise.

And on the topic of domain expertise, we have what is called the IoT Co-Creation program. The IoT Co-Creation program is really our strategy of how we bring in domain expertise to build various applications, and help our systems integrators and their end customers derive additional value from the data.

The Holistic Approach

Kenton Williston: I’d like to dig a layer deeper, and talk about some of the specific elements of the technologies and expertise that need to go into overall architecture. Fundamentally, if you’re going to do anything in the IoT, it’s all about taking real-world data and digitizing it. You talked earlier on about this idea of orchestration. So what does that mean? And how does that relate to deploying IoT systems?

Savio Lee: We looked at IoT as a whole, and identified the areas where there are challenges that Ingram Micro could bring value to the table. The number one challenge is instrumentation of the physical world. As much as 70% of IoT projects fail simply because you cannot get data. That challenge is about reliable hardware and quality data.

The next piece is, with great complexity also comes lack of industry standards. So what we have is an orchestration platform that connects our ecosystem of hardware vendors together. And that itself allows us to address the challenges of lacking technology standards in terms of protocols, and allows us to normalize—to unify the data, and send it to any applications, or to our own applications. Our orchestration platform unifies the ecosystem to create one common data layer.

The next thing is we also have what is called a plug-and-play approach, which allows end customers and systems integrators to easily onboard IoT devices by simply scanning the QR code. So no longer do you have to go find a serial number and type it in, which can be time consuming and error prone as well.

Learn as You Grow

Kenton Williston: I’d love to hear your thoughts on the step-by-step approach that systems integrators can take to expand their capabilities—expand the opportunities they can address without biting off too much at one time.

Savio Lee: The quickest route to market is really understanding that the customer wants end-to-end solutions. And the good thing is, we have pre-built, out-of-the-box solutions that allow you to easily sell to the customers—allow them to try it, along with our various innovative services and programs to help support you all the way from the sales to implementation and post-sales.

Our approach is really a learn-as-you-grow type of thing. Over time—hopefully with our systems integrators—they will realize that once they understand what is required to succeed in IoT, then they can decide what role they want to play in IoT. Do they want to focus on—if it’s an IT systems integrator—do they want to just focus on IT, and be the IT of IoT by focusing on the network and infrastructure for IoT?

Or do they want to be a solutions integrator over time by building additional—and hiring additional—resources in-house, such as software development, and so on? And there are going to be guys who say, “You know what? We don’t want to be the solutions integrators; we want to focus on the data because that’s ultimately where the value of IoT is.” And they want to build a data practice by helping the customer derive additional value from the data, and to drive organizational change in efficiency.

Kenton Williston: Any questions that you wish I had asked you?

Savio Lee: I think one of the key messages I want to get out there is that—to our systems integrators—IoT is a journey and not an overnight thing, right? Because we have created that ecosystem, done the job of unifying the ecosystem, innovating at every layer of the IoT technology stack along with the capabilities for our IoT applications.

What that means is that by partnering with Ingram Micro, we have essentially fast-tracked your IoT practice by a minimum of two to three years. And the best part is, you can start your IoT practice with little-to-no investment because of our high-touch, white-glove approach. We’re actively out there doing a lot of co-selling, engaging in any IoT opportunities that you may come across, so that you are really learning as you grow.

Cut Data Center Costs with a Cloud-Native Database

Cloud-native technologies have revolutionized the data center, offering dramatic advantages in performance, resilience, and cost. But for many organizations, the path to a fully cloud-native, software-defined data center (SDDC) has been blocked by vendor lock-in.

This issue has been particularly pernicious for the relational database management system (RDBMS). Some popular databases are stuck in a pre-cloud mindset, making them inflexible and costly. But migrating away from these databases is tricky, because they are the foundation of mission-critical operations.

That’s why TmaxSoft, a provider of IT modernization solutions, decided to create a new approach. Its cloud-native Tibero RDBMS was designed specifically to offer a smooth migration path by offering compatibility with legacy databases along with robust migration tools.

The result is a dramatic increase in flexibility while cutting TCO. Indeed, organizations as diverse as Kia Motors, the Government of Gujarat, and Brazilian pension fund FUNCEF have already achieved up to 90% cost reduction in database licensing and maintenance year over year.

Time for a Cloud-Native Virtual Database Platform

To see how such results are possible, let us consider what is required of a truly virtual, cloud-native RDBMS. As with all cloud computing initiatives, it will be critical to migrate from a current database to this virtual one gradually and under complete control. This necessitates a hybrid approach, capable of running both on enterprise premises and in the cloud at the same time during the transition.

There are a few other key requirements for development of such a platform:

  • It needs to be highly compatible with popular databases to be a simple drop-in replacement for legacy systems.
  • It needs to guarantee application compatibility without modification.
  • Security needs to be at least as robust as that of the legacy systems being replaced.
  • It needs to accommodate the large data lakes and data warehouses that many enterprises depend on.
  • It needs to be multi-threaded to maximize performance.
  • To take full advantage of cloud architecture, it should feature active clustering technology to optimize reliability.
  • It should offer a substantially lower total cost of ownership (TCO) than legacy systems.

All in a Completely Open Platform

Setting it ahead of its competitors, Tibero from TmaxSoft is a database platform that meets all of these requirements (Figure 1). Designed to fully exploit all advantages of the cloud, Tibero also runs locally so enterprises can easily and gradually plan and migrate from one to the other without business disruption or major code changes.

Tibero features include multi-threaded working processes; shared memory with database and redo log buffers; and a monitor process.
Figure 1. Tibero is a virtual database designed to take full advantage of cloud architectures (Source: TmaxSoft)

Unlike competing platforms, Tibero is a fully open system. This makes it compatible with a wide variety of popular database utilities and middleware.

It also fits with the tools that IT and DevOps teams like to use for their greater agility and flexibility—both in development and deployment. This helps avoid undesirable vendor lock-in, which prevents enterprises from using software produced by anyone other than the database vendor. Not only does this reduce costs, it enables enterprises to benefit from the broad variety of open-systems software available to them.
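As a simple illustration of the drop-in idea, the sketch below uses Python’s generic pyodbc module: the application code stays the same, and only the data source name changes when pointing at Tibero instead of the legacy database. The DSN, credentials, and table are placeholders and assume a Tibero ODBC driver has already been configured.

```python
# Illustrative only: the same generic ODBC code path can point at a legacy
# database or at Tibero simply by changing the data source name (DSN).
# "TIBERO_DSN", the user, the password, and the table are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=TIBERO_DSN;UID=app_user;PWD=app_password")
cursor = conn.cursor()

cursor.execute("SELECT COUNT(*) FROM orders")   # unchanged application SQL
print("Order count:", cursor.fetchone()[0])

conn.close()
```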

In addition, Tibero allows IT teams to maintain control of data security by limiting access to specific individuals or roles, selectively allowing them to use only specific data within a database. Plus, Tibero provides excellent support for country-specific encryption algorithms—making it an ideal global solution.

Providing Stable RDBMS Service in the Cloud

To provide constant, consistent performance without downtime, even during component failures, Tibero relies on its exclusive “shared-everything” architecture: Tibero Active Clustering (TAC).

Multiple instances of TAC take advantage of the basic nature of cloud computing as they combine their processing power to increase performance. Since database processing is clustered, operations continue even when a node fails thanks to seamless, instantaneous failover capabilities.

Failover capabilities are extended by Tibero Standby Cluster (TSC), which immediately uses “redo logs” to replicate the exact structure of the primary TAC cluster. In case of site failure, TSC takes over and continues delivering database services without interruption.

Since the performance of any RDBMS is directly affected by the quality of the storage it uses, Tibero includes Tibero Active Storage (TAS), a file system and volume manager whose clustering function works directly with TAC to manage shared disk space. With two-way and three-way mirroring for superior data protection, TAS striping reduces I/O latency by load balancing across all disks.

To strengthen data protection, Tibero features a fully integrated Recovery Manager (RMGR) that provides full and incremental online backup with automatic recovery. Multi-node Parallel Recovery accelerates crash, instance, and media recovery by performing simultaneously on all nodes.

The Importance of Smooth Cloud Migration

Early cloud adopters were often confronted with errors and other challenges caused by trying to migrate to cloud too quickly and without sufficient planning. With Tibero, enterprises can operate databases both locally and in the cloud, enabling them to carefully plan and migrate gradually under full control.

Tibero also provides a complete set of fully integrated tools that enable easy and reliable migration. These tools, all part of the T-Up Suite, perform all required analysis, migration, and verification functions that comprise an effective transition. T-Up Analyzer is used prior to migration to assess the compatibility of various workloads, providing a comprehensive report defining the scope of the migration and indicating potential compatibility risks. T-Up Migrator simplifies and automates the creation of target objects and the transfer of data.

DBAs Embrace Tibero SysMaster DB

In addition to developers and operators, database administrators (DBA) appreciate the power and flexibility that Tibero delivers. Typically, DBAs manage multiple databases and prefer a single platform to monitor and manage all of them. With the Tibero SysMaster DB, DBAs can effectively manage the operation and configuration of users, schema, storage facilities, and servers. Multiple dashboards available in the platform enable them to visualize related information across databases. This is the preferred platform for real-time analysis in a single view.

Completely Compatible and Cloud-Ready

Tibero is cloud-delivered and features a simple licensing model similar to other Software-as-a-Service (SaaS) subscription services, eliminating the need for installation or configuration—making deployment fast and easy.

Whatever your legacy database, Tibero delivers the increased performance, reduced cost, and enhanced data resilience enterprises seek from a truly cloud-based virtual database. As a result, you can migrate with confidence and use Tibero with your preferred open-systems tools.

Is Time-Series AI the Next Megatrend for Industry 4.0?

Think machine vision is powerful? Wait until you see what time-series AI can do! Ideal for analyzing non-optical data such as voltage and temperature signals, time-series AI is poised to play an outsize role in the factory of the future.

In this podcast, we explore the possibilities with Dr. Felizitas Heimann, Head of Product Line for Embedded Industrial Computing at Siemens. As one of the biggest players in industrial automation, Siemens has already built considerable expertise in this burgeoning technology. Join us as we discuss:

  • The most promising applications for time-series AI
  • How to deploy this new technology into legacy infrastructure
  • How systems integrators and machine builders can successfully deploy time-series AI

Related Content

To learn more about machine vision, read Q&A: Is Time-Series AI the Next Megatrend for Industry 4.0? For the latest innovations from Siemens, follow them on Twitter at @SiemensIndustry.

Transcript

Kenton Williston: Welcome to the IoT Chat, where industry experts examine the technology and business trends that matter for IoT developers, systems integrators, and end users. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Today I’m talking trends in industrial AI for 2021 and beyond with Dr. Felizitas Heimann, Head of Product Line for Embedded Industrial Computing at Siemens. Among other things, we’ll explain why time series AI will play an outsized role in the factory of the future. And we’ll look at the ways systems integrators and machine builders can set themselves up for success in the increasingly complex world of AI, deep learning, and machine learning. There is so much to discuss, so let’s get to it. So, Felizitas, welcome to the show.

Dr. Felizitas Heimann: Hello, Kenton. It’s a pleasure.

Kenton Williston: So, tell me a little bit about your role at Siemens.

Dr. Felizitas Heimann: Well, I am responsible for our product line of embedded industrial computing, which means basically our box-and-panel portfolio. That means leading the product management, and also leading the technical project management, which is a very nice combination of responsibilities because there’s so much you can do and decide on and execute. So it’s really fun, and it’s currently an amazing situation to be in, in factory automation, because there’s so much going on.

Kenton Williston: That’s great. And with that, I’d love to just jump right into the topic of today’s conversation—because there is so much to talk about—and talk about what’s happening in AI. So, what do you see as the biggest trends in shop floor AI for 2021?

Dr. Felizitas Heimann: So, when thinking about the current state of industrial AI, I’m thinking of a hype cycle. I think about it in a positive way, because the best part of a hype cycle is always the valley of disillusionment, where real applications are constantly starting to pop up and be spread and multiplied—that’s definitely the sweet spot of any innovation coming through. But you were asking about the nearby trends, and I see three of them in the near future. First, concerning applications and use cases. I mean, what is closest to being applied on a big scale is probably AI-assisted machine vision—for example, for optical quality inspection—and that’s for several reasons. AI-based image analysis is already quite well understood, it’s comparatively easy to apply, and the quality of results is easier to control than for other AI applications, as you’re dealing with only one kind of data source. Especially in industrial environments, you cannot allow for any unnoticed errors.

So being able to control the quality of your tool is a crucial feature. And that’s why I assume here we’re going to see the fastest progress in terms of spreading applications on a large scale. Second, progress in hardware. We’ll see more and more dedicated or adapted shop floor AI hardware appearing—mainly PC-based, as almost all industrial microprocessor companies have started to offer AI accelerators that are now getting applied in devices like industrial PCs. And that’s one important step to enable the deployment and the application of AI models on the shop floor—also for brownfield applications.

And third—and this is a very important one—shop floor computing architecture. We’re seeing IT/OT architectures at scale recently, where the most prominent example may be industrial Edge computing. That means connecting the shop floor assets to central management; providing additional computing power resources; but, compared to cloud computing, keeping as much performance as necessary on the Edge—so, meaning close to the machine or the line. And due to the nature of how AI works, this is a mandatory prerequisite. So these would be the biggest trends that I would see in the very near future, where AI will be supported and will start to make progress.

Kenton Williston: So, I want to come back to one of the first things you said, which was about the types of AI that are being deployed right now. So, like you said, it’s really the visual AI that has taken off. And I think there’s something of an irony there in that, from an office perspective, you might think that vision systems would be some of the most complicated and difficult to deploy, because vision is such a complicated process from a human being’s perspective. But, from a mathematical perspective, in some ways it’s one of the easier sorts of AI, because it’s relatively straightforward, like you said—it’s a uniform kind of data that you’re dealing with.

And, I think, importantly, it’s something that, although it’s a complex process, humans intuitively understand vision. And there’s many other kinds of AI, and particularly time series AI, that I think are maybe not as intuitive to judge the results, but I think are very important. And I want to talk a little bit about that today. So can you tell me a little bit about what time series AI is? And why you think it’s going to play such a large role in the smart factory.

Dr. Felizitas Heimann: I can, and you put it totally right. Especially in AI, there is a phenomenon: what looks easy is the most complicated thing; what looks complicated in the beginning, like machine vision, is eventually even the easier thing. Why I believe that time series is actually the most rewarding use case of AI in the long term is the ubiquitous availability of the data, and the massive unused potential to gather insights out of it—I mean, non-optical data are everywhere. In a line you have a continuous inflow of data from different sources. For example, you can think of electrical data like current and voltage spikes, temperature data, mechanical data like vibration, expansion, flow speed, pressure. There’s a massive unused potential in using these data.

Kenton Williston: Yeah, absolutely. And something that I actually have some experience with myself, at much, much earlier stages of my career (I hate to think that it’s numbering in decades now, how long ago this was). But there’s so many different kinds of systems—whether mechanical systems, chemical systems, whatever—where there’s just a lot of action happening that can’t really be evaluated by vision systems. So, reflecting on some of my ancient experience—for example, I used to work at an engineering firm that designed petrochemical plants, and so there’d be all sorts of chemistry happening and different flows happening, different pipes that needed to be measured, what kind of temperature and velocity, and is there cavitation happening—or any of these sorts of considerations.

And, at the time—and this is, again, getting to be decades ago—everything was done in a very simplistic fashion. It was obvious to me then, and even more obvious to me now that AI is a little bit more of a viable concept, that with some more advanced algorithms the efficiency of the plant could be improved, the likelihood of failures could be reduced, and so forth. So, could you highlight for me some of the examples that you have in mind when you think of where time series AI could be applied?

Dr. Felizitas Heimann: Yeah. We have a great example in one of our own plants, and that makes it of course easier to talk about it as well. One example from one of our own manufacturing sites: out of a data analysis of a whole PCB assembly line, the algorithm that had been developed was able to judge with which confidence an extra X-ray analysis of a PCB board is necessary, or not, in terms of quality control. With this AI solution, the purchase of an additional X-ray machine could be avoided, saving several hundred thousand euros while maintaining the same level of quality. There are many examples that are going in this direction. The main challenge for the time series application is: the more complex the input gets, the less easy it becomes to understand the model and the dependencies. So, like you were saying with your petrochemical facility, you can have a vast amount of sensors, so it becomes more important to test the model, and also to permanently monitor its KPIs to confirm the model is still acting as it is supposed to.
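The logic Heimann describes boils down to a confidence gate: the model scores each board, and only boards it cannot clear with high confidence are routed to the X-ray station. Here is a minimal sketch of that idea, with an entirely hypothetical model and threshold.

```python
# Minimal sketch of a confidence-gated quality check (illustrative only).
# score_board() stands in for a trained model that returns the probability
# that a PCB needs closer inspection; the 0.95 threshold is hypothetical.
def score_board(line_measurements: list[float]) -> float:
    """Placeholder for a trained time-series model's defect probability."""
    return sum(line_measurements) / (len(line_measurements) * 10.0)

def needs_xray(line_measurements: list[float], confidence: float = 0.95) -> bool:
    """Route a board to X-ray only if the model cannot clear it confidently."""
    p_ok = 1.0 - score_board(line_measurements)
    return p_ok < confidence

boards = [[0.2, 0.3, 0.1], [0.9, 0.8, 0.95]]
for i, measurements in enumerate(boards):
    verdict = "send to X-ray" if needs_xray(measurements) else "pass without X-ray"
    print(f"Board {i}: {verdict}")
```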

Kenton Williston: Yeah, for sure. And I think one of the big challenges there, is data that’s maybe not so intuitively understandable to a human being, unlike an image or a video. But it’s also many different sources of data, that you might want to combine to make some assessment—like vibration in the machinery or the electrical load, or all kinds of different things you might want to monitor simultaneously. And reaching an intuitive understanding of: what do all these things combined mean? is quite difficult. So it’s a perfect opportunity to ask a machine to do the job. But it also means that it’s something that’s not immediately obvious how you would do that.

Dr. Felizitas Heimann: This will require a long way to go. Honestly, there is a rough road ahead of anyone who intends to apply that.

Kenton Williston: And I think also it brings to mind, to me, something we’ve talked a lot about on the insight.tech program, which is just the generational shifts. So, I know here in the US, and there in Germany as well, there’s a large demographic bubble happening, where this baby boomer–sort of generation is very quickly retiring. So you have a lot of human expertise that’s very quickly leaving all sorts of industries, and a lot of accumulated wisdom as to how the different equipment in the plant operates and how to understand when there are problems, just by years and years and years of experience. So I think it’s pretty essential to start making some AI models that can capture some of this expertise in a way that will allow the next generation of the workforce to be successful.

Dr. Felizitas Heimann: That’s, of course, true on the one side. On the other hand, I strongly believe that humans will be required in the future as well, and experienced humans who know what the models are doing, and are not just handling black boxes but have a deep understanding of what’s actually going on in their line to be able to judge.

Kenton Williston: For sure. And that leads me to another topic I wanted to discuss with you, which is on this subject of having human beings who understand what’s happening. I think one of the other big challenges that I see—it’s both a challenge and, I suppose, a benefit of the way things are evolving—is there’s this huge trend towards IT/OT convergence, and I think that the benefits of this are pretty powerful from the enterprise side. For example, you can gain so much better visibility and understanding of what’s happening in your facilities. And from the OT side you can get access to just these amazing resources—the data centers and clouds and things like that. But it’s challenging, I think, for this convergence to happen, because you do have two worlds coming together that don’t necessarily understand each other very well, and then you add AI into the mix and it gets really complicated. So how do you see this IT/OT convergence factoring into the evolving role of AI?

Dr. Felizitas Heimann: So, I would like to narrow down the benefit of IT technologies in the OT world to just a simple statement, and that’s: increase options while improving resource efficiency. And, as you have just mentioned, both sides have a tremendous potential in the combination of the two. And if we’re bringing that back to our AI topic, especially for a neural network AI application, you have to differentiate between two kinds of activities. Namely, training a model with historical data, where the result is already known; and inference, which means using the model to judge new data sets. And the training accounts for a small percentage of the overall usage time of the model only, but during that limited time it requires orders of magnitude higher computing resources. The inference, on the other hand, is running permanently with a much lower, but continuous, performance requirement. So you need to calculate this continuous computing workload for inference, and you run it on a computing resource close to where the data are generated.

And, additionally, you plan model training and retraining effort. And you want to assign this either to an on-premise server or somewhere in the IT department, or completely off-site on a cloud server, for example. So one of the core elements of IT/OT convergence from that standpoint is the industrial Edge management—as its core domain is to connect these two layers of computing availability, and making them convenient to use for an industrial automation engineer in a way that he or she does not have to be an IT expert for it. So, otherwise cost-efficient deployment of AI would not be reasonably possible. But that’s only one part of the story.

The IT/OT integration, and especially the industrial Edge, additionally also provides the option of new business models. For example, as a machine builder, as a systems integrator, you might want to offer predictive maintenance of the machines or lines you have sold anywhere in the world, and you might want to use AI on that device to support the customer with information. The industrial Edge device then supports you or your customer to collect and pre-process the necessary data for that, close to the asset that you want to monitor, and helps to transmit it to your central system. So all use cases like this are based on the industrialized availability of IT technologies, and especially convenient and secure availability.

Kenton Williston: Yeah, absolutely. And, of course, one of the things that I think is really important here is what you mentioned earlier in your introduction—how much the computing hardware has changed. And part of what’s made all of this possible is the fact that IT-style compute hardware has become so much more powerful—is having so much more built-in AI capability. So I’d love to hear a little bit more about how you think these sorts of advances in industrial computing hardware have changed the AI landscape.

Dr. Felizitas Heimann: To answer that, maybe we have to jump a little bit into what AI is about in terms of computing. And due to the nature of the mathematical model of the neural network computing behind it, it’s all about parallelization, computing speed, and accuracy. And for the parallelization, you’re talking about the numbers of computing cores. And for the computing accuracy, you need to consider which kinds of data you need to handle. Machine vision is the simpler case here, because vision usually works with integer values, because the pixel input from the image is usually normalized in the same manner by the RGB value spectrum.

So all calculations can happen in the integer range and can be ideally processed with the high number of compute cores of a GPU. Time series data, instead, can come from different sensor inputs that at one point in time can have a dynamic range that differs from the range in the next period of time. Also, due to these different ranges of accuracy that the individual sensors provide, your pre-processing and your normalizing efforts get much bigger. So with GPU computing technology, you will probably not be able to handle that at all times. For multisensory time series applications, on the other hand, data input means handling dynamic and different data ranges.

So all input data need to be permanently normalized as a pre-processing step, making the inference calculation much more complex—to handle the accuracy, integer values might not be sufficient. So CPU-based, multi-core computing resources are what you will go for with these kinds of use cases—you will find them in industrial computers. To scale efficiently, you need to choose your hardware based on two factors: the performance required per single inference, and the necessary number of inferences in a given time. So, coming back to your initial question of what has changed in industrial computing hardware, to choose the right computing hardware you’d want to look into the number of sensors, data rates, sample rate, the normalization technique, and the algorithm to be used, and with which KPIs you intend to control the quality of the model.

To give you a feeling of today’s industrial PC range—our IPC range currently goes from dual-core Intel Atom up to dual-socket, 28-core Xeon processors. And our fanless IPCs can mostly be equipped with a small AI accelerator card. And the largest rack systems can support up to two NVIDIA GPU cards. And we are continuously increasing our portfolio to meet the rising demand for more and more AI-specific hardware.
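To make the preprocessing and sizing points concrete, here is a small sketch of per-channel normalization for a multisensor time-series window, plus the back-of-the-envelope inference-rate figure Heimann alludes to. The sensor count, sample rate, and window length are illustrative, not drawn from a Siemens reference design.

```python
# Illustrative sketch: per-channel (z-score) normalization of a multisensor
# time-series window before inference, plus a rough inference-rate estimate.
# All numbers are hypothetical.
import numpy as np

SENSORS = 16            # e.g. current, voltage, vibration, temperature channels
SAMPLE_RATE_HZ = 1_000  # samples per second per sensor
WINDOW_SECONDS = 0.5    # length of the window fed to the model

window = np.random.randn(SENSORS, int(SAMPLE_RATE_HZ * WINDOW_SECONDS))

# Each sensor has its own dynamic range, so normalize every channel separately.
mean = window.mean(axis=1, keepdims=True)
std = window.std(axis=1, keepdims=True) + 1e-8
normalized = (window - mean) / std

# Non-overlapping windows imply the required inference rate for this one line.
inferences_per_second = 1.0 / WINDOW_SECONDS
print(f"Window shape: {normalized.shape}, "
      f"required inference rate: {inferences_per_second:.1f}/s")
```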

Kenton Williston: That’s great. And, of course, it’s not just the compute hardware that I think is changing and making new possibilities happen, but there’s all sorts of things happening in the networking space as well. It’s been interesting over the last couple of years to just watch this transformation. Again, I think back to the old days from my own experience, and how really isolated any kind of automation might be. I mean, I started my career back in the PLC days, where programming looked more like circuitry. So it was kind of primitive, but the communication between the systems was even more primitive—if it even existed—and you’d have all these different network standards, and it was very expensive to connect things together. And of course that’s changing a lot now—there is a move towards more standardized kinds of networking with technologies like TSN. And then, of course, there’s a lot of excitement, and I think it’s merited, around 5G. So I’m curious what you see as the potential for these new network technologies as we move forward.

Dr. Felizitas Heimann: What would we expect from 5G on the shop floor in the next period of time? High-bandwidth wireless communication, high local availability, high flexibility. And if I project what to use that for in the beginning—especially as there are still a lot of brownfield applications—if you think about applying AI, a local 5G network will be quite a nice way to equip a brownfield site—to connect assets and sensors to an industrial IoT environment.

Kenton Williston: Yeah, absolutely. And I think one of the things that makes that really great is when we think about AI, and you mentioned earlier the importance of having really good compute capabilities at the Edge, because you need to get close to these data sources. One of the reasons for that is because there can be an incredible amount of data that you want to analyze, and that’s particularly true in systems that are driven by visual data. But then when we start thinking about time series AI in particular, there are many, many examples where the bandwidth is quite a bit lower, and it’s like we’ve been discussing—more a matter that you want to collect a lot of data from different sources, and/or you might be able to use a single stack of compute hardware to run analytics for many different pieces of equipment on the shop floor. And I think, in this case, technologies like 5G will be extremely useful for enabling you to do some sort of centralized inferencing. What do you think about that?

Dr. Felizitas Heimann: Exactly. And there the great benefit of TSN will also come into play, especially when you want to connect time-sensitive data within your network without a big setup effort. And translated to AI, again, this could be the case when the data inflow to your inference model comes from these distributed sources and requires high precision and reliability on the timing. So I believe both technologies will broaden the opportunities a lot to set up reliable, real-time processing of data for AI applications.

Kenton Williston: I’m glad you mentioned that real-time aspect—it’s really critical because, of course, it’s not just a matter of making some conclusions about the data that you’re seeing, but also taking some immediate action in many cases—for example, shutting down some machinery before damage occurs. So having that real-time responsiveness is quite critical. And you mentioned the different kinds of compute hardware that Siemens offers and, of course, one of the things that’s been pretty cool here recently is the so-called Elkhart Lake product launch. The Atom x6000 series has some of these really important real-time capabilities built right into it, which really helps support some of these use cases. In fact, on the insight.tech webpage we’ve got a really nice presentation you did about this platform and what it can do in the Siemens industrial PC platforms.

Dr. Felizitas Heimann: Yeah. It’s probably quite easy to hear from that presentation that it’s going to be one of my favorite devices in the future. In any case, our upcoming generation of IPCs will feature incredible opportunities, especially for AI applications—and for any other application, of course, as well. But as we’re talking about AI today, we’re doing a lot to have systems that really match the use case, considering it from a customer perspective. Especially the Elkhart Lake platform will be really cool.

Kenton Williston: So, as cool as that is, all of this great compute hardware doesn’t really mean very much until you can actually program it and make use of it. And I’d love to hear your perspective on the kinds of platforms and tools systems integrators and machine builders need, to be able to successfully create and deploy time series AI.

Dr. Felizitas Heimann: I have a personal opinion about tools, and that comes from experience. For any tool it’s garbage in, garbage out, and this is also true for AI models. So the key thing to get right or wrong at the start is the data preparation, before thinking about any tool—and also being able to think through what the training process will do with your data. There’s a small anecdote, a true example, which highlights that quite well. We had a case where the model capability seemed excellent—it separated good from bad training pictures 100% of the time. Unfortunately it could not at all reproduce this in real operation. So, what had happened? All the good sample pictures had been taken at the same time, and—especially—at the same light settings; and the bad samples had been taken at any time, under different optical conditions. So, in the end, the machine learning model had not trained itself to identify the defects, but to identify whether the light settings were the ones that apparently had resulted in good or bad quality.

So, probably everyone who starts working with this matter falls into these traps at first. And it reminds me very much of the early days of numerical simulation—where everyone was quickly able to generate colorful, seemingly reliable, but totally misleading results. So the way to be successful is to know what you’re doing, and what you can possibly expect. And for that, the most important tool—we talked about it already—is people. And for a systems integrator, I would definitely recommend developing industrial application engineers who come from the automation domain, who are familiar with OT requirements, and who want to adopt the new technology—and this is no matter of age.
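
One way to catch the kind of trap described in that anecdote is to audit the training data for confounds before training. The sketch below compares capture metadata between good and bad samples; the file name and column names are hypothetical placeholders, not part of any Siemens tooling.

```python
# Hedged sketch: look for capture-condition confounds in an image training set.
# Assumes a metadata table with one row per image; columns are hypothetical.
import pandas as pd

samples = pd.read_csv("training_metadata.csv")   # columns: file, label, capture_hour, exposure

# If capture hour or exposure differs strongly between the "good" and "bad"
# labels, the model may learn the lighting conditions instead of the defect.
print(samples.groupby("label")[["capture_hour", "exposure"]].describe())
```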

Kenton Williston: Yeah, absolutely. People who really understand, sort of on an intuitive level, how these systems work. It’s really great, because, like we talked about with vision systems, that’s something that, although there are pitfalls—like you mentioned with the lighting, which is a pretty funny example—it’s sort of intuitive to a human being what the machine is seeing, and then you get into these time series things where it’s voltage or temperature or whatever—it’s not so intuitive. So, very important to have someone who already understands what the data means.

Dr. Felizitas Heimann: Exactly. And I believe in the midterm things will get a little bit easier for the, say, general automation public to handle, because what we will see are more ready-to-use applications evolving. For example, here in Germany—sorry for the German word—our Mittelstand, our small and medium enterprises, who really have fantastic domain know-how and are sometimes world leaders in their small domain. I would expect companies like these to deliver really great, highly specific solutions, and then roll them out on the basis of the big platforms—like, for example, our industrial Edge management. And by that, things will get easier to apply and to handle even if you’re not a machine learning specialist.

Kenton Williston: Yeah, for sure. I think we’ve seen that happen already in the vision space, where now there are open platforms like the Intel OpenVINO toolkit, where you have a lot of existing models that you can use as a starting point, and a lot of community knowledge about how to use them and how to avoid pitfalls like the lighting one. And I think we’ll definitely see the same sort of thing happen in other areas, like time series AI, as well.
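
For readers who want a feel for what starting from an existing model looks like, here is a minimal sketch assuming the OpenVINO Runtime Python API; the model file and input shape are placeholders rather than a specific pretrained model.

```python
# Minimal sketch of running a pretrained vision model with OpenVINO Runtime.
# The IR file name and the 224x224 input shape are hypothetical.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("defect_classifier.xml")      # pretrained IR model (placeholder)
compiled = core.compile_model(model, "CPU")           # run on an industrial PC's CPU

image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # already preprocessed input batch
scores = compiled([image])[compiled.output(0)]
print("class scores:", scores)
```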

Dr. Felizitas Heimann: Exactly. And both on the hardware and the software level, there’s a great advantage to open architecture. Our industrial computers, for example, enable the customer to use our devices either with their own software, with third-party software, or to go with our offering—which is continuously growing, also as part of our industrial Edge ecosystem. And luckily for the AI modeling tools it’s the same—they’re getting reasonably exchangeable via standards. So you can let your AI engineers work with their preferred tool set and then port the result into your preferred industrial environment, including the operating system.

Kenton Williston: For sure, and that leads me to my next question. You mentioned the Siemens ecosystem, which is very robust, of course—you’ve been one of the world’s leaders in industrial automation for many decades now. And I think that’s an important thing for machine builders and systems integrators to keep in mind—what’s already out there, and how they can best leverage this existing infrastructure. So, what do you see as the critical considerations when it comes to working in these brownfield environments, and leveraging the existing infrastructure when you’re trying to launch some new AI technology?

Dr. Felizitas Heimann: Well, depending on the intended application, of course, you need to start at the assets and field devices level, and especially in brownfield applications—but in new applications as well—you will have an existing base. And you will have to ask yourself: are you able to collect all the data you need for your use case based on the existing sensors or other information? Or do you have to add more? And there is something you will definitely have to invest in, even in a brownfield application—there will usually not be excess headroom in computing capacity to run the inference at the desired speed. So, in most cases, an investment needs to be made to add additional Edge computing performance close to the application. Luckily, an industrial computer can easily be connected to the system via standard, well-known connectors and protocols.

Then, especially, our industrial Edge management software enables convenient server connection, remote deployment, and remote monitoring. And, again, there we take great care to develop it in a way that it blends smoothly into existing environments. And when planning a new line, of course, the industrial PC resource can be counted into the overall performance requirement, and can be chosen in a way that the inference can run in parallel with other applications—for example, local HMI monitoring.
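
As one example of the standard protocols mentioned above, the sketch below reads a single sensor value from existing equipment over OPC UA using the python-opcua library; the endpoint address and node ID are hypothetical, not taken from any specific Siemens line.

```python
# Hedged sketch: collect one value from a brownfield asset over OPC UA.
# Endpoint and node ID are placeholders for whatever the existing PLC or
# gateway on the line actually exposes.
from opcua import Client

client = Client("opc.tcp://192.168.0.10:4840")
client.connect()
try:
    spindle_temp = client.get_node("ns=2;s=Line1.Spindle.Temperature")
    print("spindle temperature:", spindle_temp.get_value())
finally:
    client.disconnect()
```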

Kenton Williston: That makes sense, and leads me to my final question for you—which is how machine builders and systems integrators can engage with Siemens and your partners, for that matter, to simplify the design process and get these really innovative new time series AI designs out there?

Dr. Felizitas Heimann: Well, there are many ways. For example, we’re building up industrial digitalization specialists all over the globe—for AI, for industrial Edge, for new PLM software tools like the digital twin, for example. No matter whether your current contact at Siemens has supported you on hardware or software topics, PLM software, or other areas—he or she can direct you to specialists who can support you, either in finding the hardware and software solutions when you just need the suitable components, or even in arranging a consulting service to go on the journey together with you—supporting you with deep industry and domain know-how. And part of the Siemens customer-individual offering can also be, for example, model monitoring—to be aware if parameters start to run away, and if retraining is needed. And we’re continually enriching our portfolio on the hardware and the software side as well. It’s really exciting to see how quickly things are moving in that field currently. And, for a starter, you can check that out at www.siemens.com/pc-based.

Kenton Williston: Well, with that URL, it’s a perfect place for me to end, and give some other places for our listeners to go for additional resources. So, thanks for providing that. And thanks for joining us today, Felizitas. Really appreciate your time.

Dr. Felizitas Heimann: Thanks for having me, Kenton.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from Siemens, you can follow them on Twitter @siemensindustry, and on LinkedIn at showcase/siemens-industry-us. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time, with more ideas from industry leaders at the forefront of IoT design.

Reopening Public Spaces with AI

As the world reopens, creating safe public spaces—from retail, to transportation, to leisure—is both a challenge and an opportunity. Digital signage technology can reassure customers that the environment is safe, but can it also deliver an advantage to businesses?

Kenton Williston, Editor-in-Chief of insight.tech, talks with Jaume Portell, CEO and Co-Founder of Beabloo, whose solutions combine digital signage with analytics and AI. We’ll find out how Beabloo can help businesses and government organizations make public spaces not only safe, but inviting—while providing great return on investment.

Kenton Williston: Jaume, welcome to the show. The first thing I wanted to ask you is what exactly led you to start Beabloo in 2008?

Jaume Portell: We started thinking that we could bring some of the intelligence that was deployed in the e-commerce sites—where e-commerce was controlling the message and measuring the impact of every single step in the process—to brick-and-mortar stores. And the challenge was very interesting, because we had to bring the analytics first, and then also control the digital-message delivery in those physical locations.

We use computer vision and other sensing technologies to understand how customers move, what they want, what they’re touching. And then we adapt the communication—the value proposition of the store—using signage, using electronic shelf labels, and also sending hints to staff in the store about how to serve customers better.

Kenton Williston: The world was totally turned upside down last year with the pandemic. Since then there’s been a complete rethink about how public spaces—whether those are retail establishments, or airports, or whatever—need to run their operations.

And so all these technologies you’ve been talking about—in terms of being able to observe the behavior of people in public spaces and create intelligent analysis of that—has become valuable for completely new reasons. And I’m wondering what you see as some of the biggest challenges in public spaces as we move more towards reopening.

Jaume Portell: Well, first of all, we’ve been creating technology to improve the customer experience. That is what our technology does. It looks at what people need and reacts to it. In times of pandemic, you want to deliver messages to customers to help them understand that they are in a space that’s protected and clean, and where measures of protection are properly taken.

And that is what digital signage does in this context. We have analytics in that digital signage that senses how those messages are being understood by customers. So communicating the new rules of the game is extremely relevant. That means technology that senses risk, and communicates that risk to whoever can help to protect the staff, to protect the other users of that physical space.

And we do this with computer vision. We also add in some additional layers of artificial intelligence to clear noise from that data. We’ve seen that the hardware—which in most cases is already there—lacked some additional intelligence for the situation we’ve been in. We created that intelligence; and that same hardware now helps you sell more, but at the same time also protects your staff and your customers.

Enhancing the Customer Experience

Kenton Williston: The implications around computer vision and AI—there are a lot of sensitivities around that. So, here in the U.S., for example, there’s been a lot of backlash—a lot of concern around how these technologies might be used, especially when it comes to sensitive things, like racial profiling.

But I know that you’ve done some really great case studies in airports that showcased the power of these technologies, and proved that people are really having a positive reaction to this technology. So can you walk me through what you did and the results you saw?

Jaume Portell: We deployed our intelligent digital signage network with audience analytics. So we sensed who looked at the content. And when we say “who,” we mean the computer-vision systems see faces and take note of, for example, “this looks like the face of a man of this age.” There’s no ID; there’s no way we can re-create an ID from that. If the system sees the same face again 20 seconds later, it thinks it’s another face—a different face, probably same age, same gender, but it has no clue that that is the same person. So no data-privacy issues at all.

The system doesn’t record images; the system has been trained to count faces. It’s very specialized to measure certain demographic characteristics, but they are 100% anonymous.

That doesn’t mean that the system cannot understand things that can be of certain risk. It can detect two bodies that are too close to each other, or it can detect someone not wearing a mask, and it can automatically trigger a message saying, “Please remember that you have to wear a mask to be in this physical space.”

So that is how the artificial intelligence that we are deploying in airports is enhancing and improving that customer experience: it’s controlling the messaging, and adapting it to the situation. Right now, if you walk into these spaces and I tell you how safe the space is, you will feel confident in walking in. You will walk in way happier about that customer experience, and your experience with that space will be more positive, and you will be more likely to purchase something from that location.
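
To make the anonymous-counting idea concrete in its simplest form, here is an illustrative sketch that counts faces in a single camera frame and keeps only the count; this is a generic OpenCV example under stated assumptions, not Beabloo’s actual pipeline.

```python
# Illustrative sketch: count faces in one frame without storing the image or
# any identity. Camera index and detector choice are assumptions.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture(0)              # a camera already present in the space
ok, frame = capture.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("faces in view:", len(faces))    # only the count is kept; no frame is saved
capture.release()
```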

Kenton Williston: But then there is the broader world of retail, and customer-facing establishments similar to transportation—like banks and food service establishments. What do you see as some of the big challenges there, and how are you addressing those?

Jaume Portell: So retail banking is in serious transformation all over the world. They’re trying to help us all use the banking systems online, and they also need to keep a certain number of branches open for higher-level, face-to-face services. Their goal is to improve the customer experience in those locations, and help us in the journey of financing our dreams while we’re in those locations. And that requires communication, and it requires customer experience. If our visit to the bank is a pain, then we don’t want to come back, and we might end up doing that financing by ourselves somewhere else.

And that is a lot about sensing, and it’s also about understanding what happened when you implemented certain campaigns. So banking needs sensing, needs digital transformation of the stores with computer vision—with those screens that explain the value proposition of the bank. And all this is perfectly possible with technologies like the one Beabloo is using, and it enhances that customer experience a great deal.

Adding a Layer of Intelligence

Kenton Williston: One last area that I want to touch on is, I think, probably the most challenging, if I’m thinking from a health and safety perspective. And that is in the space of hospitality and event venues and entertainment complexes—where the whole point of these facilities is to get people together and have them stay together for some period of time. So what do you see as being some of the most important trends in those sorts of applications?

Jaume Portell: Occupancy is extremely relevant. Understanding when these spaces are full of people or not is critical to cleaning them when they are not, and getting them ready for the next set of users. There’s a table—you want to know if it’s clean. Having computer-vision systems observing the space, to make sure that after someone uses it a member of the staff comes to clean it up, will give you the confidence that that place is okay.

Kenton Williston: The other big question I have is the cost question. So, it’s obvious to me how beneficial these technologies are. But I think there’s always a question of how affordable they are—especially given how tight budgets have been for a lot of the industries we’ve been talking about.

Jaume Portell: It is surprisingly affordable. Because how many retail spaces do you know that have security cameras, CCTV cameras? How many of them have digital screens? Many, right? The issue is not the hardware—it’s actually using the value that that hardware is generating to make the places smarter.

And that means connecting those streams from the cameras to a computer—an Intel® computer that can get the information out of those streams, make sense out of it, and react in real time using the digital signage that already was available in the front door of the location. So what we’re talking about here is injecting intelligence into the hardware that is already in those physical spaces.

We are talking about software deployment. It’s easy, it doesn’t require much installation, and it creates value by itself immediately. And, actually, it usually pays off the hardware investment as well.

You can use all of it to create more value with Beabloo Active Customer Intelligence Suite, which will sense with the cameras, sense with the Wi-Fi access points, and understand what’s going on. Understand what’s working, what’s not working in your digital campaigns, and improve the selection of them based on the hour of the day, based on the day of the week, to increase your customer service perception and the conversion to sales.

Sharing the Intelligence

Kenton Williston: I want to think a little bit about where we’re going next, and what organizations that manage all these different kinds of public spaces should be thinking about as we move forward into the next, post-pandemic era.

Jaume Portell: The first thing is, when you start thinking about improving customer experience, or taking care of your customers in a situation like the one we are in right now, you are sensing customers and giving them what they want as soon as possible. This is exactly what you want to do when the pandemic is over.

Today, this morning, was a sunny day in Barcelona. A sunny day has a completely different behavior pattern in the retail environment than a rainy day. Today is Thursday; the purchase pattern of customers in a particular store is different than it will be tomorrow, when it’s the day before the weekend. So these things are seen, perceived, and understood by machine learning algorithms that can explain them to the staff in the retail store. And it can tell them, “Look, today’s Friday, and this weekend there’s going to be good weather. So most of our customers will buy barbecue stuff from the supermarket.”

The intelligence is there also to empower the staff to provide a better customer experience. And that is the next big thing—sharing that intelligence with everyone. Sharing that intelligence with the customer, sharing that intelligence with the staff. This is using the intelligence of the store to help the staff to do their work better.

So we think that the next big wave is making sure that the intelligence is collected from the hardware as much as possible, so that everything that we see can improve that customer experience. The data is analyzed, and then transformed into action—action for the customers, and action for the managers of the stores.

Kenton Williston: So with that, I’d just like to say, thank you so much for joining us today.

Jaume Portell: Thank you. Thank you very much for your time.

Factory Digital Transformation Starts with Infrastructure

Smart Manufacturing. Industrial IoT. Industry 4.0. Whatever you choose to call it, manufacturers are racing headlong toward digital transformation.

But a number of manufacturing-specific challenges have slowed the pace of progress: the varied mix of machines on the factory floor, for example—which often come from different eras and vendors, and speak different languages.

Manufacturers typically have dealt with this situation by deploying point solutions for a single machine or use case. But even when this approach solves a particular problem very well, say predictive maintenance, its ROI is limited if the benefits don’t extend to the rest of the factory. And they can’t when the machines themselves—as well as their network standards, protocols, and infrastructure—are isolated from every other system in the factory.

Industry 4.0 demands the opposite: connected machines and integrated data, so manufacturers can expand an existing use case, add new ones, and easily repeat them across machines and factories.

Such scalability is achievable, but it requires a new approach—one informed by perspectives from the IT and OT sides of the business. Transforming the whole factory at once isn’t possible (or desirable). But it is possible to approach smart manufacturing projects, however small, with a focus on how to scale their benefits. And it starts with simplifying the underlying infrastructure.

Industrial Infrastructure for Digital Transformation

Deploying smart factory applications as pilot projects makes good sense for manufacturers leery of jeopardizing production. But the complicated maze of infrastructure systems left behind—what Todd Edmunds, CTO of Smart Manufacturing and Industrial Edge Strategy, Dell Technologies, calls “accidental infrastructure”—needs tidying up before any major digital transformation can take place.

How? “By taking what IT knows about enterprise-grade infrastructure, and deploying a streamlined version of it at the factory edge,” says Edmunds.

Specifically, Edmunds points out the benefits of hyperconverged infrastructure, which combines storage, processing, networking, and security. By bringing all of these capabilities into a unified platform, hyperconverged infrastructure offers a highly scalable and repeatable foundation for digital transformation.

But, of course, infrastructure is useful only when it can connect to industrial computers. That’s why Dell Technologies partnered with PTC, whose ThingWorx software serves as a translation layer between the varied pieces of technology on the factory floor (and their different languages) and the enterprise-style infrastructure. “PTC knows OT very, very well—which is a big part of the reason we partnered with them for this solution,” explains Edmunds.

In short, the hyperconverged, software-defined nature of the Dell Technologies and PTC solution makes scaling easy. Because the platform is templatized, new applications can be added easily over time in a drag-and-drop way. And there’s no need to work with specialized teams and equipment to add infrastructure to support them. Instead, it’s just a matter of adding more servers, which can be bought as a small capital expense upfront or, soon, as infrastructure-as-a-service.

“When manufacturers have infinite compute capabilities, they can dream up an infinite number of new use cases—and new heights of efficiency.” @ToddEdmunds @DellTech @PTC

OT-IT Convergence and the Seeds of Innovation

With such an elastic compute pool, even advanced use cases that require huge amounts of memory and storage—augmented reality applications, for example—can be quickly dropped on top of the infrastructure as needed.

This kind of efficiency generates huge savings. “Now manufacturers and their systems integrator partners don’t have to engineer a separate infrastructure for each new use case and then try to get them to communicate,” notes Edmunds. The Dell Technologies and PTC solution automatically integrates different sources of data, which gives manufacturers the time and money to use that data in innovative ways.

But to reap the full value of this IT-style, repeatable infrastructure, Edmunds says IT and OT must work together. “It’s not enough to bring these enterprise technologies to the factory floor, where they haven’t existed before,” he adds.

While IT is responsible for managing the higher infrastructure layers, and brings their large data center expertise to the task, OT owns the edge compute layers. So both camps need to be involved in any smart manufacturing projects from the start.

After all, even if OT has traditionally been slower to embrace change, they know the machine data and how it should be used better than anyone. And ideally, they would also come to embrace new technologies for how they can improve, not derail, production.

Next-Gen Manufacturing

One customer, a manufacturer specializing in aerospace systems, needed to improve its product lifecycle management so it could provide crucial equipment like propellers at a moment’s notice. So it deployed the Dell Technologies and PTC infrastructure solution to get enterprise-level, real-time visibility into its fleet.

Now it’s building next-generation use cases like digital twins—projects that will be worth billions of dollars. And it’ll be able to quickly scale them across hundreds of thousands of assets and locations.

Simplified, IT-grade infrastructure at the factory edge helps manufacturers leave technology silos and the slow march toward digital transformation behind. “Because when manufacturers have infinite compute capabilities, they can dream up an infinite number of new use cases—and new heights of efficiency,” says Edmunds.

AI, Security, 5G: New Intel® Xeon® Processors for IoT

Dr. Sally Eaves & Yazz Krdzalic


The 3rd Gen Intel® Xeon® Scalable processors are a big deal for the IoT. New AI accelerators, major security upgrades, and a ton of I/O and memory enhancements are just the start for these CPUs, formerly known as Ice Lake. Whether you are looking for more performance at the edge or better ROI in the data center, there is a lot to like.

In this podcast, we take an in-depth look at the new parts with Yazz Krdzalic, Director of Marketing and Business Development at Trenton Systems. As a leading provider of high-performance, high-security systems, Trenton Systems has a unique perspective on the ways the latest Intel® Xeon® processors fit into the IoT landscape. Together, we examine:

  • Which features of the 3rd Gen Intel® Xeon® Scalable processors matter for IoT applications
  • How to use tools like Intel® oneAPI and Intel® OpenVINO to accelerate development on Ice Lake
  • What’s trending in retail, transportation, hospitality, and education

We’re also joined by Dr. Sally Eaves, an expert in cybersecurity, IoT, 5G, AI, and blockchain. To get more of her perspectives on Ice Lake, see her blog Expert Review: 3rd Gen Intel® Xeon® Scalable Processors.


Transcript

Dr. Sally Eaves: I was really impressed with, for example, the improved AI-acceleration capabilities. It’s showing it’s a great alternative to GPUs as part of this, and other dedicated accelerators as well. So I was really impressed to see that.

Kenton Williston: That was tech expert Dr. Sally Eaves, and I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode on the IoT Chat I talk to industry experts about the technology and business trends that matter for developers, systems integrators, and end users. Today I’m joined not only by Sally, but also by Yazz Krdzalic, the Director of Marketing and Business Development at Trenton Systems. We’re going to take a close look at the new, third generation Intel Xeon scalable processor—formerly known as Ice Lake—as well as some of the surprising ways Intel is changing, and what all of this means for the IoT space. I got to live tweet the launch of the new chips with Sally a couple of weeks ago, and I was pretty impressed with what I saw. I can’t wait to dive deeper into Ice Lake, but, first, let me introduce our guests. So, Yazz, welcome to the show.

Yazz Krdzalic: Hi, Kenton. Thanks for having me on the show today.

Kenton Williston: Absolutely. So, what does Trenton Systems do?

Yazz Krdzalic: Yeah, sure. Trenton Systems makes trusted, cyber-secure, made-in-USA, high performance computing solutions—and that’s across the industrial, military, and commercial sectors. So our system supports critical IT infrastructures around the globe, and we help reduce latency, provide real-time insights at the Edge, while also securing sensitive and confidential information. So, in essence, if a use case requires lots of computing power, needs to operate around the clock in real time, needs to keep sensitive data encrypted or protected, and that’s in a climate-controlled or harsh environment, our trusted computing solutions answer that call of duty.

Kenton Williston: Great. Well, I look forward to learning more about that. But before we get into that, Sally, welcome to you as well.

Dr. Sally Eaves: Thank you, Kenton. Lovely to be with you.

Kenton Williston: I’m happy to have you here. Can you tell our listeners a little bit more about yourself?

Dr. Sally Eaves: Absolutely. So, I’m a Chief Technology Officer by background, and now mostly do advisory around emergent technology subjects. So, across cloud, cybersecurity, IoT, 5G, AI, and blockchain. I’m also passionate about tech as a force for good, so I have a nonprofit called Aspirational Futures. It’s very much around inclusion and diversity in tech, and really building that out. So, as well as tech change, I also look at the aspects of culture, skills, sustainability, and social impact that can underpin that too. And I’m also an author, and I’m very active online in building community around these subjects.

Kenton Williston: Yes, and speaking of being active online, like I said, we got to live tweet the launch together, which was really cool. So I want to start there, by talking about that launch event. What was your biggest takeaway from that event?

Dr. Sally Eaves: Absolutely, and you’re spot on there in terms of the event live tweeting—it was a really action-packed launch. I think one of the best event presentations I’ve seen in some time, to be honest. It was brilliant. A lot of superb content. I love the fact there were many real-world, tangible examples in there, and research that was underpinning things, so there’s such a lot to share. I think, for me, it’s hard to pick one area, because I really came up with four that came to the fore for me. So—built-in security, embedded AI, network optimization—but also “tech as a force for good.” So I think if I have to pick one that came to the fore and brought these things together, I’d actually focus on that, because before anything was mentioned around the amazing advancements in processor innovation, the event actually started with a focus on community, and the keynote was really asking the question: what’s your why? So I was super impressed to see that taking center stage.

I mean, there were initiatives such as Intel RISE that was spoken about—$20 million for social impact projects. So, for me, that’s a great example of shared values, and an impressive statement on tech leadership, and really listening to what, I think, what people are really wanting to see from technology organizations. So I thought I’d flag that, because I think that was a really, really impressive thing to do.

Kenton Williston: Yeah, absolutely. Totally agree with that—saw many of the same things as you did, and certainly not the least of those was the huge emphasis on AI. I think, certainly from the IoT perspective, that was one of the things that really stood out to me—a huge, huge emphasis on how much additional AI performance is in these new Ice Lake chips, compared to previous generations. And I’m curious what you think about why Intel had such an emphasis on AI, and what this says about the way computing is evolving.

Dr. Sally Eaves: Yeah. I think, for me, it reflects that AI is one of the fastest growing workloads out there at the moment. And I was really impressed with, for example, the improved AI-acceleration capabilities. It’s showing it’s a great alternative to GPUs as part of this, and other kinds of dedicated accelerators as well. So I was really impressed to see that. I think something else that came to the fore for me was the fact that this is bringing together inferencing capabilities across simulation and also data-analytic workloads. I don’t think that gets enough attention sometimes. I think we sometimes look at AI only with attention to analytics, but this simulation aspect is incredibly important for AI as well. Data-intensive simulation is so huge, so I was really impressed to see that. I think that’s going to be so important to emerging use cases.

For me, the other thing that came to the fore was the Intel DL Boost facility. Again, this AI acceleration is so important for areas, for example, around 3D—for gaming, for different use cases that are coming to the fore as we see this confluence really, across different tech trends. So this focus on AI had to be at the fore, so I was really impressed to see that, and maybe beyond that as well. I’ve talked a lot recently around data waste and model waste. I think some of the advancements here are really going to support things around AI ops and model ops approaches as well. We’ve already seen examples where data scientists are using the new processor to help build out and deploy increasingly smarter models, and improving that rate to go from POC to production. So, really impressive to see that, and it’s showing how we need these specific features, and how important they are, to have them for specific workloads too.

Kenton Williston: Yeah, that’s very interesting that you mentioned the modeling aspect, and I’m wondering what you’re thinking about there. What comes to mind for me are things like genomics—things like, if you’re an auto manufacturer being able to model parts. Are those the sort of things you have in mind? And just curious what stuck out to you—why that really drew your attention.

Dr. Sally Eaves: Absolutely, that’s certainly one of the areas that comes to mind. I mean, I was speaking with a leader in the NHS only this morning, actually, which I think would be quite relevant. He’s Head of Information at one of the leading innovation centers for healthcare in the UK. And he was talking about the experience that they’re having with their typical datasets. They’re historical, typically stable—in many cases, it’s just not fit for purpose any longer. They need a more open, predictive model. They need the capacity to retrain expediently, because if you think about what’s been happening in healthcare over the last 12 months, normal rules have not applied. Things around emergency care, people being fearful of going into hospitals, elective waiting time performance has changed—things have been put back—and re-admittance rates have changed as well. Things are drifting quicker than they would have done 12 months ago. And it’s a problem if you’re not cognizant around it, and if you don’t have the right tools in place to monitor and to adjust.

So I think these open, predictive models are really key around that. And some of the announcements I saw at the event I think really support everything that they are suffering from at the moment—the problems they’re experiencing. So I think we’ve got some great benefits there.

Kenton Williston: Yeah, for sure. And I know across industries the last year has really rewritten the books on a lot of models. So, for example, one of the things we’ve been talking about in the insight.tech program has been supply chain issues. And of course there’ve been all kinds of problems there—from a shortage of toilet paper, to ships getting stuck in the Suez Canal. I mean, it’s just been wild. So all of the, let’s say, Excel-based, very simplistic approaches to supply chains have been really badly broken by all the chaos of the last year, and AI has been extremely helpful in helping companies cope with those chaotic changes.

Dr. Sally Eaves: Absolutely.

Kenton Williston: Yeah, and that leads me to a question for you, Yazz. So, Intel is reporting some very impressive gains in performance in these AI, deep learning, machine learning kinds of areas. They’re talking about 1.5X faster across a suite of workloads, and just more broadly, the roughly 50% performance increase overall. So that’s, I’m sure, going to have a pretty big impact for your customers. So where do you see the big wins in terms of how this performance is going to help your customers, and things like industrial and other types of IoT applications?

Yazz Krdzalic: Sure. And I’m honestly quite tempted to say, everywhere—across the board for our customers. Because when we think about deep learning and machine learning, efficiency—or that repeatable execution of those complex algorithms that learn, forecast, and execute these tasks repeatedly at a rapid pace, without issues—is key. The reason really is that the better and faster the predictions you can make, the better the business outcome. So when you say 1.5X faster, or almost a 50% performance boost, that is truly monumental. As for where it will have the biggest impact in industrial or other IoT applications, I would really like to mention the fourth Industrial Revolution—or Industry 4.0, as we call it—which will benefit greatly from this performance boost. You start thinking about smart factories, smart sensors, IoT devices collecting lots and lots of data. So you have a ton of these end points feeding into a system with Intel’s third-gen Xeon CPUs. And, as Sally mentioned, with AI acceleration already baked into it.

So it’s truly a beautiful symphony. You’re receiving a lot of this data using these complex algorithms to make accurate, calculated decisions at record speeds. And one of the applications that jumps out immediately—think image recognition and analysis. Say you have a train going down the tracks carrying lots of precious, heavy cargo—that train eventually has to stop for a manual inspection, which takes lots of time, lots of resources—simply disrupts the flow. So now you put the 4K cameras, for example, on the side of those tracks with a ruggedized system, such as Trenton’s latest BAM server, using Intel’s third-gen Xeon CPUs. Now you are able to snap images of the train as it passes by, never stopping, and this image recognition and analysis can quickly scan, calculate, analyze, predict, and make decisions without stopping the train, if there is no need. So Intel’s latest CPUs can do this one-and-a-half-times faster, or almost 50% faster. I mean, how cool is that?

Kenton Williston: Yeah, it is cool. I’m glad you mentioned this rugged mobile example, because I think that’s one of the things that maybe wouldn’t have been immediately obvious to everyone attending the launch event. You think about Xeon, you typically think about the data center—climate control, giant racks of servers—and that’s certainly an important application for these parts, but they’re also extraordinarily important for the in-the-field Edge computing applications that you’re talking about there. And I think having this additional performance is really huge, because there are lots of applications—like the one you mentioned—where you just need immense amounts of compute power at the Edge to get the job done.

Yazz Krdzalic: Correct. And it’s honestly—you mentioned at the Edge, and it’s also at the tactical Edge, so harsher environments even. And so for that capability to be present, whether you’re in a climate-controlled environment or at that Edge, it’s still—you get that same performance boost. So, like I said, I’m tempted to say that the benefit will be seen everywhere—industrial and other IoT applications—simply because of that performance boost.

Kenton Williston: Yeah. And I would say even a lot of it comes down to scalability. I mean, this just puts a higher ceiling on the performance that is available at the Edge, which in some ways is an even bigger deal than the data center/cloud, where you can always put another server rack in. That’s not always so easy on your train, or wherever it might be out there in the field.

Yazz Krdzalic: Correct. Yep, exactly.

Kenton Williston: So the other thing that really stood out to me were the security features, and Yazz, you were talking about how, for your customers, this element of security is a pretty critical part of what they’re doing. And I would like to know more about what kind of security features your customers are asking for, and how the new features in Ice Lake might relate to that.

Yazz Krdzalic: Sure. It’s almost one of those—how much time do we have to talk about security? Because everywhere you look that seems to be a major topic now—or a major, I would say, sector of concern within the IT world. So our customers are very concerned about it, as you mentioned: whether you’re in the data center or at the Edge, the security of those systems has to be top notch. And we’re not just talking software-level concern anymore—we look at the system and the environment holistically. So, this means where your components are sourced from—so, an actively managed supply chain—and what these components provide. So, hardware-level security, firmware that’s running on the hardware, and of course, the software piece as well. We can call this—and I think a lot of people are referring to this now as confidential computing—we tend to refer to it as system hardening.

But, at the end of the day, our customers want to be protected from unwanted access to their data—to those sensitive pieces of information. And Intel security features such as TME, or Total Memory Encryption; SGX, Software Guard Extensions; and PFR, Platform Firmware Resilience, are just some of those key features that are truly enhancing system-wide security. Honestly, just hear those three terms I mentioned—encryption on the memory, guarding the software, and protecting the firmware code. That’s just crazy good, if you ask me. And you asked what our customers are asking about—exactly that. So that’s why Trenton Systems is an Intel house—it plays well with our customers, our dedication to system hardening, and our determination to reduce the attack surface and thwart any potential attacks, and to provide our customers with the most robust, most advanced cyber-secure solution powered by Intel the market has seen.

So I can tell you firsthand—our customers love that story. Why? It’s true, it’s tried, it’s tested across some of the harshest environments, running mission critical applications across the globe. So when it comes to security—truly I cannot stress enough, it is P1 for most of our customers at the moment. And having these features from Intel on this next-gen platform is just critical to what we do and how we provide our servers to the world.

Kenton Williston: I’ll tell you what Yazz, if I ever hear from Intel that they need a security evangelist, I’ll know who to suggest.

Yazz Krdzalic: Please do.

Dr. Sally Eaves: You’re hired.

Yazz Krdzalic: Nice. Wonderful. I love it.

Kenton Williston: Sally, I know you were also pretty impressed by the security features. Do you have anything to add to what Yazz said about those?

Dr. Sally Eaves: Yeah. I love Yazz’s comments there. I couldn’t agree more that security is front of mind. It doesn’t matter if you’re an enterprise or an SME—and we’ve seen a big acceleration in attacks and malware in the SME market. A report I’m doing at the moment, actually, has shown a 400% increase in vulnerability to cyber threats. So in every aspect, it’s front of mind. And so embedding this in by design, which is what we saw through some of the examples that Yazz was sharing just now—that breeds trust, it gives that confidence. So it’s absolutely vital. I couldn’t agree more strongly that this is such an advance forward. And, for me, another thing that came to mind specifically was Software Guard Extensions, or SGX. That’s using a hardware-based trusted execution environment, and it can isolate—it can help protect—specific application code, and also data in memory.

And, as Yazz was saying, to an extent it’s about these confidential computing and privacy-preserving techniques. So I’m really excited about that. The benefits of data sharing—or basically not sharing the data itself—are so critical. That healthcare example I gave earlier is one example of where we can get a lot of benefits. It was shown in the past that if you can enable that confidential computing, that safe data sharing, there are huge economic benefits from doing so. But beyond that as well, some work I’m doing with UNESCO—I can see some amazing benefits to being able to apply that in situ there. So I’m excited about where that’s going.

I think I’d also mention the acceleration as well—the Intel crypto acceleration—which reduces the impact of full data encryption and increases performance. So that’s hugely important as well for encrypted sensitive workloads. So I’m really impressed with where this is going. And I think the other thing I would mention as well—and I wrote about this in a recent blog—is that there’s a lot of support for community here. I love to see things that are supporting and enabling developers, reducing restrictions, enabling more time to be more productive and creative. I think we’re doing that with some of the advancements we’ve seen here.

Kenton Williston: Excellent. And just one point of clarification: when you say “SME,” I assume you’re talking about small to medium enterprises?

Dr. Sally Eaves: Absolutely, yes.

Kenton Williston: Okay. Got to check. I never know if Americans and Brits talk the same language almost. Maybe we’re talking about biscuits versus cookies; is it a boot or a trunk—who knows?

Yazz Krdzalic: I’m glad you clarified that. Here I am thinking, “subject matter expert.”

Kenton Williston: Right. So, of course we’re really only going to be able to scratch the surface in this conversation of all the new and improved features that were discussed at the event. But I do think it’s worth spending at least a little bit more time talking about some of the other things that really stand out for IoT-type of applications. So, Yazz, in particular, some of the things that stood out to me—and I think your customers would care a lot about—would be some of the IO upgrades like the upgraded PCIe. So can you tell me a little bit about what really stood out to you on that front?

Yazz Krdzalic: Sure. And honestly, it’s tough to pick just a few, but, as you said, we’ll have to condense for the purposes of this podcast. But we have an increased core count for a performance boost, so that, to me, stands out immediately, because, as you know, the different applications that we’re thinking about in the future just require more and more performance, and having an increased core count does just that. And, as you mentioned, PCIe Gen 4 support: one of our newest servers has 11 of these x16 PCIe Gen 4 slots, and they are talking directly to the CPUs. And speaking of that, they are also supported by increased memory bandwidth—memory capacities at six terabytes per CPU. And now you think about our latest BAM server—it has 24 DDR4 DIMMs that are talking to the latest Intel CPUs, which are communicating with the PCIe Gen 4 slots. That increases system-wide performance.

And Sally mentioned this a couple of times—AI acceleration already built in. I think she also touched on the crypto acceleration. The system supports Intel Optane persistent memory 200 series, which is the next-gen persistent memory—increases agility, and access to more data that’s closer to the CPU. Again, think speed here. And of course, as we mentioned before: the security enabled, and reliability through the roof. So those are just a few that jump out at the moment, but I think, to me, the list is truly endless as far as all of the benefits that our customers care about.

Kenton Williston: Totally agree, but I also have to point out that this is all great, but you need some software to take advantage of these features—one small consideration there. And it was interesting to me to see some of the frameworks called out in this launch event. And I think over the last couple of years, Intel has really been broadening its vision—not just thinking about itself as a chip company, but putting more and more investment into software frameworks. And I think, particularly when we’re thinking about high performance—whether it’s AI or other kinds of workloads—the frameworks they’ve built—OpenVINO and oneAPI—are particularly important in actually enabling you to use these high-performance features. So I’m curious how the folks there at Trenton are actually using these frameworks, and how your customers are using these frameworks to take advantage of the next-generation features.

Yazz Krdzalic: Sure. I’m sure one of our software engineers would love to talk your head off about this; I’ll try to summarize. Our in-house software engineers use oneAPI—specifically the system bring-up toolkit within it. It’s used with all new designs and helps with the BIOS customization that our engineers tweak on behalf of our customers, where or when needed. And that’s a key differentiator for Trenton Systems, and it helps our engineers work more efficiently. So the key takeaway here for me is that Intel is building the hardware and the tools for customers—OEMs such as Trenton, and the very engineers using them. It just goes to show how meticulous Intel really is when it comes to building a quality product, and how dedicated it is to enabling engineers to get work done better and faster than yesterday. And it really spans that entire ecosystem—from us as a manufacturer of these systems, to the customers we sell them to. You have the performance, the security, and everything we’ve mentioned before, but you also now have a lot of these tools that enable you to do your job better, faster.

Kenton Williston: Yeah, absolutely. And I think, again, it’s lovely to talk spec sheets and geek out with the engineers, but at the end of the day, it’s those real-world benefits that matter. Can you more readily catch a defect, or increase the security and safety of your system, or whatever it might be? Those real-world benefits are what actually matter. And to that point, Sally, that was something that actually stood out to me about this launch. I agree—this was one of the best launches of a tech product that I’ve seen in quite some time, and I felt that way because there was so much emphasis on real-world benefits. And I loved how Intel brought in all of its customers and partners—it wasn’t just Intel tooting its own horn, but people talking about what they’re doing, and the real-world advances that are powered by this technology—not least of which are the advances in doing things for the benefit of society and the environment. And so I’m curious what you think about this different philosophical direction Intel is taking.

Dr. Sally Eaves: Absolutely. And I have to say I loved it—I really, really did. I thought in the keynote, the phrase “new day” was mentioned, and I thought that summed it up really, really well, and there are lots of different points to that. So there was a lot of listening to the community that was going on, as you were saying there—showcasing real-world projects, which I absolutely love to see. But there was integration, as we talked about just now—bringing together software, silicon platforms, packaging, process-at-scale manufacturing, and really showing how to tackle the problems that organizations are suffering from. So I love that there was a focus on supporting customers—reducing complexity. I love the fact that we were talking about how to support when we go into the performance stage of things. And so one other thing I noticed as well was there felt like there was a clear focus—strategy change, and really looking at how to build upon the performance that Intel’s delivering as you move into deployment.

So not just focusing on the next generation, but supporting what’s already shipped with continual updates, etc., as well. I think that addresses a lot of the problems people can have in the field, so it’s great to see. There were also a lot of complementary launches going on, so it felt like a real development across the portfolio, which I think is hugely important for giving that adaptability and performance across various workloads. It feels, again, like more of a systems-level solution approach. And I think that’s very, very important to support organizations, support customers, and reduce complexity—which comes up again and again as one of the biggest challenges, particularly across distributed environments. So, great to see that.

And going back to “tech for good” and listening to the community—there’s a sustained commitment there. I believe wholeheartedly that leadership in technology goes beyond the tech—it’s the people behind it, it’s what you apply the tech for. And the fact that this is a continuation—back in April 2020 we had the Pandemic Response Technology Initiative that Intel launched, and that did an incredible job. Only a year later we have this new approach with the Intel RISE Technology Initiative. So we’re seeing that it’s not a one-off, it’s not an add-on—it’s truly embedded, and it’s supporting projects. And it’s supporting collaborations as well—bringing people together, co-creating solutions, bringing together technology and talent. And there was that takeaway quote at the end: “technology is magic”—I could not agree more. It was really inspirational. That’s what I just love to see, and it changes the narrative around what we can do with technology. So I was hugely impressed by that.

Kenton Williston: Yeah, absolutely. And Yazz, I’d really love to hear—you are an Intel customer—whether you’re seeing any difference in the experience of working with Intel, and also whether you find yourself thinking about the technology landscape a little bit differently these days. And if so, how are you thinking about things?

Yazz Krdzalic: Of course, certainly. And I’d also like to say that the launch was one of the best things I’ve seen. Being the marketing guy, I geeked out all over it. I loved the special effects and the content within it. And speaking of changes, that alone shows dedication right out of the gate—that Intel wants to differentiate itself and grow as a company. So, as an Intel partner, we get to stand shoulder to shoulder with Intel in developing, testing, and perfecting these next-gen technologies for our collective customer base. Intel has invested so much over the years, and it seems to have doubled down on the strategy of improving its products, services, processes, procedures, and the Intel brand, to serve the greater good of all. It truly feels that the mission has become the betterment of all, rather than just the company.

And Intel is also determined to make its partner ecosystem a collaborator in tomorrow’s technologies, and Trenton Systems is truly honored to be part of this mission. As I’ve stated before, we’ve been an Intel house since our inception in the late ’80s. We’ve come a long way since then, and the quality of the product has only improved, and, with it, the trust from partners as well as customers. We’re currently working on our sixth early access program to collaborate with Intel on next-gen technologies, and we’re truly excited to be part of that story and effort, and look forward to supporting our customers with Intel’s latest and greatest for generations to come.

Kenton Williston: Wonderful. Well, speaking of generations to come, I’d like to wrap up with a question for you, Sally: given everything we’ve talked about—the AI, the security, the “tech for good,” and so on—what are some of the key trends we should keep an eye on as we move forward?

Dr. Sally Eaves: Absolutely. And I love that summary of Yazz’s just now—it was fantastic to hear. For me, number one, just reflecting on the “tech for good” aspect for a minute, is the rise of the social business. There was a huge statement around embedding impact, embedding community, and embedding collaboration at the very heart of digital transformation—not on the periphery. So, very much a shared-values model of business—it was great to see that. In terms of tech trends, I think we’re seeing continued confluence across four key areas: hybrid cloud; AI—as we’ve talked about in depth today—being infused more into the edge, network, and cloud; and 5G deployments. We’re also seeing the heightened demands of 5G really pushing compute resources closer to where data is created and where it’s consumed.

So it’s really about this integration, this confluence of these trends going forward. And I think we’re going to see a huge escalation across those areas—particularly as 5G becomes more and more widely adopted over the coming months. So I’m excited to see where that’s going. And again, this flexibility around performance, which was at the very heart of the event—“performance made flexible” was really the keyword—is going to be hugely important as these trends continue. Other areas to look at: the data center of the future will look incredibly different, more distributed in location and size, and built on both private and public cloud computing. Storage and memory will be increasingly disaggregated. Security, as we’ve already seen, will continue to be architected at the chip level and continually enhanced. And there will be increased flexibility and a bringing-together—as I was talking about in terms of strategy—across hardware and software, and now obviously applications and services as well. So, yeah, I’m really excited about where we’re going here, but the future is definitely integrated.

Kenton Williston: Wonderful. Well, that just leaves me to say thank you, Sally, for joining us.

Dr. Sally Eaves: My pleasure. Love to speak to you both.

Kenton Williston: And thank you, Yazz, as well. Really appreciate your time.

Yazz Krdzalic: Certainly. Thank you.

Kenton Williston: And, of course, thanks to our listeners for joining us. You can keep up with the latest from Trenton Systems by following them on Twitter—you will not be surprised: @TrentonSystems. And you can find Dr. Sally Eaves at—again, no big shocker here: @sallyeaves. That’s S-A-L-L-Y, E-A-V-E-S. And if you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Your Guide to Dell Technologies World

Want a first look at the latest in AI, ML, and edge-to-cloud technologies? Come to Dell Technologies World 2021 on May 5-6. At this free virtual event, you’ll connect with technology gurus, engage with interactive demos, and hear from thought leaders. Register today and read on for our list of must-see events.

Grow Your Edge Compute Toolbox

Discover how Dell’s embedded next-gen technologies can give you a leg up in the session Designing Industry-Grade Platforms for Performance and Remote Management at the Edge.

For even more, find out how Dell PowerEdge computing helps customers innovate with advanced compute technology for AI and the Edge in the session Discover the Dell Technologies Innovations That Power the Next-Generation PowerEdge Servers.

Take a deeper dive and get all the details on the Intel® 3rd Gen Xeon® Processor—the power behind the latest Dell PowerEdge servers. Add this session to your must-see list: Discover How Next-Generation Processors Can Boost Your Key Workloads to learn how the latest Intel® processors enable optimal data access, performance, and security.

While we’re on the topic, dynamic hardware security helps IT departments become less reactive and more strategic as they build resilience against threats. Check out the on-demand session Secure by Design: Delivering Results While Relieving Pressure to learn more.


AI at the Industrial Edge

It may go without saying, but we’re saying it anyway. There’s no Industry 4.0 without edge AI. This makes the live session Advancing Industrial Intelligence: Getting New Insights at the Edge with AI a must-attend. See how AI-powered edge technologies bring innovation on the factory floor—and beyond.

Another don’t-miss session is Get Smart: Intelligent Solutions at the Industrial Edge. Here Dell, VMware, and PTC discuss how a smart, scalable, end-to-end design approach can transform manufacturing operations.

Way out on the edge you’ll find this on-demand case study: Edge Computing and AI Help Keep Trains Running Safely. Hear how using edge compute, advanced real-time analytics, and AI to monitor rail cars is keeping railways safe.

Technology’s Real Impact

Technology trends have implications for the way we live and work, the way we connect with one another, the global economy, and beyond. In the guru session Shaping The Data-Driven World, Dell Technologies CTO John Roese and entrepreneur Jennifer Zhu Scott explore the wide-reaching impact of high tech.

COVID has certainly shaped the world, and nowhere more so than in healthcare. An expert panel shares its knowledge in the session Accelerate Digital Transformation for Healthcare-Life Sciences with a Modernized Technology Strategy.

The retail market—also hard hit by the pandemic—is fraught with challenges. There’s a lot to learn from M3T, which in collaboration with Intel is tackling these challenges—for today and tomorrow. Tune in to A Retail Journey — The New Normal to see what they’re up to.

Finally, your edge-to-cloud deep dive won’t be complete without a brief history of Intel’s 25-year partnership with Dell. Spend a few minutes watching the Interview with Navin Shenoy, Intel Exec VP and GM, Data Platforms Group, and find out what this longtime collaboration means for you.

Dell Technologies World 2021 is your free pass to the AI, ML, and edge compute technologies that are unlocking the possibilities of our data-driven world. Come see how you can harness their transformative power—and be ready for whatever comes next. Sign up today!

Smart-Building Security: Beyond Access Control

Many modern buildings are carefully designed to protect occupants from physical harm but are far less guarded against digital intrusion that can compromise their less-tangible assets. And even with cyber defenses in place, they are typically focused on PCs and servers, not the IoT endpoints and legacy control systems that turn a structure into a “smart building.”

A deeper conversation around cybersecurity infrastructure is required to prevent an increase in widely publicized cyber incidents like the retailer Target data breach and the infamous casino fish tank attack. Part of that conversation starts with recognizing that securing the various interconnected systems in a modern building is a very real challenge. That’s why one company, Veridify, a developer of security IP and tools, created the Device Ownership Management and Enrollment (DOME) platform to deliver a comprehensive security solution to address the range of challenges buildings face.

Smart Building Security: Detecting Things That Go Bump in the Wire

The task of securing a building-wide heterogeneous network with several thousand potentially vulnerable edge devices ranges from daunting to practically impossible. The different vendors that provide elevators, lighting, fire monitoring, and HVAC equipment all have their own systems.

Add the specialized IoT platforms now being deployed across many companies, and you’ve got a wide range of assets, each with its own standards, communication protocols, and supported features.

Of course, the greater the number of different systems and devices, the more impractical it becomes to implement unique security measures for each. One alternative would be to use a single security blanket that covers all the connected systems in a smart building. In that case, a small-footprint software agent, capable of establishing secure communications tunnels between systems and a security-as-a-service (SaaS) solution, could be loaded onto each device at the time of manufacture.

Unfortunately, most of the connected security systems in smart buildings existed before the building became “smart”. This means that such an agent would likely have to be added to each system individually and then integrated with the larger security platform, which is a non-trivial endeavor.

But instead of attempting to secure every edge device individually, smart building operators could secure the communication lines of connected devices themselves using what’s known as bump-in-the-wire (BITW) technology.

A BITW is a security tool that can be inserted into a communication channel between two or more pieces of equipment, without impacting performance. In a network security context, a BITW would sit between a group of endpoints or edge devices and the rest of the building network, authenticating messages as they pass by.

To work effectively within a smart building system, a BITW must be compatible with multiple network protocols, support industry standards for IoT security, and implement strong cryptography without adding latency. Working in tandem with a database of unique identifiers for every device on the building’s network, it can ensure that any device being used to access the network has the authority to do so.
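
As a purely illustrative sketch (not Veridify’s implementation), the logic of a bump-in-the-wire checkpoint can be thought of as a forwarder that looks up each device in a registry of known identifiers and verifies a message authentication tag before passing traffic along. Every name and key below is hypothetical.

```python
# Illustrative only: a minimal bump-in-the-wire (BITW) check, not Veridify's DOME.
# Assumes each enrolled device shares a per-device key with the BITW appliance.
import hashlib
import hmac

# Hypothetical registry of enrolled devices: device ID -> per-device secret key
DEVICE_REGISTRY = {
    "hvac-ctrl-07": b"hvac-secret-key",
    "elevator-02": b"elevator-secret-key",
}

def authenticate_frame(device_id: str, payload: bytes, tag: str) -> bool:
    """Return True only if the device is enrolled and its MAC tag verifies."""
    key = DEVICE_REGISTRY.get(device_id)
    if key is None:
        return False  # unknown device: drop the frame
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def forward_if_valid(device_id: str, payload: bytes, tag: str) -> None:
    """Pass authenticated frames on to the building network; block everything else."""
    if authenticate_frame(device_id, payload, tag):
        print(f"forwarding frame from {device_id} to building network")
    else:
        print(f"blocking frame from {device_id}")

# Example: a legitimate frame passes, a tampered one is blocked
key = DEVICE_REGISTRY["hvac-ctrl-07"]
good_tag = hmac.new(key, b"set_temp=21", hashlib.sha256).hexdigest()
forward_if_valid("hvac-ctrl-07", b"set_temp=21", good_tag)
forward_if_valid("hvac-ctrl-07", b"set_temp=35", good_tag)
```

A real BITW device performs this kind of check in hardware or firmware at line rate, which is why low-latency cryptography matters so much in this role.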


Under the DOME Security-as-a-Service Solution

Veridify’s DOME solution operates similarly to a combined VPN service and device authentication platform: Endpoints do not need to connect directly to a cloud, BACnet, or any other type of operational technology (OT) network, only to their respective BITW owners.

Through these, DOME offers a secure, encrypted tunnel for smart building device authentication over a range of protocols, including BLE, BACnet, KNX, OBIX, Wi-Fi, and others.

The platform’s security starts with devices that have been provisioned with a security library, including public-key credentials. These credentials are signed into an immutable blockchain, which gives each endpoint an unalterable identity and the ability to authenticate its owner; that identity is stored and managed by a DOME Interface Appliance (DIA).
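
To make owner authentication concrete, here is a minimal, hypothetical sketch (not the DOME security library) of how an endpoint holding an owner’s public key could verify that a command really was signed by that owner, using Ed25519 signatures from the widely used Python cryptography package. Key handling and enrollment are deliberately simplified.

```python
# Illustrative only: public-key owner authentication on an endpoint.
# Not the DOME client library; enrollment and key storage are simplified.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment (hypothetical): the owner's public key is provisioned onto the device
owner_private_key = Ed25519PrivateKey.generate()   # held by the owner, never the device
owner_public_key = owner_private_key.public_key()  # provisioned into the endpoint

def endpoint_accepts(command: bytes, signature: bytes) -> bool:
    """The endpoint acts on a command only if the owner's signature verifies."""
    try:
        owner_public_key.verify(signature, command)
        return True
    except InvalidSignature:
        return False

# The owner signs a firmware-update command; the endpoint checks it
command = b"apply_firmware:v2.4.1"
signature = owner_private_key.sign(command)
print(endpoint_accepts(command, signature))                  # True: genuine command
print(endpoint_accepts(b"apply_firmware:evil", signature))   # False: signature fails
```

In DOME, the credentials and identities involved are anchored to an immutable blockchain and managed by the DIA, rather than provisioned ad hoc as in this sketch.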

The DIA can support both legacy and quantum-resistant protocols used by these endpoints, building automation controllers, and the central building management system (Video 1). This allows it to deliver secure firmware updates and building-specific configuration changes, report device status, and flag any attempted attacks on the cyber infrastructure.

Video 1. DOME provides comprehensive smart building security, even for devices that aren’t directly secured by DOME. (Source: Veridify)

For newer systems that have yet to be deployed, the DOME Client library can be installed on endpoints while consuming only 12 KB of ROM. This allows it to be deployed on even severely resource-constrained systems.

But certain devices are not candidates for direct protection under DOME because they can’t be updated, are legacy systems, or for other reasons. In these cases, DOME can be extended through a BITW architecture, using hardware security controllers like Intel® MAX® 10 FPGAs that reside on the communications path between the endpoints and the network.

The DOME plus BITW topology allows legacy controllers and older systems to exist alongside more modern smart building systems without being left vulnerable. And thanks to the performance and flexibility of MAX 10 devices, security can be delivered at ultra-low latency over a variety of communications transports, even under load.

Make Smarter Security the Standard

Of course, this is just one aspect of the overall smart building cybersecurity conversation. Other talking points include threat modeling and assessment, physical device security, and cloud security access controls, to name just a few.

In the long term, cybersecurity standards for buildings will need to be defined, possibly in a manner analogous to the Leadership in Energy and Environmental Design (LEED) certification process in use today. This could provide a framework for securing smart building systems, and for rating facilities by how well their networks are protected, in the same way they’re rated today for physical safety and environmental standards.

When these standards emerge, technology like BITW and DOME will provide a path for older facilities with an array of automation systems to comply with evolving security requirements—without needing to replace the entire system.

 

This article was edited by Christina Cardoza, Associate Editorial Director for insight.tech.

This article was originally published on April 28, 2021.