Systems Integrators Deliver the Smart Factory

Plant managers are on a mission to improve overall equipment effectiveness and optimize their lean manufacturing approach. Unplanned downtime and product quality are their key concerns, which means they need to know that the machines on the factory floor are running well. If not, they face reduced productivity and quality, along with increased waste and costs.

Leveraging the power of innovative technologies allows for greater understanding of critical equipment health—eliminating unwelcome surprises. This requires the ability to collect and make sense of the right data at the right time. Predicting and planning for downtime based on identified needs, not just a calendar, reduces these surprises—the holy grail of smooth operations management.

AI, computer vision, and real-time analytics are making this a possibility, enabling the digital transformation that companies need to be more agile in an ever-growing competitive environment.

Not only do these technologies and solutions create new opportunities for manufacturers, they do so for the systems integrators (SIs) that manufacturers have come to rely on. But do traditional SIs skilled in daily manufacturing operations have the IoT experience required to deploy these advanced edge-to-cloud IoT platforms?

By working with IoT solutions integrators—which can help source, deploy, and manage advanced edge-to-cloud systems—the answer is a resounding yes.

Partnerships Streamline IoT Projects

One company that’s guiding IoT innovations for its customers is the integrator Arrow Electronics, Inc., a technology provider with a portfolio that spans a broad range of IoT and other solutions. “We’re bringing all the right parts and pieces together, and the skills and expertise from our partners to make something great,” says Andy Smith, director of OT Alliances, EMEA, at Arrow.

For example, with partners like ADLINK, a manufacturing solutions provider, and SAS, a leader in data analytics, Arrow helps SIs bring the latest technology solutions to their customers in the simplest way possible. And Arrow can provide holistic smart factory solutions instead of working under a break-fix, fee-for-service model that can be expensive and inefficient.

“Imagine knowing how many hours a machine could run before requiring maintenance,” says Smith. “That kind of knowledge allows customers to plan accordingly and ensure backups are ready to keep production humming. Now extend that level of insight to all machines across the entire operation. The impact on productivity and savings is remarkable.”

This kind of predictive maintenance is an obtainable goal with a scalable, end-to-end platform like the ADLINK Edge MCM Quick Start. This advanced solution, delivered by Arrow, is a prime example of what’s possible when an ecosystem of partners joins together to solve common manufacturing problems (Video 1).

Video 1. AI-enabled predictive maintenance cuts manufacturing costs and increases machine uptime. (Source: ADLINK)

AI Goes a Long Way

The ADLINK system was designed as a turnkey solution to monitor the health of tooling on CNC machines via vibration detection. And because thousands of consumable parts break and wear down in a factory environment, the same technology can be applied across the whole shop floor.

The solution’s edge software ingredients include data aggregation, normalization, alerting, and device management. And SAS Event Stream Processing provides advanced analytics and automated decision-making. This end-to-end combination allows SIs and their customers to build a health score for all those consumable parts, not just CNC tools.
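
As a simplified illustration of how such a health score might work (a hypothetical sketch, not ADLINK’s or SAS’s actual analytics), vibration energy can be compared against a healthy baseline and a known failure level:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a window of vibration samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def health_score(samples, baseline_rms, failure_rms):
    """Map current vibration energy to a 0-100 health score.

    100 = vibration at (or below) the healthy baseline;
    0   = vibration at (or above) the level associated with failure.
    """
    current = rms(samples)
    ratio = (current - baseline_rms) / (failure_rms - baseline_rms)
    return round(100 * (1 - min(max(ratio, 0.0), 1.0)), 1)

# A worn tool vibrates harder than a fresh one (illustrative numbers):
fresh = [0.9, -1.1, 1.0, -0.8]   # low-amplitude samples
worn  = [2.8, -3.1, 3.0, -2.9]   # higher-amplitude samples
print(health_score(fresh, baseline_rms=1.0, failure_rms=4.0))  # near 100
print(health_score(worn,  baseline_rms=1.0, failure_rms=4.0))  # much lower
```

In a real deployment, the baseline and failure thresholds would be learned from historical sensor and failure data rather than hard-coded, and the score would be computed continuously at the edge.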

“So this product is just the tip of the spear for leveraging digital transformation to monitor asset and equipment health at scale on the factory floor,” says Daniel Collins, Senior Director of Edge Solutions for ADLINK.

And while using vibration detection to predict failure of CNC machine parts isn’t new, ADLINK’s edge-to-cloud strategy makes its solution unique. “We don’t believe that every piece of data needs to go to the cloud for it to become valuable; that can get expensive fast,” explains Collins.

That’s why data aggregation, normalization, and analytics run at the edge on an Intel® processor-based gateway. And it becomes more powerful when data is sent strategically to and from the cloud—useful for measuring trends across an entire fleet of machines over time.

Even more important, manufacturers don’t need a data scientist to run it. “This solution was built for the operations and plant managers—with a simple GUI that’s easy to use and leverage,” adds Collins.

Smart Factory Savings

One small parts manufacturer was changing cutting tools almost daily to prevent failures and plan for downtime. As a result, these tools were being replaced far ahead of their true failure point—a waste of human and financial resources.

Moreover, because people were an integral part of stopping and starting the machines, errors were inevitable. It wasn’t always obvious when a machine was running the wrong job until it finished, resulting in lost time, scrapped parts, and ruined tool tips.

With Arrow’s help, the company deployed the ADLINK system, which provided the visibility necessary for more accurate asset life cycle analysis based on the machine running each job. Because the solution ties a health score to a likely failure, the manufacturer can now predict with much greater precision when tools need replacing. The results are lower costs and increased uptime.

With the combined expertise of a solutions integrator such as Arrow, an ecosystem of partners, and powerful edge computing, systems integrators can better help their customers benefit from similar results—and generate new revenue streams in the process.

The Many Sides of 3D Imaging

You may not think the duties of a nurse and those of a road crew member would have a lot in common, but the procedures they follow to repair a wound or a pothole require similar preparation. Traditional ways to measure a 3D object involve low- or no-tech tools, but those methods are inaccurate and limited.

A better way is to use 3D imaging. While the technology isn’t new, previous tools were bulky and expensive. Now, advancements in computer vision have led to the development of smaller and smarter cameras that have greater capabilities.

By combining 3D imaging tools with the latest edge-to-cloud technologies, industries can transform traditional measurement procedures and enhance their results.

“The key factor in repairing any ulcer or burn is depth,” says John Miller, CEO of GPC Systems Ltd., a computer vision solution provider that specializes in 3D, AI, and image analytics. “You could measure around it, but to get the depth you have to use a probe—not very pleasant for the patient.”

Road repairs follow the same pattern. A pothole is nothing more than an ulcer on a road, and road crews have been using metal rods that look like a child’s growth chart to measure depth. “Until now there hasn’t been a cost-effective solution, but using the Intel® RealSense camera has revolutionized the process,” Miller explains.

Object Recognition Improves Efficiencies

Systems integrators that serve the healthcare or infrastructure industries can leverage products such as GPC’s WoundMeasure and HighwayMeasure to solve their clients’ real problems. While the use cases may be different, both offer a consistent return on investment by improving productivity and efficiency. For example, in healthcare, providers can use WoundMeasure to quickly analyze and treat a patient to release a bed (Video 1).

Video 1. 3D imaging enables new applications in health tech. (Source: GPC Solutions)

On roadways, crews can make repairs more efficiently. And the solution offers the ability to predict and perform maintenance, reducing the risk of emergency repair work and the legal claims that can follow.

“You can bring down your cost of repairs very quickly if you mount the cameras on cars and have them run alongside your parking lot or road on a regular basis,” says Miller. “You can collect data and start reducing your insurance costs. With the Intel® AI suite, the solution can predict what’s going to happen in the future.”

WoundMeasure and HighwayMeasure demonstrate the versatility of the solution, and they’re just the start. GPC recently leveraged the technology to assist freight logistics companies with its FreightMeasure tool.

“Airlines don’t just carry your suitcases in cargo; they also carry freight,” says Miller. “They could be flying flowers from Holland or live lobsters from Maine. The more efficiently they can fill the space on their planes, the more revenue they can make.”

To create FreightMeasure (Figure 1), GPC mounts cameras to forklifts to measure the different-size boxes and create a plan on how to properly fill the plane cargo containers. Done manually, the process would take too long and cause shipments to miss flights.
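
To give a flavor of the planning problem FreightMeasure automates, here is a minimal sketch using a first-fit-decreasing heuristic over measured box volumes. The heuristic and the numbers are our own illustration, not GPC’s actual algorithm:

```python
def plan_containers(box_volumes, container_volume):
    """First-fit-decreasing sketch: assign boxes (by volume) to containers.

    Returns a list of containers, each a list of box volumes.
    """
    containers = []   # each entry: [remaining_capacity, [box volumes...]]
    for vol in sorted(box_volumes, reverse=True):
        for c in containers:
            if c[0] >= vol:          # box fits in an open container
                c[0] -= vol
                c[1].append(vol)
                break
        else:                        # no open container fits: start a new one
            containers.append([container_volume - vol, [vol]])
    return [c[1] for c in containers]

# Boxes measured (hypothetically) by the forklift-mounted camera, in cubic meters:
boxes = [1.2, 0.8, 2.5, 0.5, 1.9, 0.6]
print(plan_containers(boxes, container_volume=3.0))
# → [[2.5, 0.5], [1.9, 0.8], [1.2, 0.6]]
```

A production loading plan would also account for box dimensions, weight limits, and stacking constraints; the point is that once the camera supplies accurate measurements, the packing decision itself can be automated.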

Object detection calculates dimensions for optimal placement of boxes stacked on a pallet.
Figure 1. Automated measurement to maximize packing. (Source: GPC Solutions)

“The solution helps companies quickly move goods in and out of a warehouse, increasing productivity and efficiency and making sure they’re not losing any revenue by missing flights,” says Miller. “The underlying benefit for all of the uses is revenue gain.”

Computer Vision Measures Up

While these solutions sound straightforward, implementing them requires advanced AI. GPC uses Intel RealSense cameras to capture data through handheld methods or by mounting the device to a tablet, forklift, or vehicle.

The camera can be connected to the user’s existing hardware or software systems. GPC’s software is multifunctional, with analytics done on the back end. Data can be recorded in an Excel format or pushed to the cloud for external hosting, where it can be viewed from anywhere at any time.

“Data goes back to a central location where the knowledge base is located,” says Miller. “It can tell them how to make the repair and what treatment to use. It could analyze a piece of tarmac or a leg ulcer. It’s the same principle.”

GPC’s WoundMeasure, HighwayMeasure, and FreightMeasure comprise a single Intel® IoT RFP Ready Kit that systems integrators or end users can grab, download, and go. And depending on the environments or use cases, GPC also offers bespoke development and integration for software configuration.

Having a partnership with Intel® has opened up GPC Solutions to new opportunities. “Whenever we come across a new innovative project or an industry that we haven’t quite broken into, such as our recent success with the logistics solution, we can work together to make it happen for the client,” says Miller. “Being able to utilize our knowledge in 3D dimensioning with Intel’s technologies and capabilities means that we provide an ever-better service and an ever-better product.”

Rethinking Industrial PCs for Industry 4.0

Artificial intelligence (AI) and analytics are transforming manufacturing, ushering in Industry 4.0. Already valued at $1.1 billion, the industrial AI market is expected to reach $16.7 billion by 2026, according to research firm MarketsandMarkets.

While there’s enormous potential for the technology, there’s also a challenge. Current industrial infrastructure wasn’t built to provide the flexibility necessary to unlock Industry 4.0’s potential. Workloads are traditionally run on siloed devices that can’t store and analyze data.

“Every big innovation or transformation that’s happened in the past couple of hundred years has had a foundational device that enabled it to occur,” says Adam Berniger, Senior Segment Manager for Industrial PCs (IPCs) for the Intel® IoT Group. “For example, it’s been the steam engine, the personal computer, or the smartphone. The pathway technology that’s driving Industry 4.0 is the industrial PC.”

The modern industrial PC brings together IT and operational technology (OT) to analyze volumes of edge data that can be collected from factory machines and turned into business intelligence. To facilitate the digital transformation, manufacturers are turning to systems integrators (SIs) to be the experts who help them tap into the valuable information on the factory floor.

Industrial PCs: The Vital Connectors

This represents a major opportunity for SIs. “The edge is growing in strength and value,” says Berniger. “There are three emerging themes that center around the edge: intelligence at the edge, convergence of the edge, and connection to the edge.”

Intelligence at the edge includes analytics, insights, and AI. “It can drive smartness closer to where things are done in a factory,” says Berniger. “Use cases can include predictive maintenance and analysis, improving operational effectiveness, and boosting productivity.”

Convergence of the edge encompasses OT and IT best practices, processes, devices, functions, and applications. Connection to the edge means connecting unconnected devices both to one another and to the cloud.

Berniger likens the potential of the industrial PC to the smartphone. “Five or 10 years ago, if we were going on a trip, we had a ‘dumb’ phone for making calls or sending text messages,” he explains. “We had a camera to take pictures, and we had to print off a map to know where we were going. We had multiple devices for separate functions, but today all of those things are converged onto a phone. The industrial PC is going to be like the smartphone, and you’ll have an app store where you can download hardware and software for many different functions.”

SIs Can Lead Factories to Industry 4.0

SIs have an opportunity to be at the forefront of the transformation, and to bring the industry into the future by providing integrated systems using industrial PCs from Intel® partners, such as Axiomtek.

A good place for SIs to start is by providing solutions that maximize the investment, such as reducing waste, enhancing product quality, or improving factory equipment uptime. “SIs have to make sure that there’s a clear ROI, a path to make the solution cost-effective. They also need to focus on operational metrics, and verify cybersecurity so companies know they’re not opening themselves up to risk,” says Berniger. “Critical projects often involve optimizing production by improving maintenance or quality assurance.”

For example, while Industry 3.0 dictated a set maintenance schedule, such as replacing parts A, B, and C after a set amount of time, Industry 4.0 uses predictive maintenance. By connecting machine parts to edge technology for real-time intelligence, factories can avoid potential production shutdowns.

The latest generation of industrial PCs enables this approach by supporting new levels of intelligence and connectivity. For example, the IPC964-512-FL can be outfitted with an AI card to supplement the on-board Intel® Core processor (Figure 1).

Figure 1. A robotic arm is connected to edge technology that detects deviations and predicts the need for repairs before problems cause downtime. (Source: Axiomtek)

Another smart solution SIs can provide for industrial PCs is better quality control, enabled by connecting cameras that can replace human eyes for visual inspections. A well-known tire manufacturer, for example, was experiencing an increase in customer complaints from defective tires. Not only was the company’s reputation suffering; it was incurring rising labor and transportation costs from the return of defective tires, cutting into the bottom line.

The company deployed a solution using computer vision and AI, optimized by the Intel® Distribution of OpenVINO toolkit and running on an IPC based on Intel technology. In just six months, tire returns dropped from 6,000 to a few dozen. Since the solution is highly scalable, the manufacturer has deployed it to additional production lines for inspection tasks.

“Human eyes are not made to look at the same object for eight hours a day, five days a week,” says Berniger. “But a camera connected to an IPC is very good at that (Video 1). When it’s deployed, we see detection rates going from 80 percent to 99.9 percent, with very few false positives. And the speed increases, in some cases cutting inspection time from a minute down to one to five seconds, depending on the job. Machine vision can detect defects and improve quality assurance for everything a factory makes or ships, helping it save money. Waste is very costly.”

Video 1. Industrial PCs can play an important role in quality control by connecting to machine vision. (Source: Axiomtek)

IPCs Bring SIs into the Future, Too

Industrial PCs not only bring manufacturing into the future, they also keep SIs relevant. Gone are the days when SIs could simply bank on having a specific set of capabilities, says Berniger.

“You need to differentiate yourself and add value,” he says. “My boss likens it to two body shops. They may have different levels of craftsmanship, but at the end of the day they both fix the scratch on your car. But if one body shop starts offering value-added activities and solutions, they’re no longer just a body shop; they’re an automobile expert.”

SIs who embrace the capabilities of industrial PCs become the experts that manufacturers need, meeting customers where they are, expanding their capabilities, and becoming their expert guides to Industry 4.0.

Video Analytics and Big Data, Together at Last

A conversation with Chetan Gadgil & Lerry Wilson @Intel, @splunk

What happens when you combine edge AI with metadata analysis in the cloud? The possibilities are endless. Previously unanswerable questions like “Are my factory workers staying safe?” or “How can I reduce losses in my retail chain?” suddenly come into focus.

Join us as we explain how to combine video analytics with virtually any data source to reveal new insights in this conversation with Chetan Gadgil from the Edge AI team at Intel, and Lerry Wilson from Splunk, a leader in Big Data analytics. We discuss:

  • Why edge AI is needed for a wide range of industries and use cases
  • How Big Data combines video and other data to reveal hidden patterns
  • How the Intel + Splunk team-up helps developers quickly deploy complex systems

Transcript

Chetan: How many people across my entire chain of, let’s say, 7-Elevens that I have deployed the solution in—how many people on an average entered without wearing masks between 5:00 p.m. to 7:00 p.m. on Saturdays? You can ask questions like that. OpenVINO is very good at providing the metadata, and Splunk can provide the time series, or the complex analytics part of it.

Kenton: That was Chetan Gadgil from Intel. And I’m your host, Kenton Williston, the editor-in-chief of insight.tech. A publication of the Intel® Internet of Things Solutions Alliance, insight.tech is your go-to destination for IoT design ideas, solutions, and trends.

Today’s show is all about the convergence between video analytics and big data. I’ll be talking with Chetan, as well as Lerry Wilson from Splunk. Did you know that Splunk was into video? Well, neither did I, so I can’t wait to learn more.

But before we get to that, a quick note that we ran into problems with Chetan’s audio connection, so you’ll hear a big drop in his audio quality halfway through the podcast. But you’ll want to stick around for his technical insights in the second half. With that, let’s get to it!

Lerry, welcome to the show. Really glad to have you here. Could you tell me a little bit about your role at Splunk?

Lerry: Kenton, thank you so much for the opportunity and the invitation. Yeah. I’ve been at Splunk for about four and a half years. I live in what’s called our Global Strategic Alliances Organization, and I specifically focus on what I like to call innovations. An innovation area is where we take a look and identify key technologies that can be brought into Splunk and make us more valuable into new markets. So, really excited to share what we’re doing with Intel.

Kenton: Excellent. Speaking of Intel, Chetan, could you tell me a little bit about your role there?

Chetan: Sure. Thanks, Kenton, for having me here. I’m Chetan Gadgil. I’m part of Intel’s Internet of Things Group. We just call it IOTG in short. Within IOTG I am responsible for what is called as “Edge AI at Scale.” So Edge AI is basically our terminology for our artificial intelligence applications at the Edge, which is not in the cloud—things like IoT use cases.

My job is to apply my technical expertise in deep learning, or artificial intelligence, and help our partners adapt our deep learning technologies in their commercial grade offers. Once that integration is done, then I also work within Intel, as well as with the partners, to help them scale these solutions in different use cases and markets.

Kenton: Very interesting. I want to come back to a little bit of discussion about Edge AI, because, to me, it seems like not necessarily what Splunk is known for. But, first, I want to ask a bigger-picture question. When I think of Splunk, I think about big data expertise and not necessarily video analytics, which is the topic of our conversation today. So, Lerry, can you tell me how in the world did Splunk get together with Intel to tackle this new field?

Lerry: It’s actually a great question, Kenton, because when you talk to people about Splunk, they’re probably used to hearing most often we do all types of data and big data—except video or except imagery. That’s a true statement, because certainly Splunk is focused on machine data, human-readable data, and correlating that with a variety of other data sources. But we don’t look at data from a pixel-by-pixel perspective. How we got together with Intel is really driven by customers. What customers are realizing now in this digital age is that actually a huge part of their big data set is actually imagery.

Again, what Splunk is really good at is taking that metadata and the information around the GPS location—the time at which that image was captured or that video was shot—and we can correlate that and bring that into our system. Now, you add in the additional data that OpenVINO creates from Intel, where you’re now actually creating inferences, and you’re identifying the things and targets that are important to your business. That also becomes a digital signature that can move into Splunk. Now we’re just bringing the digital side of imagery into our environment, helping you correlate it. So we’re expanding big data to actually include video now.

Kenton: That’s really interesting. So what you’re saying is essentially—answering the question that I was going to ask about Edge AI, which is that Splunk is not so much interested in storing and processing video as much as it is the metadata. Of course, the way you get that rich metadata is through doing analytics at the Edge to ascertain what it is that’s in your field of view. Chetan, I think this would be a great thing for you to speak to a little bit more. What is Intel doing in that area, and what is the OpenVINO thing that we just heard about?

Chetan: Sure. So, I’ll answer the second part of the question first, which is OpenVINO. OpenVINO stands for Open Visual Inference and Neural Network Optimization toolkit. It’s quite a mouthful, but that’s what the acronym stands for. What that means is, basically, if you think about some of the advances in deep learning. As an example, in 2018 AI crossed the threshold of being able to identify human faces more accurately than humans themselves. So, an AI can tell the difference between two faces much more accurately than humans. That was done in 2018.

What happens is, these types of AI algorithms—especially computer vision, but many others too—need a lot of processing. In order to do a lot of processing very fast, one solution is obviously to buy more and more powerful, expensive hardware, but that hardware is typically not very easy to deploy in the field or at the Edge.

OpenVINO actually is able to optimize the complex models and just look at the essential parts, because many of these complex models, the way they’re built is by brute force computing, but the entire model is not always required. So it is able to shrink the model down into the most essential parts and still retain the accuracy that customers require. Then we are able to run it on different types of Intel hardware very, very efficiently.

Everybody, of course, is very familiar with the Intel CPUs. But then within the CPUs there’s what we call as the Integrated GPU. There is another hardware called VPU, or Vision Processing Unit, from Movidius, a company we acquired a few years ago. And new classes of hardware that Intel is now also working on and releasing as we speak that are optimized for AI. Regardless of which target hardware your model needs to run on, OpenVINO provides you that single abstraction layer, so as a customer of Intel you do not have to rewrite your application. You just pick whatever hardware is the right thing for that particular job, and your software will still continue to run with the best possible performance. That’s what OpenVINO does.

The first part of your question was, how do I look at the relevance of something like being able to integrate Splunk, and what benefit that brings to the industry. What I believe is that Splunk is one of the most well-known, as well as the most powerful, tools that I have seen. Especially the types of things it does—being able to ingest data from different sources—plus a lot of existing integrations and a vast ecosystem of experts who already know how to use and deploy it. It means that, from the customer’s point of view, they already have a starting point if Splunk is already there.

What OpenVINO gives us is that ability to convert different kinds of information that used to be opaque, earlier. So video data just used to be files, NVRs, and things like that—you have to actually have people looking into these video recordings to know what’s happening. But now you don’t need to do that. In real time, OpenVINO can actually turn what used to be opaque data into more transparent metadata, and you can actually then analyze.

For example, you can now start asking questions to Splunk like, “Tell me how many people across my entire chain of, let’s say, 7-Elevens that I have deployed the solution in—how many people on an average entered without wearing masks between 5:00 p.m. to 7:00 p.m. on Saturdays?” You can ask questions like that. OpenVINO is very good at providing the metadata, and Splunk can provide the time series, or the complex analytics part of it. They complement each other very, very well.
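
The kind of question Gadgil describes reduces to filtering and aggregating metadata events. Here is a rough Python sketch, with made-up records standing in for the metadata an OpenVINO pipeline might emit and Splunk would index:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical metadata events: one record per person detected entering a store.
events = [
    {"store": "A", "time": "2021-03-06T17:15:00", "mask": False},
    {"store": "A", "time": "2021-03-06T18:40:00", "mask": False},
    {"store": "B", "time": "2021-03-06T17:05:00", "mask": True},
    {"store": "B", "time": "2021-03-13T18:30:00", "mask": False},
    {"store": "A", "time": "2021-03-08T18:00:00", "mask": False},  # a Monday
]

def avg_unmasked_sat_evening(events):
    """Average unmasked entries per store on Saturdays, 5:00-7:00 p.m."""
    per_store = defaultdict(int)
    for e in events:
        t = datetime.fromisoformat(e["time"])
        # weekday() == 5 is Saturday; keep only 17:00-18:59 entries without masks
        if t.weekday() == 5 and 17 <= t.hour < 19 and not e["mask"]:
            per_store[e["store"]] += 1
    return mean(per_store.values()) if per_store else 0.0

print(avg_unmasked_sat_evening(events))  # → 1.5 across stores A and B
```

In practice this aggregation would be expressed as a query inside Splunk rather than in application code; the sketch just shows how little information per event is needed once the video has been reduced to metadata.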

Kenton: Yeah, that makes sense. In fact, you mentioned something that I wanted to bring up, which is, I think the use of video has been growing dramatically in the last two years because of these capabilities. You’re talking about the ability to deploy analytics in a much more cost-effective and practical way.

I think that the pandemic has really accelerated that because there are all these new use cases. Things like, are people wearing masks? Are people maintaining social distancing? Even more advanced things, like contact tracing. So I’m wondering where you see the biggest need for video analytics right now, and if you see that changing as a result of the pandemic?

Lerry: I’ll go ahead and start with that response. We’ve seen a huge acceleration of some key trends that have happened during the pandemic. The greatest one, of course, is how do you protect not only the people that are employees or customers, but also the people that manually were sent out to take temperatures and do different types of things.

There’s this unbelievable connection in terms of how does technology help us now in the physical environment that this solution plays really well into. There’s an abundance of use cases, and throughout the rest of the time Chetan and I will talk about some of them. But I think the most important thing to recognize is that video and imagery plays an important part in an entire loop.

That first loop, like I said, is gathering data, or collection of information, or monitoring and observing an environment, and then being able to, again, use the inference models to actually find what you’re looking for very, very quickly and do the analysis with Splunk. But then as you go through the process and you come back out on the other end, we’re still in an environment where we as humans, before we flip a switch or turn something back on, there’s always a visual confirmation that everything has been done correctly.

I think that’s another important role that the infrastructure that people are investing in—the cameras—that they’re looking to do is just that final confirmation, for a human operator to say, “Okay, I see that things were done to assess this issue, and now I can mark this off comfortably and we can turn something back on, or we can continue with whatever process.” I think it’s the capture part of it, the observability, and the continuous piece of it, but also that taking advantage of all the technology that they gather, and then at the end making that confirmation visually of that environment that it’s safe to move forward.

Kenton: So, that’s real interesting, and honestly I’d like to dive into some of those applications. Like I said, I think there’s such a huge range of things that people are looking to do with video these days. I’m really interested to know where you’re seeing—both in terms of the specific applications, and also the industries that are deploying these applications.

Lerry: So, from a Splunk perspective this fits really well into our core area around security. Again, with this pandemic, and even before so, there’s a huge need to marry the cyber and physical data points. So, imagery is a great way to help secure—be that surveillance engine, be that observer—and that information is absolutely critical. I think that’s a foundational use case that we share already with Intel, the OpenVINO environment, and many of our customers. That’s truly horizontal. Then I think you can look across verticals, and you can start to see where other things are going in.

So, a specific industry—such as retail or manufacturing—where you’re looking at something being built and trying to make sure that the quality control measures are being followed. The combination of seeing and visualizing that information and being able to measure it against KPIs that you have for those processes—that’s really the power of this joint combination. Chetan, you want to talk a little bit about some of the other vertical use cases you guys are involved in and focus on?

Chetan: Lerry mentioned about the common use cases around security. At Intel, we also have a similar point of view in terms of the market opportunity. We call it situational monitoring, and it cuts across multiple verticals. It is not just limited to computer vision. Audio is another big use case—being able to detect audio signals and then be able to figure out what’s happening on metadata that can be generated out of classifying these audio signals. That’s another common use case.

In terms of situational monitoring, it really has applications in many, many industries. For example, in retail, loss prevention is a big use case—retail customers want to be able to operate their stores with maximum transparency and efficiency. In manufacturing, it centers on worker movements. Worker safety is another common concern from customers that they can solve very effectively with real-time computer vision.

Another category of use cases that we are prioritizing is what we call product inspection. Traditionally it was done visually—visual inspection, manual inspection. But now with computer vision, instead of just taking a random sample of some products, you can pretty much analyze every single product with AI, because all it takes is more compute power. You don’t need humans to make those quality decisions.
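As a rough illustration of why inspecting every unit beats spot-checking, this short Python sketch (using hypothetical defect rates and batch sizes, not figures from the conversation) compares the chance of catching a defect under each approach:

```python
# Illustrative only: probability of catching at least one defective unit
# when sampling vs. inspecting every item with automated vision.
# All numbers here are invented for the example.

def detection_probability(defect_rate: float, items_inspected: int) -> float:
    """Chance that at least one defective item appears among those inspected,
    assuming defects occur independently at the given rate."""
    return 1 - (1 - defect_rate) ** items_inspected

batch_size = 10_000
defect_rate = 0.001  # assume 0.1% of units are defective

# Manual spot-check: a random sample of 50 units per batch.
p_sample = detection_probability(defect_rate, 50)

# Automated vision: every unit in the batch is inspected.
p_full = detection_probability(defect_rate, batch_size)

print(f"sample of 50 units:  {p_sample:.1%} chance of seeing any defect")
print(f"all {batch_size} units: {p_full:.1%} chance of seeing any defect")
```

With these assumed numbers, a 50-unit sample catches a defect only a few percent of the time, while full inspection makes detection near-certain—which is the practical payoff of throwing compute at the problem.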

Another cool thing about product inspection is that it doesn’t have to be limited to the visual spectrum. X-rays can be analyzed, and so on, in terms of manufacturing. Another area where we are seeing big demand from customers is analysis of medical images: MRIs, CT scans, things like that. Even in these areas there are many types of analysis where computers are doing a much better job than experienced radiologists. So that’s another area—it is not as mature as some of the other things, but it is definitely maturing, and demand is picking up.

Kenton: One thing I think is particularly interesting about all of these applications—and we’ve talked about this a little bit already—is the fact that, like you said, it’s more than just vision, in the sense of putting up a camera that’s looking at the visual spectrum. It could be X-rays, but there could also be audio. There could be other things, like we mentioned at the top of the call, related to the positioning of things—GPS, or what have you.

So, I’m wondering, and I think this is probably a question for Lerry, if you could tell me a little bit more about how you are integrating these various data sources and drawing insights from a multimedia, if you will, data set?

Lerry: Absolutely. That’s what we get very, very excited about, Kenton. Again, we see these as very powerful data sources that, when combined with other types of data, will open up whole new insights and automation capabilities for our customers. What really makes Splunk unique in this area, as Chetan mentioned earlier, is our ability to quickly bring raw data streams in—any digital, time-stamped, human-readable data—and then search.

Splunk is really a search engine for machine data. So as you bring these other data sources in, and you start to query the data and information, you quickly can build visualizations or actions around the results of those searches that then continuously stream, and are made available, and can be pushed out to a wide variety of users. Everybody is looking at the same information to make the same decisions, and then you could even automate beyond that with different types of playbooks.

So there’s that unique capability to scale to incredible amounts of data, and then correlate it and make it easier for analysts to investigate and understand. And now you add the capability of, “Hey, we’ve actually found the things you’re looking for” that OpenVINO brings into the environment. Now you can run back from that point and figure out—how did we get to that fault, or that item, or that area of interest, and what can we do to improve or automate around it?

Kenton: Interesting. So, Chetan, this makes me want to come back to you. What I’m hearing from Lerry is that on the big data side you’re correlating and combining all these different types of data, but I’d love to hear a little more about the Edge side. I think you were talking earlier about how OpenVINO compresses things down in terms of making the AI algorithms manageable at the Edge.

I think a big part of what you’re talking about there is the idea—to get a little technical—that we’re talking about mostly pushing inferencing to the Edge. So, I think across the system holistically a lot of what’s happening here that makes this work is putting the right kind of intelligence in the right place, and the right kind of data in the right place. Would you agree with that?

Chetan: You’re talking about exactly the right types of issues that customers have. Number one, moving data around is very expensive, so it is always better to analyze data close to where it is generated, rather than trying to move it halfway across the world and analyze it in the cloud.

One example—just take a standard store. The pharmacy that I go to has probably got 40 or 50 cameras in a small area—and that’s just one pharmacy. Imagine a moderate-sized city: you’ll have millions of cameras, and there is no cloud in the world that can take all those camera feeds, move the data to the cloud, and analyze everything. First of all, it’s going to be extremely expensive and, second of all, it’s not even going to work.

So, having the analysis happen right where the data is being produced—where the cameras are—is going to be a lot more efficient. That’s the power of being able to shrink down the models, being able to provide more Edge-processing capabilities. We believe that over the next 10 to 15 years, as this technology matures, 80%-90% of AI inferencing will actually happen at or near the Edge, and not in the cloud.
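Chetan’s scaling argument can be made concrete with a quick back-of-envelope calculation. This Python sketch uses invented figures—the camera count, per-stream bitrate, and event size are all assumptions, not numbers from the conversation—to compare streaming raw video to the cloud against forwarding only edge-inference metadata:

```python
# Back-of-envelope sketch of the edge-vs-cloud bandwidth trade-off.
# Every figure here is an illustrative assumption.

CAMERAS = 1_000_000        # "millions of cameras" in a moderate-sized city
STREAM_MBPS = 4.0          # assumed bitrate of one compressed 1080p feed
EVENT_BYTES = 512          # assumed size of one JSON detection event
EVENTS_PER_SEC = 2         # assumed detection events per camera per second

# Option 1: ship every raw feed to the cloud for analysis.
raw_gbps = CAMERAS * STREAM_MBPS / 1_000

# Option 2: run inference at the edge and forward only metadata.
edge_gbps = CAMERAS * EVENT_BYTES * 8 * EVENTS_PER_SEC / 1e9

print(f"raw video to cloud: {raw_gbps:,.0f} Gbps sustained")
print(f"edge metadata only: {edge_gbps:,.2f} Gbps sustained")
print(f"bandwidth reduction: ~{raw_gbps / edge_gbps:,.0f}x")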

Kenton: That makes a lot of sense—particularly if I extend these considerations beyond the practicalities of moving data around to the privacy concerns related to it. Inherently, if you’re not transmitting information that contains any personally identifiable data, that helps protect privacy. But, of course, throughout 2020 a lot of people have become concerned about privacy, and about how public-facing video cameras are capturing their data. I wonder if you could speak a little bit to how Intel and Splunk are addressing these privacy concerns?

Chetan: Yeah, that’s a great question actually. There are three things that happen when you process information close to the Edge, where the cameras are. Number one, you’re not moving data where it doesn’t need to be—for example, data doesn’t need to be copied to a cloud where it can be hacked. You’re not opening up more attack surface for data that is supposed to stay private—that’s number one.

Second, with AI inferencing, it’s better than humans taking a look at what other humans are doing, because inherently if an AI is looking at something, it is private. Basically, an algorithm is looking at what’s happening. It’s not some human sitting in a security office observing other people and how they are behaving. That’s another area.

In addition to that, most of these algorithms work on visual features. When people talk about facial recognition and use cases like that, the way it works is that the algorithm classifies a set of features, and those features just happen to match one person in that particular case. It’s not like the computer really knows who that person is—unless you physically map that person’s information to what the algorithm is doing. That’s the second protection that AI solutions bring in terms of privacy.
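The feature-matching idea Chetan describes can be sketched in a few lines. This is a toy illustration, not production facial recognition: the embeddings are invented three-dimensional vectors, and the gallery keys are opaque IDs with no identity attached.

```python
# Minimal sketch: a recognition model compares anonymous feature
# vectors (embeddings); it "knows" a person only if someone externally
# maps an identity onto a stored vector. All vectors are made up.

import math

def cosine_similarity(a, b):
    """Similarity of two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Gallery of enrolled feature vectors, keyed by an opaque ID --
# no name or identity is stored alongside the features themselves.
gallery = {
    "subject-001": [0.1, 0.9, 0.3],
    "subject-002": [0.8, 0.1, 0.5],
}

probe = [0.12, 0.88, 0.31]  # features extracted from a new frame

# The "match" is just the closest feature vector, not a known person.
best_id = max(gallery, key=lambda k: cosine_similarity(probe, gallery[k]))
print(best_id)
```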

Third, it goes to the application, and this is where Lerry can add some of his context as well. The application still has to do its job, which is making sure that of all the correlated bits and pieces of information you’re managing, you’re only retaining the minimal required bits. That way, if there is an attack or information is stolen, it does not compromise things that are not meant to be disclosed. For example, social security numbers and those types of things—at the Edge, nobody even stores them. So there’s not even any opportunity for attackers to steal that information if you have trust in that Edge AI.
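The “retain only the minimal required bits” principle might look like a simple allowlist applied before any event leaves the edge. This is a hypothetical sketch; the field names are invented for illustration and don’t come from any Intel or Splunk product.

```python
# Minimal sketch of data minimization at the edge: before an event is
# forwarded for correlation, drop every field not on an explicit
# allowlist. Field names here are hypothetical.

ALLOWED_FIELDS = {"timestamp", "camera_id", "event_type", "confidence"}

def minimize_event(event: dict) -> dict:
    """Keep only allowlisted fields so sensitive data never leaves the edge."""
    return {k: v for k, v in event.items() if k in ALLOWED_FIELDS}

raw_event = {
    "timestamp": "2020-11-05T14:03:22Z",
    "camera_id": "cam-17",
    "event_type": "zone_entry",
    "confidence": 0.94,
    "face_crop_jpeg": b"...",   # sensitive: never forwarded
    "badge_number": "B-4421",   # sensitive: never forwarded
}

# Only the non-sensitive fields survive to be forwarded upstream.
print(sorted(minimize_event(raw_event)))
```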

Lerry: Kenton, that’s a great question, and it’s something that Splunk takes very, very seriously. Just to give you some context, when COVID-19 hit we actually talked to about 200 senior executives, and without a doubt the number-one issue regarding any type of data—whether imagery data or even machine data, IoT sensor data, any of it—was privacy. Splunk’s roots are in security, and we understand that.

First and foremost, you have to recognize these companies are working within different regional requirements in terms of how data gets handled, and then it’s up to them to really manage that. How Splunk helps is making sure that they know the data is theirs. They own the data. They are responsible for that data. We produce that data. In terms of exposing that data, we work with our customers very closely on who sees what at what point, because most of the time it is just data.

It’s not information about people—it’s not that information. It’s really just the data pieces that are going through. But at some point, if you’re responding to an incident based on the requirements that the customer has set in place and the protections that they’ve set in place, you want to be able to dive into that information and understand where the sources are and be able to manage it. In a physical environment that becomes a very, very sensitive issue that our customers are very, very attuned to.

Kenton: I’m really glad to hear from both sides that these issues are being taken so seriously. Like I said, these concerns have been in the headlines a lot this year, and I’m glad to see both Splunk and Intel taking the right steps, I think, to address them. The topic I’d like to close on is probably the most important one, which is the dollars and cents.

As we’ve been talking about, with the pandemic there’s been a real acceleration of video use cases, and a real need for all kinds of new technologies and techniques to ensure the safety, security, and health of the public and employees and all the rest. But, at the same time, the pandemic has also put a real crunch on a lot of organizations’ budgets. So, I wonder if you can talk to me about how these things can be deployed in a cost-effective manner with a very quick ROI in these budget-constrained times?

Lerry: In that same discussion we had with all of these senior executives, this was exactly the point they brought to our attention as well: “Look, we are very constrained. If we’re fortunate, we’ll break even on our budgets for a little while.” The important thing about what we’re doing together is recognizing that a lot of the investment and infrastructure has already been made. So we’re just helping them, in many ways, make better use of the investments they’ve already put into camera infrastructure—many camera-rich environments. I don’t have any exact numbers, but I think we can all agree there’s a significant amount of video that never gets looked at because of the human cost associated with it.

If we can demonstrate and show them the capabilities we’ve jointly produced, that could be a great way for them to leverage that previous investment. I think the most important thing is recognizing that your digital journey is indeed a journey, and leveraging the data and the infrastructure you already have is where our customers are going. Be smart about recognizing that—make these investments now, which are relatively small compared to the investments already made, and start to correlate these different silos of data. Then they’ll be much more efficient and ready to optimize and make further investments as they move forward.

Chetan: Yeah, I agree with that, Lerry. We’re seeing the same thing—most customers are looking to leverage their existing investments rather than put in new ones. And how do you do that? With this type of integration: the install base of Splunk is vast, and the install base of Intel hardware is also quite vast.

So, we get that benefit to the customers by saying, “Okay, you don’t have to start by replacing or buying completely new infrastructure from day one.” You can start gradually by adding new capabilities to existing infrastructure, and the way you do that is by taking OpenVINO as the foundational AI element so you can leverage hardware to the best possible extent—leverage the software investments and expertise that many of these customers already have with Splunk—and then integrate the two.

Then we show them how to create new solutions as the needs of the business evolve. As you pointed out, right now we’re in a stage where everything has slowed down. Most businesses are looking for the least amount of disruption possible. But this is turning around. In 2021, as we start seeing more business turning around, companies will start looking at newer use cases and creating new experiences for their customers. Because one thing the pandemic has changed is that we’ve realized new experiences are needed in many types of scenarios. So how do you do that? Now we are pretty well set to show them a path to getting there.

Kenton: Yeah, absolutely. So—love to end on a positive note. Let me just give you folks some opportunity if there’s anything we haven’t touched on already that you’d like to add to the conversation.

Chetan: On an ending note, one thing I could say here is artificial intelligence is really a game-changing technology. What this opens up is a new set of opportunities. I mentioned computer vision is the most obvious one, or we can say the most visible one, by definition.

On the other hand, if you start looking at how these things are being combined, integrated, and applied in different ways—one use case I did not mention was what we call control optimization and autonomy, meaning autonomous mobile robots: being able to combine the control aspects—how something behaves—with what it observes. That’s pretty much what humans do.

Those types of integrations are becoming very much required now, because then you can send robots into dangerous areas, or have them distribute medicines in a quarantined location. Use cases like that, which we never would have imagined a year ago, are now being talked about. That’s one big trend in terms of how people are thinking about these applications.

The second thing is, how do you improve the ability of software or hardware manufacturers to create these types of applications? We have to break down the barriers. Instead of spending years on development cycles, can we bring it down to months, or weeks? What is the way to do that? What other development tools can we provide? That’s the second part—the opportunity for the industry.

The third part is, how do you do it very, very cost-effectively? To your point, [inaudible] customers will always have budget constraints, because at the very least there’s an opportunity cost. If you’re spending that “X” here, you don’t have that “X” to spend somewhere else. So how do we provide a range of options for these customers to deploy the solutions for their needs? Any solution should be able not only to scale up, but also to scale down to the requirements of the smallest customer.

Lerry: Chetan, that’s great. I think, in closing, the thing we hopefully conveyed today is the excitement and the opportunity around the technologies we’ve combined. But how do we get these into market—how do we make these use cases come alive? Intel and Splunk together are going to be relying heavily on a phenomenal ecosystem. Chetan referenced the manufacturers that will be able to provide boxes that are preconfigured to set these systems up in a customer environment.

It’s also really important that our systems integrators and the development community embrace OpenVINO and Splunk together, start to understand what is possible, and see the opportunity for them to play in this rapidly growing market—to come into an area that has not been explored. We’re working very diligently—and we’re very excited—to bring in the entire ecosystem, from the developer side, to the systems integrators, to the go-to-market side, to make sure that customers not only get this great technology, but also get it implemented correctly, get it operating efficiently, and can realize its value as quickly as possible.

Kenton: That’s great. Well, that just leaves me to say thanks to both of you for joining. So, thank you, Lerry, so much for being with us today.

Lerry: Thank you very much, Kenton. Appreciate it.

Kenton: And, Chetan, much the same to you. Thanks so much for your really valuable insights.

Chetan: Yeah. Thanks, Kenton. Thanks for having me here.

Kenton: And thanks to our listeners for joining us. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat podcast. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Keys to SI Success in 2021

A conversation with Tom Digsby @Tech_Data


Let’s face it: 2020 was a tough year. But systems integrators have a lot to look forward to in 2021. The tech industry’s rapid response to the global pandemic means that SIs can now offer their customers a wide array of return-to-work solutions. And as life gets back to normal, those same customers will be eager to continue the digital transformation efforts that were accelerated by the crisis.

Join us as we explore the opportunities with Tom Digsby, Senior Manager of the IoT and Data Solutions Group at Tech Data, a leading global distributor. We discuss:

  • How SIs can position themselves for rapid growth in 2021
  • How to quickly onboard complex new technologies like AI
  • Why distributors are evolving into aggregators, and what this means for SIs

 

For more on this topic, read IoT Solution Factory Builds System Integrator Growth.

 

Transcript

Tom Digsby: “What is your killer feature? Why would I buy it from you versus partner X down the street?”

Kenton Williston: That was Thomas Digsby, an IoT expert from Tech Data, and I’m your host Kenton Williston, editor-in-chief of insight.tech. I’ll be talking with Thomas today about the ways systems integrators can find success in these uncertain times. Thomas, welcome to the show. What’s your role at Tech Data?

Tom Digsby: Sure. Hi, my name is Tom Digsby. I’m Senior Manager of the IoT and Data Solutions Group. I manage a group of vertical consultants and technical consultants, and what our team does is interface with partners. And when I say partners, I mean resellers. So, what we do is help our resellers understand IoT in a vertical context. Our focus is on healthcare, smart cities, industrial manufacturing, and also retail and commercial.

And we kind of have a bit of a horizontal as well, called smart buildings or smart spaces. Tech Data is a solutions aggregator, and we offer a lot of value beyond just taking orders. We have our Practice Builder methodology, which helps partners take a solution to market—we show them how to do that in less than 90 days. We have the ability to bring new technology updates for solutions. We have prepackaged solutions that you can take to market today. And we also have a vast ecosystem of partners that we can marry up with each other.

Kenton Williston: That’s all really great. And I think there are some really important things that you mentioned there. The first thing that stands out to me in what you said just now is the markets that you serve, including things like smart buildings. And of course, it’s not a surprise to anybody that building occupancy and all kinds of other things have been seriously impacted by the pandemic. So, systems integrators, I think, regardless of what market they’re serving, are being asked to very rapidly deliver all kinds of new solutions—fever checking, and contact tracing, and mask compliance, and all this kind of stuff—on a very, very quick-turn basis. So, I’m wondering, from your perspective, how this has been impacting the business that systems integrators have, and where you see things trending going into next year.

Tom Digsby: Yeah. Great question. We have seen end users place their purchasing projects on hold because of the pandemic, to preserve capital that they may need. The systems integrators have really had to shift their focus, and we’ve helped do some education around COVID and the return to work, right? As a lot of these buildings are empty now, there have to be some safety measures put in place, and we have some solutions—about 20 of what we call COVID return-to-work solutions. I’ll give you a few examples.

Temperature pre-screening, right? Someone comes into the building, you can grab their temperature and make sure they’re good. Telehealth—having a virtual conference with your doctor. Air quality monitoring, and social distancing alerting as well. Digital signage is another one that’s pretty critical for giving people the information they need going into a building. I think when we’re looking at 2021, those are really going to be some of the focus areas for a systems integrator. Those kinds of solutions make people feel safe, and when people feel safe, they kind of return to the normalcy we knew before the pandemic. It will take a good portion—I would say the first half of next year—of talking to companies that are returning to work and need to put some of these safeguards in place.

Kenton Williston: So it sounds like systems integrators should not expect to return to their previous lines of business in the near future—that there’s still going to be a lot of focus on the return-to-work and health-and-safety elements, and that it’ll be strategically wise to plan for that going into next year.

Tom Digsby: Yeah, absolutely. I mean, there’s still going to be a little bit of integration work—finishing up the projects they already have in flight—but there are new opportunities in new ways of returning to work. For instance, we had a partner approach us and say, “Hey, we need to look at the office kind of like a hotel for the desks. I want to make sure that we’re cleaning the desks. I want to make sure that there’s not more than six people in this one area, and there’s not more than X number of people on the floor.” I think he called it desk hoteling, if you will. So, that’s an example of returning to work and some of the things that can be put in place technology-wise and planned for now, so that you can be ahead of the curve and actually talking about and selling those kinds of systems.

Kenton Williston: Makes sense. And what about the role that systems integrators play in a solutions landscape? So, what I mean by this is, there is an ecosystem, it starts with the technology providers, let’s call them OEMs, ODMs, those sort of folks. Distributors like Tech Data, then take those solutions and deliver them to systems integrators who in turn work with their end customers. Do you see that food chain, so to speak, evolving in any way, either because of the response to the pandemic or for other reasons?

Tom Digsby: Yeah, we at Tech Data take our solutions aggregator role very seriously, and we think it’s more critical than ever. We have been more and more involved in bringing solutions to market rather than just buying and shipping a product, right? Our expertise is vetting the vendors and understanding the solutions that can be aggregated and brought to market. We have what we call an IoT solutions catalog—we can dive into that. We have probably 60 or so solutions in the catalog now, organized by vertical. We see the landscape changing a little bit: as we all know, no single vendor, OEM, or partner can deliver everything that one end user needs, right? We have an extensive ecosystem of partnerships with a lot of different types of skilled partners. They could be someone that implements, or goes and does an assessment at a client site.

We can connect that partner with a person that’s trying to pitch the value to an end user. And when you’re looking at the solution, it has to have a business outcome. If the solution doesn’t have a business outcome, it’s a science project, and no one’s buying a science project. The business outcome is usually analytics- or AI-driven, and we have ecosystems of partners that can help with that. So our value as a distributor, in helping with the changing market—especially when it comes to IoT and data solutions—is not only providing the technology, but giving you a blueprint for how to deliver the solutions in our catalog (or you can bring your own and we’ll help you with that), and then helping you drive the business value through the analytics side of the house.

Kenton Williston: So that’s all really powerful. One thing actually stood out more than anything. In a way it was just a single word—the word aggregator. What in the world is an aggregator? How does that differ from a distributor?

Tom Digsby: Oh, that’s great. So, a distributor has relationships with a lot of different vendors, right? And their primary role is to buy a product, make a little bit of margin, and ship the product. We have solutions specialty practices—four distinct ones that we’ve invested a lot of money into at Tech Data. One is security, one is cloud, and analytics and IoT are together, as you can imagine. Those four specialty solutions practices are all focused on: how do we deliver the value? How do we educate our partners so that they can sell solutions rather than just ordering a point product? A lot of times when we get a call from an end user—sorry, a partner—they’ll say, “Hey, I need 16 tablets or 160 tablets.” And you’ll say, “Oh, well, what are you going to use those for?”

Tom Digsby: “Oh, we have a blah, blah, blah solution that we’re going to take to market.” And you start digging into it, and it becomes part of a bill of materials that someone needs to fulfill a technology need, a business outcome need. So, we dug a little bit more, and it turned out they wanted the tablets to be able to access information from the manufacturing floor. We dug a little bit more and discovered, “Hey, you need some sensors, you need some more information.”

And we basically cobbled together a solution for them as part of the business outcome, to say, “Oh, you need to centralize this information. You need to be able to deliver it in this way. And you need to be able to see the data on those 260 screens, right? That you just wanted to order.” They were so appreciative—that’s kind of what spawned and helped the Practice Builder. The Practice Builder has been around for about 10 years at Tech Data, and we’re doing it in each one of those disciplines: security, cloud, IoT, and analytics. So, our value is teaching our partners how to solution-sell—what is that business value for the end user—not just buying product and shipping it.

Kenton Williston: So, I take it, this Practice Builder service you’re talking about, it sounds like there’s a whole toolkit there to help systems integrators succeed. Can you explain to me what exactly this Practice Builder service is?

Tom Digsby: Oh yeah. Yeah, sure. So, it’s two parts. One is we teach the value of what the Practice Builder is and what the timeline looks like. We can help a partner take a solution to market—whether it’s ours or theirs—in less than 90 days. And when I say take it to market, everything is included. We know what your marketing looks like. We know what the operations within your business look like. We know how much money you’re going to need, and how to sell it, based on some business assumptions that we make.

We have a business tool that we use, and what that simulator does is sit down and understand the pricing, the cost of goods, your additive services, your third-party integration, and your OEM prices from all the different vendors that you’re aggregating. It’s one place to look at the whole picture of, “Hey, what does this cost, and how much margin can I make out of it?” All in one place. So the tool and the process work together, hand in hand, and we’ve actually won some awards for the Practice Builder.
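The kind of roll-up Tom describes the simulator doing can be pictured as a simple cost aggregation. This is a toy sketch with invented line items and prices—it is not the actual Practice Builder tool:

```python
# Toy sketch of a margin roll-up: aggregate costs from multiple
# vendors and services into one view of price and margin.
# All line items and numbers here are invented for illustration.

line_items = [
    {"item": "edge gateway (OEM)",      "cost": 1_200.00},
    {"item": "sensors (OEM)",           "cost": 3_400.00},
    {"item": "analytics software",      "cost": 2_500.00},
    {"item": "third-party integration", "cost": 1_800.00},
    {"item": "additive services",       "cost": 2_100.00},
]

total_cost = sum(li["cost"] for li in line_items)
sell_price = 14_000.00  # hypothetical price proposed to the end customer

margin = sell_price - total_cost
margin_pct = margin / sell_price * 100

print(f"total cost: ${total_cost:,.2f}")
print(f"margin:     ${margin:,.2f} ({margin_pct:.1f}%)")
```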

Kenton Williston: Wow. Well, that makes sense. It just sounds tremendously useful, and two things come to mind for me. One is the aspect of how no one company is going to be the expert at everything. So, it sounds like a big part of what you’re doing is bringing to bear the technologies and expertise from a lot of different sources, so a systems integrator can focus on their own specialized expertise, but be able to leverage best-in-class for all the other areas that they’re not necessarily experts in and don’t want or need to be experts in.

Tom Digsby: I think that’s a great summary. Absolutely.

Kenton Williston: Yeah. And the other thing that I think is really important about this is how the technology has progressed. You’ve mentioned AI a couple of times already, and that’s a perfect example of a technology that can be incredibly complicated—doing things like detecting fevers, or contact tracing, or anything like that. These are the sorts of applications that are challenging to develop. So, having a complete solution that can not only execute these kinds of tasks, but also integrate with a larger analytics framework to deliver business value—this is very, very complicated. Having access to these more complete end-to-end solutions, I imagine, is tremendously valuable.

Tom Digsby: Yeah, our partners are very appreciative of the value, and all we ask in exchange is, “Hey, we’re going to teach you this methodology, an award-winning program, and all we ask you to do is source the equipment from us—the software, the hardware, the things that are needed to put the solutions together. That’s all we’re asking in return.”

Kenton Williston: A fair trade. The other thing that I have a question about here is, we’ve been talking about how Tech Data can source technologies from a lot of different providers, and how it’s extremely valuable—especially in these times where there’s a lot of uncertainty and the market has shifted very rapidly—to have a ready-to-roll, end-to-end solution. But I imagine there’s still a lot of customization needed to address a particular set of customer needs. So, can you talk to me a little bit about how you work with systems integrators to custom-tailor these technologies for their end customers?

Tom Digsby: Oh, sure. So let me start by going through a little bit of the Solutions Factory process. As we bring together what we think is a great solution package, we bring it through our Solutions Factory process, and that’s where we vet the vertical industry and the aggregation of the technology. We make sure that the business outcome is there, that it has been deployed, and really, what’s the ROI? Because if there’s not an ROI, no one’s going to buy it. So our Solutions Factory process really looks at a solution before we even announce it to our partners.

And we also have baked in the ability to roll out the solution with about 80% of it already baked. If we look at… Let’s take equipment manufacturing, for example, okay? You can take the equipment manufacturing and say, I need sensors, or, you know what? I’m going to talk to the equipment directly through the PLC card.

We leave in that flexibility, so you may need to add some sensors and talk to the PLC card at the same time. So, how can you tweak the solution so that it fits the end user that you’re actually talking to? If it’s, let’s say, a smart parking solution, the number of cameras matters: they may have existing cameras, and we may need to add 20% more cameras. So, having the ability to aggregate all of that and be able to leverage existing equipment or add new equipment is key, and the business outcome is what’s driving the need for the solution. The flexibility in tweaking and modifying is what the systems integrators really appreciate when we’re talking about an end-to-end solution like that.

Kenton Williston: That totally makes sense. But I’m wondering about the other piece, which I touched on briefly earlier, which is the way different systems integrators have different areas of expertise. I’m betting that part of what you do is not just bringing together these technologies from different sources, but I’m betting you also help systems integrators find each other and other kinds of service providers to fill in wherever they don’t have the right expertise. Am I guessing correctly?

Tom Digsby: Oh, absolutely. Absolutely. So, when I talk to people about our different types of partners, or the skills of different partners, I often draw out three circles: one on the left, one in the middle, and one on the right. For the one on the left, I usually talk about implementation and assessments. So, if you need to go out and assess an environment: Where should the cameras be placed? How many cameras need to be placed? How far apart are the cameras that need to be capturing the information? How are you going to aggregate that camera data? How many gateways do you need? Do you need switches? What kind of equipment is in place today? They may have a vendor preference that we need to take into consideration when we’re looking at that. So that’s the implementation and assessment group. The resellers are, “Hey, I have an opportunity. I’m really good at creating demand. I can get face-to-face with a customer, and I need things to sell.”

So, we have over 60 solutions in our catalog that are vertically focused, right? For IoT and analytics outcomes. So, those are the VARs. Those are the folks that are really good at selling and identifying opportunities, and then matching up the technology with what it is the client needs. And then on the right-hand side, the third circle is all about that business outcome. So, what is it that we need to capture? How is it that we need to capture it? It could be dashboards. It could be video feeds. It could be learning from the video itself and doing some AI interpretation of it.

It could be machine learning. So the analytics, AI, and machine learning folks are a different group of partners that we have. So: assessments, what I call resellers or VARs, and analytics and AI-type partners. We can cross-match. So, if an organization is looking for a skill set in any one of those three that it doesn’t have, we have a vast ecosystem and contracts with partners that can deliver those kinds of services. So, it’s really just a matter of a little bit of speed dating, and introducing them and saying, “Hey, guess what? You’re doing this, and you have a need for that.”

Kenton Williston: Exactly. Exactly. Yeah. So, well, I’m wondering if I can add another axis to your three-circles chart here. Because we talked about how people can be segmented in terms of their business focus, whether they’re a VAR, for example, or something else. But there’s another axis that’s gotten a lot of attention over the last couple of years, which is how operational technology, the machine sensors or security cameras or whatever, is converging more and more with information technology, that sort of classic data center, cloud, all that kind of stuff. So, do you see that as really happening? And if so, how is it impacting systems integrators that are coming from each of those backgrounds?

Tom Digsby: Yeah, sure. So, we call it OT and IT. So, it’s the convergence of that, right? We look at OT and IT bringing the power to the customers; that’s kind of the phrase we use. That’s to say that they can deliver the control to the customer. The IT team now has sensor and machine data, and can now apply the normal processes, systems rigor, and safety from a security perspective that the IT folks already have.

So integrating the OT and the IT together and operationalizing that very much requires the types of partners that we have as well, to interface with them. So, when you look at the OT management aspect, they really reap the benefits of the technology that comes from IT. And when we’re talking about efficiency and knowledge, we’re looking at being able to schedule maintenance without impacting the bottom line, because now they have more real-time information from sensors and cameras and such. So, bringing that operational side of the house to bear here is really a skill that can be leveraged almost immediately, with all kinds of things that we talked about already.

Kenton Williston: So there, I would argue that a lot of what you’ve just described is what I would consider to be digital transformation. So I’m wondering how you see that concept factoring in here, especially because there’s been a lot of talk about the pandemic accelerating digital transformation.

Tom Digsby: Yeah. Yeah. I think you’re right. I think it is a layering kind of process. Digital transformation is definitely… It’s a multi-step process. And you’re looking at improving the ability to talk to the equipment, or learn from the equipment, get the data from the equipment, and then be able to autonomously monitor the plant efficiencies, for example, in a manufacturing environment. Once you have that, all kinds of things open up. When you have that base level of automation, which some plants don’t have right now, you can gain efficiencies, but more importantly, you can also create revenue growth. And when I say revenue growth, I mean, if you have certain machine data, and you’ve gathered it over time, now that you’ve transformed your environment, you can actually monetize some of that data and put it into data sets. And you can actually offer that as a different revenue stream for the same kind of industry that the partner, the end user, is in.

So, the end user can do that, or the SI can do that on their behalf, and then co-market the ability to take that data that’s learned and sell it as a service. So, one can monetize the data after you’ve gone through a digital transformation. What I mean by that is, if you have a certain kind of machine, let’s say more on the manufacturing floors, you’ve got algorithms, you’ve got things built in, you’ve got this kind of equipment. If someone has the same kind of equipment, they can learn from that, and they will pay you for that, so they don’t have to go through the same painstaking process themselves. And if they know what kind of dashboarding and operations you’re looking at, and they can just tweak the information and the algorithms to make things better for their machines, they’ll actually pay for that.

Kenton Williston: Interesting. And that leads me to something else I wanted to ask you about, which is just the concept of agility. So, I think it’s fair to point out that we’re in some very uncertain times. Obviously the pandemic created a lot of uncertainty, but there’s so much more. As we’re recording this, the outcome of the election is unclear. It’s kind of like everywhere you look, there are just lots of question marks. And I think, for systems integrators to succeed in this kind of environment, having a lot of agility is really important. And I’m wondering what you see as the keys to succeeding in this environment. Is it really about having differentiated offerings, being able to separate yourself from the pack somehow? Is it having access to these 90-day deployment kinds of schedules? What’s the key to success here?

Tom Digsby: Yeah. It’s a great segue into what I coach our partners on in our Practice Builder. You hit on a few topics there: having the right technology, having the right value prop. As part of our Practice Builder, one of the things we really home in on, and make the partner discern for themselves, is what we call their killer feature: “Why would I buy it from you versus partner X down the street?” So, having that differentiation. If you’ve got 16 years of manufacturing experience, people want to know that. So, in your marketing materials, in your differentiation, you’ve got to be able to say that. And we capture that and hone it to an even finer point in the Practice Builder. The Practice Builder takes out the guesswork. You do it all on paper.

And what we’re doing is looking at repeatable solutions, because no one wants a one-off solution. You want to be able, as a reseller or a systems integrator, to say, “Hey, I could sell at least 80% of this over and over and over, right?” So, that’s what we call a repeatable solution: the solution, the whole profile, and all the aspects of the services piece needed to actually deliver it.

We actually have the services piece, the cloud consumption, the hardware, any third-party assessments: all of that gets modeled in the Practice Builder’s business simulator. So, it kind of takes the guesswork out of it. If you can do it virtually and sit down with us and do that, it takes out a lot of guesswork, but it also hones your message to a very sharp point, almost like a dart going into a dartboard. It takes the guesswork out of “Well, what would we do here?” and really capitalizes on your skills as a systems integrator: What business value can you deliver to the market?

Kenton Williston: Makes sense. And this leads me to ask what Tech Data itself is doing to continue improving your value proposition, and in particular, how you’re leveraging your relationship with Intel.

Tom Digsby: That’s a great question. I mean, there’s a lot of unpredictability, just like you said, right? We’re always looking at our role as an IoT solutions aggregator, gaining insights from vendors like Intel and the suppliers that we buy from. We look to strengthen our knowledge. We were having a knowledge transfer just the other day about edge processing and what the software from Intel looks like. OpenVINO was one of our conversations. So, we were actually looking at some of the technologies and transferring that knowledge.

And as I mentioned, I have some technical resources on the team. They were looking at it and going, “Wow, we could apply that here. And we could apply this here.” So, as our partners start to come to us, we share that knowledge and say, “You have a solution that’s architected this way. Have you thought of replacing it, or looking at it in a different way, bringing in the value from different vendors like Intel, and looking at it from the industry perspective: What do the AI analytics look like? What does the processing for machine automation look like?”

So, as you’re looking at it, we’re working with Intel to make sure that we’re identifying the solutions, and we’re mapping them to the problems and the business outcomes from the catalog that we have, so that our partners can leverage the technology and our expertise and really go to market. We support our partners in that way, and they appreciate our value.

Kenton Williston: I’m sure. I’m sure. So, we’re getting close to the end of our time. I’m just wondering if there is anything I’ve overlooked that you wish I had asked you?

Tom Digsby: Well, I think partners can come to us with an idea; it doesn’t have to be one of our solutions. As I mentioned, we have our solutions catalog and we have the Practice Builder. If you bring us a solution, we can still work with you. Just last week, we had a partner bring us a solution that revolved around an SAP environment. And I was like, “Oh yeah, we can absolutely apply the same kind of methodology and the same Practice Builder.”

And we did a Practice Builder with them last week. So, it doesn’t have to be our technology. If you have a solution that you want to bring to market, and it has distinct business value, and someone will actually buy it, and you’ve implemented it or need to take it to market in a repeatable fashion, we’ll work with you. So my little five-minute spiel here is that we can help you take to market whatever it is you want to go to market with. That’s our value from Tech Data.

Kenton Williston: Wonderful. Well, with that, I’ll just say thanks so much, Thomas, for joining us today. Really enjoyed talking to you, and I’m sure our audience will enjoy this conversation as well.

Tom Digsby: Yeah, thanks for the time, Kenton. Appreciate it.

Kenton Williston: And thanks to our listeners for joining us. If you enjoy listening, please support us by subscribing and rating us on your favorite podcast app. This has been IoT Chat Podcast. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Microgrids Power the Renewables Revolution

What would it take to minimize the use of carbon-based energy and maximize renewables across an entire community? You might start with microgrid technology. And that’s what one Caribbean island did—with beneficial results across a spectrum of metrics.

Recognizing the need to generate electricity from renewable sources instead of fossil fuels, the island’s owners and power system operator implemented an astounding transition: moving the island from 100 percent diesel power to greater than 90 percent renewables with microgrid technology. Its hybrid wind- and solar-powered electricity has radically improved sustainability, cost, and overall reliability.

Microgrids can accelerate the conversion to renewable energy by letting users mix and match distributed resources like wind, solar, storage, and hydropower technologies.

The conversion didn’t come without challenges. For one, integrating different energy technologies and equipment can be complex and costly because they are not designed to work together. For example, something as “simple” as adding a battery to a photovoltaic (PV) system requires sophisticated control algorithms typically implemented by a PLC programmer or SCADA developer to custom-engineer the overall solution.
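To make the idea concrete, the heart of such a control algorithm can be as simple in concept as a charge/discharge dispatch rule that decides, each control interval, whether the battery should absorb surplus solar or cover a shortfall. The Python sketch below is purely illustrative; the function name, thresholds, and power ratings are invented for this example and are not taken from Spirae’s or any vendor’s implementation:

```python
# Illustrative battery dispatch rule for a PV-plus-storage system.
# All names and thresholds here are hypothetical, for explanation only.

def dispatch(pv_kw: float, load_kw: float, soc: float,
             soc_min: float = 0.2, soc_max: float = 0.9,
             rate_kw: float = 50.0) -> float:
    """Return battery power in kW: positive = charging, negative = discharging."""
    surplus = pv_kw - load_kw
    if surplus > 0 and soc < soc_max:
        return min(surplus, rate_kw)      # soak up excess solar
    if surplus < 0 and soc > soc_min:
        return max(surplus, -rate_kw)     # cover the shortfall
    return 0.0                            # hold: battery at a limit, or balanced

# Midday surplus charges the battery; an evening shortfall discharges it.
print(dispatch(pv_kw=120.0, load_kw=80.0, soc=0.5))   # 40.0
print(dispatch(pv_kw=0.0, load_kw=60.0, soc=0.8))     # -50.0
```

A real system layers on forecasting, inverter limits, grid-code constraints, and protection logic for every device on the microgrid, which is exactly the custom engineering effort the article describes.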

The project required the integration of incompatible equipment from numerous manufacturers, such as wind turbines from one supplier and dozens of inverters from another. But it overcame these challenges with help from Spirae, LLC—a company that develops solutions for integrating renewable and distributed energy resources within microgrids and power systems—and its WAVE Microgrid Control Software.

“What we do is make microgrids plug and play from an equipment perspective, so you can use any manufacturer’s equipment,” says Sunil Cherian, founder and CEO of Spirae. “Our mantra is localize, personalize, and decarbonize. This means configuring and operating local equipment to maximize renewable energy via digitization.”

By standardizing and streamlining the configuration, deployment, and operation of microgrids, the company’s WAVE solutions bring distributed energy benefits within reach for most energy consumers (Figure 1).

Digitization enables municipalities to optimize usage, reduce costs, and lower carbon footprint with renewable energy.
Figure 1. The route to the green city of the future: localize, personalize, decarbonize. (Source: Spirae)

Microgrid technology helped one community transition from 100% diesel power to more than 90% renewables.

Flexible Options Energize Smart Cities

The WAVE platform enables providers to customize applications specifically for the needs of different markets—such as utilities, communities, DER manufacturers, facility managers, residential, and others. Energy customers can evaluate equipment options and estimate returns on their investments by simulating and analyzing potential projects before proceeding.

Cities and municipalities can deploy a distributed energy system with a plug-and-play solution that enables them to:

  • Offer a range of energy services to residents and businesses aligned with their energy policies.
  • Offer standardized microgrid solutions that most closely match the specific needs of energy consumers—including buildings, campuses, communities, and industrial applications.
  • Leverage a configurable and expandable library of microgrid types, DER Assets, use cases, and value propositions.
  • Manage microgrid deployments from concept to operations across a common platform.
  • Automatically calculate metrics and generate reports for end-user and community stakeholders.

Powering an Industry With the IIoT

Spirae incorporated lessons learned from deploying a wide variety of microgrids into standard templates for use with other energy customers. “Microgrids can be deployed for a whole range of applications using almost any distributed energy resource,” Cherian says. “That brings a tremendous amount of flexibility to energy consumers.”

Because renewables can displace fuel cost over the life of a project, municipalities can achieve carbon-neutral power generation and lower costs at the same time. But this requires intelligent software to handle the management of those renewables and eliminate the challenges of interoperability, variability, and intermittency.

“A field-proven control system like WAVE will solve the typical problems that people are concerned about,” states Cherian. “Power flow fluctuations due to renewables and EV charging need not be exported to the grid if you have the right systems in place.”
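One way a control system can keep such fluctuations off the grid is a ramp-rate limiter: storage absorbs or supplies the difference so that exported power changes only gradually, even when cloud cover or an EV charger causes a sudden swing. The following Python sketch is a hypothetical illustration of the concept, not WAVE’s actual algorithm:

```python
# Hypothetical ramp-rate limiter: a battery buffers renewable output so the
# power exported to the grid never changes by more than max_ramp_kw per
# control interval. Illustrative only; not any vendor's control algorithm.

def limit_ramp(samples_kw, max_ramp_kw=10.0):
    """Clamp step-to-step changes in raw output; return the smoothed export."""
    if not samples_kw:
        return []
    out = [samples_kw[0]]
    for p in samples_kw[1:]:
        delta = p - out[-1]
        delta = max(-max_ramp_kw, min(max_ramp_kw, delta))  # clamp the ramp
        out.append(out[-1] + delta)
    return out

# A cloud passing over a PV array: raw output drops 40 kW in one step,
# but the grid sees at most a 10 kW change per interval.
print(limit_ramp([100, 60, 65, 100]))  # [100, 90, 80, 90]
```

The battery makes up the gap between the raw and smoothed curves, which is why intermittency management and storage sizing go hand in hand.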

Of course, running these microgrid applications at the edge requires high-performance computing. Spirae relies on Intel® technology to power the on-premises IoT gateways and back-end infrastructure. The gateways that run on-premises control software interact with the cloud-hosted WAVE platform, which runs on Intel-based server-class systems. And Spirae can scale up its microgrid solutions via a virtualized compute environment.

“Intel in particular offers solutions that make it much easier to manage this type of edge-to-cloud infrastructure,” says Cherian. “These are the touchpoints for us—where its products and solutions really come into play in a big way for rolling out distributed energy solutions.”

In the end, microgrids bring energy closer to the consumer, whether it’s solar on the rooftop, energy storage systems in a house, or an electric vehicle with a smart charger. These types of systems are becoming more pervasive and less expensive. The options are substantially increasing.

“We believe microgrids are a complete game changer,” says Cherian. “You can deploy this technology anywhere in the world for a whole range of applications. It allows clean-energy solutions to be tailored to meet any energy consumer’s needs.”

Elkhart Lake and Tiger Lake Revealed

A conversation with Christian Eder @congatecAG


The Intel Atom® x6000E series and 11th Gen Intel® Core processors—formerly known as Elkhart Lake and Tiger Lake, respectively—are packed with major new features for edge applications. From completely reworked I/O to massively upgraded graphics engines, there is something for nearly every embedded and IoT application.

Join us as we talk to Christian Eder, Director of Marketing at Congatec, about the best ways to deploy these new capabilities. We discuss:

  • Why Intel added an ARM processor to its CPUs
  • How the new chips enable Time Coordinated Computing (TCC) across distributed systems
  • How the upgraded GPUs can power AI applications

Transcript

Christian: We’re absolutely surprised by the performance. This is a single chip, a system-on-chip. And it does provide, you really see, at least from the first feeling here, the performance of the two-chip, high-power versions from past years.

Kenton: That was Christian Eder, Director of Marketing at Congatec. And I’m Kenton Williston, Editor-in-Chief of insight.tech. Today we are taking a close look at the latest Intel Atom and Intel Core processors, formerly known as Elkhart Lake and Tiger Lake. There is so, so much to talk about with these new chips, so let’s get right into it.

Kenton: So Christian, welcome to the show. Can you tell me a little bit about who you are and what you do?

Christian: Yeah. My name is Christian Eder, as you mentioned. Thank you for the chance to talk to you here on a podcast, my first experience. I’m the Director of Marketing here at Congatec, a company which is dedicated to embedded computer technologies, mainly computer modules. I’m also quite active in different industry groups standardizing new form factors, such as COM Express, and now COM-HPC, and also SMARC. So I’m quite active in this embedded scenario.

Kenton: Great. That’s great. I’m going to want to get into a little bit of conversation about some of those form factors here in a moment. But first, I want to get your perspective on what this whole podcast really is about, which is the new Elkhart Lake series of processors. From your point of view, what’s so exciting about these new processors? What makes them different from what we’ve seen before?

Christian: Yeah, of course, those processors are a perfect fit for the computer modules, and also for the single-board computers we make, this new x6000 series of Intel Atoms. Of course, with the smaller structures of 10 nanometers here, it’s got quite increased compute density, although power consumption is still quite low; it’s always shrinking. So we get more performance at the same power envelope, which is great here. Six to 12 watts or so is very good for us. We have four CPU cores, which is good for running things in parallel here, multithreading things. And especially the graphics here, with up to 32 GPU cores, is going to be a significant help also towards AI stuff here, because GPUs can be used for more than just graphics. And of course, a lot of the I/Os got quite fast here. It’s the first time we have PCI Express in generation three, plus generation two of USB 3.1, and with UFS 2.0 we have faster flash technology on our modules here.

Yup. I think that’s quite a lot, not to forget, of course, about the fast memory here. I like this feature with the in-band ECC. In the past, we had different modules with ECC RAM or without ECC RAM. Now you can choose it with this in-band ECC, if you want to utilize it. Of course, it costs a little memory space here, but you get more reliability if you want it, and we don’t need different versions here; both are possible. And for industrial use, maybe the biggest step here of all is the real-time capabilities. So it’s Time Coordinated Computing. The TCC is implemented here in the CPUs, which is really ideal for robotics, industrial motion control hardware, whatnot. And we also utilize the Intel i225 Ethernet controller, which provides TSN, time-sensitive networking. So real-time not just in computing, but also real-time in communication.

And all of this together, of course, is a great platform here when it comes to real-time operating systems. And even this, let’s say, can be used for… how do we call it, hardware consolidation. So with the four cores, we can install multiple operating systems, even real-time operating systems, and run those in parallel. We use the real-time hypervisor from Real-Time Systems here to utilize this. So you bring multiple platforms together on a small, low-power Atom platform. All of this becomes possible.

Kenton: It’s quite a long list of things. I think Intel is appropriately positioning this as one of its first really, truly embedded-oriented chips, versus what we’ve seen in the past: things that are a really good fit for embedded and IoT kinds of systems, but have been fundamentally derived from a PC-oriented platform. I mean, the list of features on this thing, it’s like, this is clearly something meant for IoT applications. Would you agree with that? This is really a pretty different approach from Intel in terms of how custom-tailored this is for embedded?

Christian: Yeah, absolutely. You feel it in each and every little feature; let’s say it’s thought out with industrial use in mind. And the whole feature set is really perfect for industrial users. And you’re right, I mentioned a lot of features. What goes on top always here, when you think about industrial, is the extended temperature ranges. Industrial use is not office use, where if the office is a little warm or a little cold, I feel uncomfortable. If you’re out on the street or in a factory or whatnot, you have a really much tougher environment. And you see this just from the spec here: the use cases, the temperature ranges, are for industrial use, for 24/7 operation. So that’s the big difference, even if you don’t see it at first view, if you just look at the features. But let’s say the industrial use conditions are challenging, and this is clearly addressed by this new platform.

Kenton: Yeah, absolutely. And some of the other features you mentioned really stand out to me. One of them is all of the capabilities around real-time, and in particular, around real-time communications. And this is something where I think we’re seeing a lot of interest at the moment. Of course, in general, for an industrial IoT type of application, one of the critical elements is the communications element. That’s been true for some time. But now that whole concept is really evolving, I think. And we’re seeing a lot of interesting things like what you mentioned, TSN, and not even necessarily over existing networks, but over new networks, such as, for example, private 5G networks within the factory. So how do you see that feature, or feature set, I should say, of real-time communications being utilized?

Christian: Yep. We have tons of applications here when it comes to motion control, robot controls; that’s always real-time critical here. But it’s not limited to those use cases. Also in the test and measurement area, you have to capture the data when it occurs, and there is no second chance to do it. If you miss the sample, it’s lost. And that’s critical for a lot of medical applications as well. In the past, there was a lot of dedicated hardware around to provide, let’s say, real-time capabilities. Now it’s all built in, and you can do quite a lot. We see this also in everything which is robotic. We can bring things together.

I mentioned before this real-time hypervisor, where we can bring multiple operating systems together. So we can have the real-time tasks installed on, let’s say, a single core, which takes care of, let’s say, the motions of a robot. But a robot, of course, nowadays also needs to have some eyes, let’s say, to look around. We have cameras attached to do some AI analytics with the pictures it’s capturing. And this can be installed in parallel on the other cores. Of course, we still talk about an Atom, so don’t expect tremendous frame rates. But there are so many smaller applications where the Atom performance, which did grow quite a lot, I have to admit, is more than enough.

Kenton: Yeah. Can you give me an example of that kind of application?

Christian: Yeah. I’d say if you go into face recognition, for example. We have a sample here running a real-time pendulum, let’s say keeping the balance, and at the same time it does face recognition. Running it on, let’s say, FPGA-accelerated systems, we get about 300 face recognitions per second. But if you only want to recognize, let’s say, is a user in front of me, or is this the right user in front of me, just as a terminal operation, you don’t need 300 frames a second. Three or five frames a second will do the job easily here. And that’s the type of application. We don’t analyze, let’s say, a whole stadium of persons with an Atom; that needs different performance levels. But this stuff is possible on the smaller scale, in small applications or point-of-sale, let’s say, which usually ship in much larger quantities than the really high-end applications.

Kenton: That’s a great example, and just to explain to our listeners a little bit about this pendulum demo: this is something that I have seen, and I think it’s a really great illustration of the real-time capabilities. I’ve seen it in the context of older platforms. But the idea here is you’ve got an inverted pendulum. In other words, the axis of rotation is below the weight, which is extended upwards on an arm. And you’ve shown me before, in person even, a demo of this thing where you can knock the pendulum around and the system will restore it to its upright state. And you can do other things in the meantime beyond that balancing of the pendulum, really showcasing the ability to do multiple real-time workloads in parallel.

And what I’m wondering is, you mentioned having the four cores. The capabilities of the current processor, in terms of its real-time capacity and just generally its performance, how does that compare to prior generations? Is this an evolution or more of a large step up?

Christian: Yeah. Let’s say in between: it is a performance gain of about 50% plus or so, and that, of course, you clearly can feel. But maybe to come back to this little demo you mentioned before. This pendulum needs to be controlled, and the computer has to react, let’s say, each millisecond or even more often; otherwise, it will lose its stability and fall down. If you skip some samples here, it will fall down. That’s the thing. This was installed on just one core. Then we used another two cores to do the camera face recognition.

Here, we use the Intel OpenVINO platform, which is great. You can even utilize the GPU to accelerate the recognition algorithms. And so there was one more core left, and this core was the connectivity core. It drives the connection to the cloud and also acts as a firewall to keep communication on a secure level. So I believe this is a quite typical example of what you can do with these four cores: use one for secure communication, use another one for your real-time task, and then you still have two more left for your user interface or for your extra computing on top.

Kenton: Yeah, I agree with that. And I think you highlighted something about this earlier, which is the idea that individually, none of these tasks is necessarily all that heinously difficult. But being able to do them all on one platform is quite advantageous in terms of having a system that is lower cost, lower size, lower power, and presumably could even be a little bit easier to design. Would you agree with that?

Christian: Yes. That’s the whole idea of this hardware consolidation. In the past, it used to be three different boxes being wired up with some cables, and some Ethernet switches and whatnot. Now you can bring all sorts of applications together in one tiny, low-power box. So there’s no chance to rewire some cables and switch off the firewall, for example, just for test purposes, and forget to switch it back on again. All of this is no longer possible if it’s all together on one platform. And of course, it’s a tremendous saving of hardware cost. It’s just one system, much easier maintenance, and everything together on one platform. So I totally agree this makes sense, and we will see more and more of those applications. It’s not a new technology, so there are lots of live installations, in mission-critical applications even, and it’s done its job for quite a while.

Kenton: Yeah, exactly, exactly. In some senses, just from the raw CPU point of view, this is a noteworthy improvement. But it seems like a lot of the capabilities that surround the CPU are really what make this platform interesting. One that you mentioned is that the GPU performance has tremendously improved. And I believe you can take advantage of that with the OpenVINO platform you just mentioned. Is that right?

Christian: Yeah, absolutely. This gives a real boost and allows you to run AI algorithms at a reasonably good speed. I have to say it always depends on the details and the complexity of the task, but let’s say an average task can be performed quite well. Of course, the GPU can also be used to drive a display; don’t forget about that one. 4K is fully supported, up to 60 hertz on two channels. That’s great for an Atom platform. But the even bigger advantage is in this AI acceleration.

Kenton: Yep, exactly. We’re not talking about gaming PCs here, so the GPU is not important from that perspective. But it’s great for AI.

Christian: It’s good, but it’s not in the gaming class.

Kenton: Well, I don’t know. You probably can at least run Doom on this.

Christian: Yeah. I’m pretty sure it will run nicely if you find the version which runs on Windows 10.

Kenton: There you go. I’ve seen it run. You wouldn’t believe all the places I’ve seen people run Doom. I saw somebody actually put it on a pregnancy tester that had a little LED screen. So people have done all kinds of crazy things.

Christian: I’m pretty sure it would. But honestly, with newer platforms, not all of the older operating systems are still supported. And some of those operating systems are no longer supported by their vendors either. So when you step up to the latest and greatest hardware platforms, sometimes you also have to step up your software levels, especially the operating system. But that’s also a security point. It must be done; I think there’s not much way around it.

Kenton: Yeah, for sure, for sure. And I want to talk a little bit more, in fact, about that multi-OS approach you just mentioned. Because as we move forward with more and more designs that have complex combinations like the one we’ve been talking about, with motion control, some computer vision, some pretty sophisticated communications, firewall capabilities, people will of course have all kinds of existing equipment in the field and may need to bring their legacy applications onto this thing. Those legacy systems may be running on a bunch of different OSs, because, as we’ve discussed, the existing systems may be spread across three or four or five boxes. And then of course people might want to use a more modern operating system for all the reasons you’ve mentioned. So how do you actually go about pulling together all these disparate systems, on old and different OSs, onto a new platform?

Christian: This hypervisor supports almost any operating system. And if you’re under pressure, or you have no chance to change to a new operating system, maybe because the source code for your application was lost or whatnot, then it’s still possible to run the older operating system behind a firewall. On a hypervisor system, you can run an old operating system and still maintain this firewall, so it’s not directly connected to the outside world.

This firewall, which might be a small-footprint Linux, for example, can of course be kept up to date. So the security part can be installed and updated independently of the operating systems themselves, because the hypervisor is organized so that it’s not just the CPU cores that get dedicated to the different operating systems; the other resources do too. The Ethernet controller, which connects to the outside world, is just another resource, and it gets attached to the security operating system, this firewall Linux. From there, all communication to the other operating systems happens over internal virtual Ethernet connectivity, which means none of the other operating systems can talk directly to the outside world. Each and every bit of traffic has to pass through this firewall application.

That’s one way to maintain even older operating systems if you have no way around them. Of course, if you can update to the latest version, that’s better: you get good security from the operating system itself and improve it with this extra firewall. That’s just one typical application. You can combine almost any operating systems together. The only exception I have heard of is that you can’t combine multiple Windows operating systems, but that comes down to Microsoft licensing.
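The routing rule Christian describes, where every guest OS reaches the outside world only through the firewall partition, can be modeled in a tiny sketch. The VM names and the topology structure below are purely illustrative; they are not the actual RTS Hypervisor configuration format or API.

```python
# Toy model of the hypervisor's virtual-Ethernet topology described above.
# Only the firewall VM owns the physical NIC; every other guest gets a
# virtual NIC and reaches the outside world via the firewall. All names
# here are hypothetical examples.
TOPOLOGY = {
    "firewall_linux": {"nic": "physical"},  # small-footprint Linux, kept updated
    "legacy_os":      {"nic": "virtual"},   # e.g., an unsupported older Windows
    "rtos":           {"nic": "virtual"},   # real-time control partition
    "hmi_os":         {"nic": "virtual"},   # user-interface partition
}

def egress_path(vm):
    """Return the hop sequence a packet from `vm` takes to the outside world."""
    if TOPOLOGY[vm]["nic"] == "physical":
        return [vm]
    # Guests have no direct route out; all traffic must pass the firewall.
    return [vm, "firewall_linux"]
```

The design point is that the Ethernet controller is just another hypervisor resource, so attaching it exclusively to the firewall partition enforces the routing rule in hardware assignment rather than in each guest’s configuration.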

Kenton: Yes, sometimes it’s the lawyers and not the technology that gets in the way, right?

Christian: Mm-hmm. Right. In this case it’s not a technical limitation. You’re right.

Kenton: Well, that’s really cool. And I have to make sure I mention the hypervisor capability you’re talking about; that’s part of what Congatec offers, right?

Christian: Actually, this hypervisor is created and maintained by a company called Real-Time Systems. It’s an independent company, but it’s owned by Congatec. And it doesn’t run only on Congatec hardware; it runs on any x86 hardware. But of course, since we do quite intense testing and the engineering teams work closely together, especially in the testing phase of new products, we always ensure that the combination of Congatec hardware and the real-time hypervisor runs perfectly.

Kenton: Perfect. That makes sense. So just to recap what I’ve heard so far: we’re talking about the Elkhart Lake platform, which is the new Atom x6000 family, along with some Celeron and other brand names that go with it. Some of the things that make this pretty interesting and different from what’s been around before: you’ve got up to four cores, with a pretty significant improvement in performance over previous generations. You’ve got the GPU, which is a quite dramatic improvement, useful not only for creating visuals but also for things like executing AI. You’ve got improved I/O that’s going to be really useful for IoT use cases: PCIe, the faster flash, the ECC memory. And I think all these things are pretty amazing.

I have to say one of the things that is most surprising to me is that there’s this new thing called the Intel programmable services engine. I’d like you to explain a little bit about what that is. It’s particularly unusual and surprising to me in that it’s built around a dedicated ARM microcontroller, which is quite a new feature compared to what we’ve seen before. So what is this programmable services engine about?

Christian: I’m not the engineer behind this, but my understanding is that the PSE is, let’s say, a small ARM controller already integrated in the CPU, which can give a helping hand with some parallel tasks. This is something Congatec started to do about 15 years ago: each and every module and board is equipped with a little microcontroller, in fact a small ARM core these days, which handles embedded tasks like system monitoring. It’s completely independent from the CPU. It helps to boot up by doing the power sequencing. It generates the I2C bus, for example, which is not a standard feature of the CPU. And none of this takes compute performance away from the CPU.

All of this, the watchdog timer for example, is completely independent from the CPU; it’s all integrated in this little ARM controller. And my understanding is that you can do the same or similar features, maybe even more, on this little integrated controller, the PSE. Right now it’s important for us to have the same feature implementations throughout all of our platforms, so we still use our own ARM implementation, which has been successful and well known for, as mentioned, over 15 years. But if there will be more and more features like this in upcoming platforms from Intel, sure, we’ll think about bringing them to the advantage of the whole system. Right now we even use our controller to improve out-of-band management, meaning remote control: even if the CPU is not completely up, you still have a certain access to the system for manageability. It’s a forward-looking feature, not yet completely utilized by platforms, but I’m looking forward to getting more of this in the future.

Kenton: It seems like in a lot of ways this is providing, to your point, what previously would have required a companion chip to do the low-level management of the board. For example, one of the things you mentioned was out-of-band management, which I think is a pretty neat capability: it allows you to remotely monitor the status and fix things should something go wrong on your system.

Christian: Exactly. Having a little embedded controller like this is a typical embedded feature, and it’s another clear sign of the embedded thinking behind this new platform.

Kenton: Yeah, absolutely. Absolutely. I want to go back and talk a little bit more about some of these real-time topics. We’ve talked a bit already about the significance of having all these real-time capabilities on board: different cores doing things in parallel in real time, a little companion ARM engine doing other things in real time. But just as important, and we’ve touched on this only a little, are the off-system real-time capabilities. In other words, you’ve got the real-time control, but you’ve also got the real-time communications side of it. I wonder if you could say a little more about what your customers are asking for there and what you’re providing. What do you see as new in terms of time-coordinated computing, being able to do things in a distributed fashion across a factory, or whatever the context may be?

Christian: Yeah, the big advantage of TSN, the time-sensitive network, is that you can utilize your existing Ethernet infrastructure, or at least the cables you have; you may need to upgrade some switches and whatnot. In a nutshell, you have a standard Ethernet cable and you reserve part of its bandwidth for real-time traffic. We’ve done a demo at trade shows where we had a traffic generator that could really overload the cable. We reserved about 20 percent of the bandwidth for real-time traffic, leaving, let’s say, 800 kilobits for normal traffic: streaming videos and whatever else you want to do on it. And for the real-time control to communicate with the other robots, from robot to robot, things must happen in real time.

There was a reserved channel bandwidth of about 200K. And no matter how much streaming traffic we put onto the link, there was no recognizable jitter or delay; the real-time traffic just went smoothly through, while the other channel was completely overloaded and the video was no longer running because the channel was so full. Which means you can use the existing infrastructure and bring it to a new level by sharing the bandwidth between normal and real-time traffic. And I believe that’s a big advantage over a lot of the existing fieldbus standards, which all target real-time communication but all need their own infrastructure. The big advantage of TSN is the existing infrastructure. And we see big demand, since each and every automation company is working on, or already has, implementations of TSN.

It’s still a standard where some details are in definition; it’s not yet completely done, but most of it is done, implemented, and can be used today. The Ethernet controller itself is one part, but of course you need the rest of the infrastructure, like the switches, which also need to be TSN-capable. And to be honest, the complicated part of the whole TSN story is in fact the configuration, which must happen on-site depending on your individual infrastructure. Other than that, using TSN is quite simple. The demo we set up for the trade show was not too complicated.
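The trade-show numbers work out as a simple bandwidth split. The sketch below is a back-of-the-envelope model only; real TSN reservation happens in the switches and endpoint schedulers (for example, IEEE 802.1Qbv time-aware gate schedules), not in application arithmetic, and the link rate here is assumed for illustration.

```python
# Back-of-the-envelope model of the demo: a link with roughly 20% reserved
# for real-time robot-to-robot traffic, leaving the rest for best-effort
# streaming. The 1 Mbit/s figure is an assumption matching the transcript's
# ~200K reserved / ~800K normal numbers.
LINK_KBPS = 1000          # total link bandwidth (assumed for the example)
RT_SHARE = 0.20           # fraction reserved for the real-time traffic class

rt_kbps = int(LINK_KBPS * RT_SHARE)       # guaranteed real-time channel
best_effort_kbps = LINK_KBPS - rt_kbps    # streaming etc. competes here

# In an 802.1Qbv-style schedule the same split becomes time windows within
# a repeating cycle: e.g., a 1 ms cycle with a 200 microsecond real-time gate.
CYCLE_US = 1000
rt_window_us = int(CYCLE_US * RT_SHARE)
```

The point of the demo is that the reserved channel keeps its latency bound no matter how saturated the best-effort share becomes, which is exactly what the gate schedule guarantees.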

Kenton: Very cool. Very cool. I mean, this all sounds great. But I want to ask about how developers and engineers can get started most quickly and effectively and taking advantage of all these new capabilities. What would you recommend?

Christian: If you want to play with, or start working with, Elkhart Lake and the new Atom x6000 series, the easiest way is to do it on a single-board computer. We have implemented this on a quite tiny Pico-ITX board, just 72 by 100 millimeters, with two Ethernet ports on it. It’s plug and play; you can start immediately. Of course, we also offer this in other form factors, depending on the customer’s experience or design history. We have it on COM Express: the very tiny COM Express Mini and also the COM Express Compact, two flavors with, let’s say, more or fewer I/Os and more or less real estate for the module. We also maintain the Qseven standard, so it’s available on Qseven, and on the SMARC module standard as well. So that’s five different implementations; I think there’s one for each flavor. It’s a really wide selection.

So I believe the easiest would be starting with the single-board computer, the Pico-ITX. Or if you want to start with one of the module form factors, then depending on your I/O needs, one or the other might be best for you. We offer complete reference and evaluation carrier boards where the customer can start testing immediately. And as an extra goodie on top, for all these carrier boards we also provide the schematics to customers. Based on those schematics, a customer can start their own carrier board design very quickly and easily, using ours as a blueprint, adding the things they need and removing what’s not required for their application. That’s a quick start into the whole development.

Kenton: And that’s great. So I think this approach makes the boards and modules we’re talking about flexible enough to be suitable for just about any kind of application. But I’m wondering if there are particular areas you think would be especially well suited to these solutions. We’ve talked about some, like industrial control, and some visual applications like digital signage. So what are some of the target applications where you think these solutions would shine?

Christian: Of course, there are a lot of use cases in the medical environment, but also some you wouldn’t think about, in the gaming industry or even the audio industry. We have a podcast here; audio recording and audio processing are quite challenging in real time. So we in fact have customers who build professional audio equipment based on computer modules, and of course they take advantage of these real-time capabilities. Overall, you see these things all over the place when it comes to graphics output, something like digital signage in trains, in airports, whatnot. The graphics capabilities and the high resolution of this Atom are quite helpful there. And the low power consumption is always helpful in each and every environment. You name it; the customers usually surprise us.

So the beauty of the computer module is that everything compute-oriented is there, and the customization for the individual application or business segment happens on the carrier board. And this is absolutely flexible. It’s something between off-the-shelf and customized: one part is out of the box, the other part is tailored. But it’s much, much easier than doing everything from scratch to utilize one of the existing computer modules, or even a single-board computer, which is easier still because you don’t have to do a carrier board at all.

Kenton: Got it. That totally makes sense. I want to make sure I touch on one other thing: we’ve talked so far about the new Atom processors, but Intel has also announced its latest and greatest in the Core series, the 11th Gen Core, also known as Tiger Lake. So can you tell me, just in brief, what is new with Tiger Lake?

Christian: Yeah. Congatec is a very industrial-oriented company, so the biggest thing for us is the industrial use case. It’s the first time that we have Intel Core processors fully specified for minus 40 up to plus 85 degrees Celsius, this wide, extended, industrial, whatever you call it, temperature range. On top of all the architectural and performance improvements, this is for us the most important point.

Kenton: Yes, absolutely. All the performance features in the world are not so useful if the part can’t take the heat.

Christian: Right, right. And it’s a very good power envelope. We’re absolutely surprised by the performance. This is a single-chip system-on-chip, and, at least from first impressions, it provides the performance of the two-chip, higher-power versions from past years. The graphics especially went up quite a lot, which, as we mentioned before for Elkhart Lake, is a very big step that enables more power-hungry AI applications on this platform.

Kenton: Yeah, exactly. From the consumer side of things, I’ve been seeing reviews of the Tiger Lake, 11th Gen Core family, and people have been very, very impressed with the performance they’re getting out of the graphics engine, really citing it as a huge leap from previous integrated graphics solutions. From an embedded/IoT perspective, of course there are going to be applications like a video wall where that could be interesting, but also, like you said, using these capabilities for AI with the OpenVINO platform is a big, big boost in performance. So where with Elkhart Lake you can do modestly challenging things, I would imagine that with Tiger Lake, the latest Core, you could do really quite a lot of AI processing on that platform.

Christian: And of course the I/O performance: it’s the first time we have PCI Express Gen 4. That’s another big step, another doubling of performance compared to Gen 3, which already offered great performance, and we get USB4 with Thunderbolt, also a tremendous step up. And, let’s say, I’m really proud, as the chairman of the technical committee for COM-HPC, that Tiger Lake is on the first COM-HPC board we announced. Of course we also have it on COM Express, to maintain what’s there; COM Express is the most successful module form factor ever. But COM-HPC starts on top of where COM Express ends. For example, on COM-HPC we have support for USB4. On COM Express, USB4 support is simply not possible: the pins are not defined, and there’s not enough space to bring it out.

Kenton: Yeah. And so, to wrap this all together, it feels to me like this is an interesting juncture. I think you and I have both been in this industry long enough to have seen periods where every processor announcement was presented as something really revolutionary, with dramatic performance improvements from generation to generation. In the last couple of years it has seemed more evolutionary, with a bit less excitement around some of the newer processor releases. But some of the individual features here, the I/O we’ve been talking about, the graphics capabilities, this programmable services engine, are quite remarkable. And in a lot of ways these two platforms, Elkhart Lake and Tiger Lake, are culminations, bringing together a lot of important technologies that individually may or may not be that exciting but together add up to a pretty dramatic set of improvements. Would you agree with that?

Christian: Absolutely. There are major steps on the low-power side with the new Atom, and on the higher power envelope, higher performance level, with the 11th Gen, industrial-grade for the first time. And having all of this on modules, you can upgrade and bring it to applications very fast. That’s the whole idea behind Congatec: to bring this technology simply and easily to customers. Because the most important thing nowadays, I believe, is time to market. The faster and better we can support a customer, the faster their application will be on the market, and the more successful it will be for the customer and for Congatec.

Kenton: Absolutely. All right. Well, before we run out of time, I should give you an opportunity if there’s anything I haven’t been clever enough to ask you. Is there anything that you’d like to add?

Christian: Actually, we talked quite a lot. I think we went through almost everything I had on my mind. With these two launches, we have a complete refresh of the whole platform lineup. And I’m pretty sure each and every customer will find advantages in stepping up to these new technology and performance levels.

Kenton: Yeah, absolutely. I totally agree with that. Like I said, I think just across the board, there’s so many significant new capabilities and features here that there’s going to be something for everyone to really benefit from. So that just leaves me to thank you, Christian, for joining me today on the podcast. Really appreciate your time and perspectives.

Christian: Thank you, Kenton. It was my pleasure.

Kenton: And thanks to our listeners for joining us.

This has been the IoT Chat podcast. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app.

We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

Cloud-Enabled IoT Gives System Integrators an Edge

Companies today often have two objectives when it comes to improving operations: increasing innovation while decreasing costs. It can sound like an impossible combination, unless you look to the cloud. The IoT is rapidly expanding the scope of what’s possible. By linking proven edge technology with plug-and-play cloud services, companies can forgo the expensive design and implementation of elaborate systems and get to the next level. And systems integrators can guide their way.

For example, an SI working with a hospital in China uncovered a new opportunity to help the facility streamline its operations. The hospital wanted a better way to deliver meals and medication to patients by using robots. Not only does the new system automate the process; it keeps patients and hospital workers safer by reducing exposure and touchpoints.

In another case, a retailer wanted to maximize the return on investment from the digital signage it had throughout its stores to create better shopping experiences and new marketing opportunities. But while the SI specialized in retail and had expertise in selling digital displays, it was not familiar with how to connect the system to the cloud. In this case, the SI needed help in upgrading the digital displays, deploying all the needed in-store hardware, and adding cloud support as well.

Edge-to-Cloud IoT Projects = New Opportunities

Hospital robots and smart retail signage require complex infrastructure and domain knowledge to execute. In the past, SIs may have had to walk away from opportunities due to a lack of expertise. While they may have had a deep understanding of a specific vertical market segment, such as retail or healthcare, they often lacked the resources or knowledge needed to provide a complete solution for the customer—until now.

Today, SIs can turn to solution aggregators that can help them better serve their customers’ fast-changing business needs. Not simply product distributors, aggregators are value-add partners that can help SIs grow their businesses and broaden their vertical coverage by offering information and access on new technologies and services.

Digital China, for example, is a provider of the latest edge solutions and cloud services. The company has a depth of knowledge that gives SIs and end customers access to customized cloud management services and software as a service (SaaS). Its deep relationships with cloud service providers such as AWS, Alibaba, Microsoft Azure, and others enhance the company’s ability to lead the way in delivering edge-to-cloud solutions.

Digital China can provide SIs information and operations technology solutions across many verticals. In fact, it is the aggregator that stepped in and helped the SIs design the hospital robot and AI-enabled interactive digital signage systems, with a technical and service staff to provide expertise and support.

“Looking forward, Digital China will continue to build our technical team for Intel® related programs, which along with our Intel partnership can help SIs acquire new business quickly,” says Lihua Chen, Marketing Director, Microelectronics System SBU, Digital China Group. “And we will work with the SIs every step of the way.”

SIs and end customers have access to customized cloud management and SaaS services thanks to Digital China.

IoT Solutions Belong in the Cloud

Instead of buying solution components from different vendors, working with Digital China to implement cloud-based solutions can be a better way for SIs to go. Systems are not limited to one physical location, and scaling is often as simple as a quick download of software.

SaaS models let businesses preserve cash flow by replacing up-front capital expenditure with software subscriptions. Thanks to that scalability, the SaaS market is forecast to grow to $116 billion by the end of 2020, and SIs can get in on the trend and ahead of the competition when they partner with an aggregator like Digital China.

Partnerships Are the Key to the Future

One of the ways solutions aggregators stay current with the pace of technology is through strategic partnerships. For example, Digital China leverages ready-to-deploy, end-to-end cloud platform solutions such as Intel® IoT Market Ready Solutions and Intel® IoT RFP Ready Kits, helping SIs solve customers’ pain points with proven concepts.

Aggregators like Digital China help SIs serve an enterprise’s full IT environment lifecycle, from deploying the latest hardware to managing software releases. In the IoT world, cloud services are relatively new, and the company will continue to accelerate in this area.

A rising tide lifts all boats, and the aggregator-SI relationship is a partnership for the future. By working as a team, they can better service customers and acquire new business opportunities together.

Industrial Summit Review: Tech Spurs Growth in Tough Times

This has been a challenging year for all of us, following a pandemic that shook the world. The global shutdown of business and personal life has shown us all just how much we rely on technology simply to function, let alone function normally.

With this said, technology is driving the world forward at such a lightning-like pace that it can be hard to imagine how any business could catch up. This is why it was so refreshing to see what Intel had in store for us at the Intel® Industrial Summit 2020. (The recorded sessions are available for your perusal. Check out this guide to the Intel Industrial Summit 2020 for tips on what to see.)

The two-day global summit focused on Industry 4.0, showcasing the latest advancements in manufacturing and distribution, including significant new chipset announcements. Speakers from industry leaders like Siemens provided insights not just from a global perspective but with a regional focus as well, which in my opinion is vital.

Inspiring Keynote

Right from the get-go, the keynote from Tom Lantzsch, Sr. VP and GM of the Intel IoT Group, was highly educational. He explained the past three years of the company’s strategy for the Industrial Internet of Things (IIoT). Back in 2017, Intel started to move into the world of High-Performance Computing, looking at the way data is used at the edge of any network—critical to every manufacturing and distribution business.

In 2018, Intel released a suite of toolsets that enabled the developer community to tap into edge data like never before with the release of the OpenVINO toolkit. Now engineers could receive a multitude of insights from their previously untapped data from the edge. In 2019, Intel scaled these capabilities within verticals and across its partner network.


Now fast-forward to today. Intel has not only a comprehensive development kit for edge devices, tested within multiple market verticals, but also the software and hardware to accelerate business automation and cost reduction (much needed at this time).

Though many have been working toward the Industry 4.0 model, some have sat back and watched the early adopters. Now we can all stand to attention and truly take note. This is a must-do move, not a “let’s see who adopts first” approach. Adopting Industry 4.0 principles not only produces a streamlined, cost-efficient business; it creates a more sustainable approach for when any unexpected event occurs, with the whole of 2020 as a great example!

As noted earlier, the event also featured talks and demos from numerous Intel partners. For more on that, see the insight.tech coverage of must-see tech at the Intel Industrial Summit 2020.

Impressive New Intel® Processors

Other noteworthy information included the release of two new chipsets, designed with IIoT and smart automation in mind. First, the next-generation Intel® Atom® processor, boasting nearly double (1.7x) the single-thread performance and up to double the graphics performance.

The second chip was the IoT version of the 11th gen Intel® Core processor. These CPUs deliver up to a 23 percent improvement in single-thread performance, a 19 percent boost in multithread performance, and nearly a three times jump in graphics performance over the last-gen processors.

Both chipsets announced at the Intel® Industrial Summit were designed with the IoT in mind. Industries such as manufacturing, transportation, retail, healthcare, and hospitality will benefit significantly from the chipsets, along with the edge application software that Intel provides.

Audi Showcases Automation

But the proof of the pudding is in the eating (as we say in the UK), so I was eager to see the session that demonstrated how Audi had adopted the Industry 4.0 Smart Factory approach with Intel. Audi teamed up with Intel to streamline production of its vehicles, making its factories more efficient while reducing labor cost by up to 50 percent.

This all started with checking weld spots on car chassis. More than 5 million weld spots are made every day across its manufacturing plants, and checking them was a very labor-intensive task. With the help of Intel, Audi automated this process, and the cost savings speak for themselves.

In summary, the Intel Industrial Summit 2020 was enlightening in more ways than one. Listening to regional representatives who have implemented smart factory solutions hit home for me. Data security, local legislation, speed of access, and economies of scale were but a few of the challenges mentioned. With the release of the new processors from Intel, coupled with the supporting hardware and software stacks from Intel and its partners, these issues are largely mitigated.

For those of you reading this article and who are still debating moving to an Industry 4.0 model, take some time and watch the on-demand replay of the Intel Industrial Summit 2020. The technology has evolved. Isn’t it time your business did the same?

Must-See Tech at Intel Industrial Summit 2020

Want to see the latest IIoT technologies, tools, and solutions in action? Then be sure to check out the Intel Industrial Summit 2020. You’ll get an inside look at transformational Industry 4.0 use cases—powered by the latest Intel IoT hardware and software—by visiting virtual booths and seeing live demos.

With more than 40 partners across North America, Europe, and Asia, you’ll see the latest Industry 4.0 inventions, based on AI, machine vision, 5G, TSN, and more. The event is a great opportunity to discover how developers are taking advantage of software tools like Intel® Edge Controls for Industrial (Intel® ECS) and Intel® Edge Insights for Industrial (Intel® EIS) on the latest Intel® processors.

Don’t miss this free virtual event, with four hours of great info over just two days—September 23 and 24.

Witness the Potential of 5G and TSN

With its low latency, high throughput, and QoS support, 5G is becoming a key factor in the IIoT. See how the combination of products from Siemens, TTTech, HiWin, Bosch, and others creates industrial-grade private wireless TSNs with the help of Intel’s 5G prototype systems.

In this demo, you’ll get a first look at an end-to-end system that:

  • Guarantees TSN performance in a mixed traffic environment
  • Supports TSN synchronization with 1 µs accuracy over a 5G system
  • Integrates with endpoints running ECS and EIS stacks

Speaking of TSN, Kontron will show off its TSN extension solution—a starter kit that offers easy entry into Time-Sensitive Networking. The HW/SW upgrade system works with existing PLCs, IPCs, gateways, and industrial servers, simplifying integration of endpoint devices into a TSN network.


Discover the Potential of Industrial Tools

To get a leg up on your AI and ML development, take a look at the Axiomtek AI Starter Kit. You’ll see how the company uses new features of the EIS Training Module with the Intel® OpenVINO™ toolkit to accelerate industrial vision optimization at the edge.

For even more ideas on what you can do with EIS, IEI Integration Corp is showing a product inspection data flow model from EIS to the Splunk Enterprise analytics platform. You’ll see how EIS ingests, processes, and stores image data from camera input, and integrates AI-based automated optical inspection (AI-AOI) algorithms that leverage OpenVINO.
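The data flow described above can be sketched in a few lines. This is a hypothetical, self-contained illustration of the ingest-classify-forward pattern: the inference step is a stand-in (in the real demo, EIS runs an OpenVINO model on camera frames), and the JSON events represent what might be forwarded to an analytics platform such as Splunk.

```python
# Illustrative ingest -> classify -> forward pipeline. The classifier is
# mocked; real deployments would run OpenVINO inference on each frame.
import json

def classify_frame(frame: bytes) -> dict:
    """Stand-in for model inference: label a frame pass/defect.
    Hypothetical rule purely for illustration: odd-length frames are 'defect'."""
    label = "defect" if len(frame) % 2 else "pass"
    return {"label": label, "size": len(frame)}

def to_analytics_event(frame_id: str, result: dict) -> str:
    """Serialize an inspection result as a JSON event for an analytics platform."""
    return json.dumps({"id": frame_id, **result})

# Simulated camera input: two raw frames of different sizes.
events = [to_analytics_event(f"frame-{i}", classify_frame(frame))
          for i, frame in enumerate([b"\x00" * 4, b"\x00" * 5])]
for e in events:
    print(e)
```

Separating ingestion, inference, and forwarding this way is what lets a stack like EIS swap in different models or analytics backends without reworking the rest of the pipeline.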

Now, imagine the possibilities for innovation that come with transitioning industrial control systems to software-defined solutions. You can see what Hitachi Industrial Products is up to with its software-defined robot control concept, using ECS and OpenVINO.

While we’re talking about robots, find out how Advantech SoftPLC and SoftMotion software make it easier to control and manipulate the motion and speeds of machinery and robots. In one case, you’ll see how a modular controller streamlines factory automation with motion and vision integration. And check out the panel controller that helps machine builders translate 3D CAD/CAM files into motion process path control.

Find New Routes to Market

How do you get all these great innovations to market? One way is with Intel® IoT Market Ready Solutions. For example, take a look at the Noodle.ai solution that enables giants in steel milling (and other industries) to integrate their full spectrum of OT and IT systems. Learn how the company’s data science platform captures and centralizes data into a unified, time-and-space-aware system to detect anomalies and connect findings to machine learning algorithms.

You’ll also want to see another Market Ready Solution in action: the ABB SSC600, a smart substation control and protection solution that runs on a single Intel-powered, ruggedized controller. Take a look at how the system supports a whole range of applications in a single box. What’s more, it’s cloud managed, meaning truck rolls are a thing of the past.

The Intel Industrial Summit 2020 is the premier, must-attend event for any company that wants to remain relevant in this new industrial age of AI, autonomy, and optimization. Don’t miss this exciting, thought-provoking event. Register today!

AMERICAS: September 23-24 Register here

EMEA: September 23-24 Register here

PRC: September 24-25 Register here

APAC: September 24-25 Register here

Japan: September 24-25 Register here